The 2014 midterm elections are rapidly approaching. Next week many Americans (well, probably more like 40% of them) will head to the polls to decide the composition of Congress for the remainder of Barack Obama’s presidency. Healthcare, immigration, and the administration’s handling of the Iraq/Syria conflict and the latest Ebola outbreak will serve as prominent avenues of attack for the GOP and points of retreat for many Democratic candidates.
First, you have a near-unanimous forecast of an overwhelming Republican victory. Estimates currently place the chance of Republicans consolidating a Senate majority at various degrees of likelihood, most exceeding 50% by a very comfortable margin and increasing steadily since about mid-month. The Washington Post places the chances of a GOP victory in the Senate at 93%, The New York Times puts them at 66%, YouGov at 63%, and FiveThirtyEight at 62.3%. While this is certainly in line with the majority of the statistical models, the (editorial, not mathematical) certainty with which these predictions are being touted is unusual. Consequently, the dominant narratives have been those featuring liberal America lethargically resigning itself to a predetermined result.
Interestingly, the introduction of “big data” to mass media and the rise of the infographic have given a new angle to the American love of statistics and cemented the role of the pollster in contemporary politics. It seems as though the “triumph” of nerd-prophet Nate Silver in predicting the outcome of the 2012 Presidential election has inspired a sense of reverence among a new generation of Americans (at least those who consume news media) for the projections of popular number-crunchers.
Backing up these relatively dense and technically-oriented predictions has been a series of pieces reminding the electorate of their conclusions. As far back as September, Fox News headlines highlighted the “gloomy” Democratic prospects, Hill contributors were certain of the GOP’s “big victory in the Senate, House and statehouses”, and Bloomberg editors jumped at the chance to “be the first to congratulate Republicans on their victory“. While I am certainly not suggesting that the Democrats are likely to win (or even retain control of the Senate), a dismal Democratic midterm performance may not be as certain as the President may think.
The battle for control over the trajectory of American politics exists much as it has for the past few decades. The intense political polarization of Americans is well-documented and looks to be increasing with each passing year. I’ve written before about how so-called “Independents” are really anything but, and about how the decreasing number of swing voters in America is fast becoming a defining element of electoral strategy, two points raised by Lee Drutman and Mark Schmitt in their Washington Post article “The 2014 campaign is a campaign about nothing”. As familiar as we now are with this theme, Drutman and Schmitt astutely judge that the acute lack of “ideological overlap” between parties and the lack of incentives for aisle-crossing centrism are driving the high-cost/low-substance character of the 2014 elections. The intense and deep-seated polarization in Congress reflects the American flight from the center.
Besides, there’s always the presidential election looming in the distance. And if you think that the problem with this election is a lack of exciting new blood, just wait until we’re confronted with Bush vs. Clinton in 2016.
On May 28, President Obama delivered an address to the graduates of the US Military Academy at West Point. His speech focused heavily on the Administration’s approach to foreign affairs for what remained of his second term in office. The speech featured about as much of the minimally subtle sabre-rattling, chest-beating bravado and lofty appraisals of the United States’ capacity and intention to lead the international community as one would reasonably expect from any modern Commander in Chief. However, these remarks also featured an unmistakably subdued tone, a palpable air of cynicism that betrayed the President’s meagre appetite for risky foreign meddling of any sort. Tempting fate is clearly not on Obama’s second term agenda.
The West Point speech marked a significant moment in the Administration’s attempts to translate an increasingly calculated approach and progressively less ambitious worldview into a cohesive foreign policy that will be remembered as the definitive Obama Doctrine. It has become a surprisingly difficult challenge for a President who ascended to the White House on promises to improve the way the United States leads on the international scene and to make the reforms necessary to combat the rapid decline of the nation’s image in the eyes of the world’s population. In this respect, Obama’s presidency began on a much more confident note, with the Commander in Chief appearing to rise to the challenge of maintaining the country’s position of leadership while modifying the character of its guidance. This was perceived by many as a return to the triumphant and conscientious American leadership of the golden past. In his acceptance speech for the 2009 Nobel Peace Prize, Obama laid the framework for America’s active role in combating evil as global defender of the righteous:
We must begin by acknowledging the hard truth: We will not eradicate violent conflict in our lifetimes. There will be times when nations – acting individually or in concert – will find the use of force not only necessary but morally justified.
These remarks were certainly made by a very different President, one displaying very few qualms with the mobilization of the United States’ overwhelming military might in noble leadership of the international community against threats to global order and well-being. While Obama has certainly not shied away from his firm commitment to American exceptionalism, explaining in his West Point remarks that it is something he believes in “with every fiber of [his] being”, he has made a visible departure from the ambitious and moralizing rhetoric that was a trademark of his early Presidency.
Instead of playing the traditional role of advocate for active and benevolent intervention, he has embarked upon what amounts to a grand campaign of damage control, an effort that has helped to shape the attitude of the government in facing contemporary conflicts, most notably Iraq and Syria. Indeed, the most poignant statement to come from the President’s speech at West Point addressed the need to avoid relying on the military as the nation’s primary problem solving tool:
Just because we have the best hammer does not mean that every problem is a nail.
The difference in tone between these two quotations is dramatic. In lieu of the lofty ideals of “hope” and “change” that propelled Obama to the White House in 2008, it’s becoming increasingly likely that the Obama Doctrine referred to by future generations will have taken on a much less glamorous character.
Obama used the West Point speech as an opportunity to convince the world of the merits underpinning what will likely come to define the foreign policy outlook of his Presidency to future generations. While the White House defines his doctrine as “both interventionist and internationalist, but not isolationist or unilateral“, the President himself has a far more forthright way of describing his outlook. According to the media, the President has coined (and repeatedly employed) the phrase “Don’t do stupid s—” (DDSS) to describe his current approach to international policy. Spurred on by a particularly adversarial media climate, the Chief Executive’s willingness to describe his foreign policy outlook in such bleak and candid terms betrays a growing level of frustration and cynicism in the White House.
However, it would be incorrect to consider this an abrupt about-face in policy. Instead, it should be viewed as an organic transformation in the President’s approach. The current reticence is consistent with many of the President’s recent declarations about wielding American hard power internationally. Take, for example, Obama’s speech in September of 2013, when he explained to the UN General Assembly his desire to shift the United States “away from a perpetual war footing”. Included in the same remarks were appeals for increased levels of multilateral international involvement in Israel and Palestine, as well as a remarkably conciliatory overture to the Iranian administration calling for increased cooperation instead of submission, breaking with the tradition of demands issued by leaders in Washington.
These are marked departures from traditional foreign policy dogma and, contrary to his exceptionalist rhetoric, signal the President’s willingness to see the United States adopt a much more modest role in the international order. The humble character of DDSS doctrine is symptomatic of the difficulties inherent in this Administration’s attempt to reconcile the traditional American position as global arbiter, defender of freedom, and promoter of democracy with the groundswell of public opinion in favor of a more restrained role in global affairs. In many ways this is easily understandable when considering Obama as a President tasked with bridging the gap between generations in an atmosphere of unprecedented political polarization.
At the moment, both extremes of the domestic political spectrum (save for the Tea Party, as seen above) are pushing for dramatically reduced foreign involvement while “establishment” Democrats and Republicans continue to criticize the White House for its reticence in Iraq and Syria. The isolationist camp is primarily composed of small-government (“Independent”) conservatives, who largely view interventionism as something the United States can’t currently afford, and young progressive idealists who oppose intervention on anti-imperial moral grounds. Mainstream Republicans and a significant portion of the Democratic leadership, as we will discuss in a moment, still believe in the importance of America’s moral imperative and the maintenance of national security through preventative action.
It’s interesting that the Middle East, a traditional proving ground for imperial ambition, has functioned as a catalyst for the President’s new doctrine of restraint, increased multilateralism, and reliance on the developing world to establish its own security apparatuses. Pundits, however, like the National Journal’s Kaveh Waddell, were quick to point out how ill-suited the DDSS approach is to contemporary conflict. While Waddell bases his judgement (and the title of his piece: “Iraq Is a Terrible First Test for Obama’s New Foreign Policy”) on more situational tactical and military factors, such as the combat ineffectiveness of the Iraqi military, there are multiple reasons why Iraq is in fact a very appropriate “first test” for the application of the Obama Doctrine.
The current situation in Iraq and Syria is an almost perfect representation of the sort of ambiguous and volatile conflict zone that the United States is likely to face in the 21st century. The commitment of armed forces carries with it a huge political risk and virtually no assurance that the conflict won’t become a protracted affair. Unreliable regional actors are subject to sudden disappearance, and shifting alliances carry the risk of a sudden inversion of the tactical situation. Even a comprehensive tactical success would bring almost no tangible reward in terms of spoils, political capital, or any sort of goodwill, and would certainly not guarantee a cooperative future regime.
It is for precisely these reasons that Obama’s revised conception of America’s role, as defined by the DDSS doctrine, is a much more appropriate fit for the current situation than the cavalier moral crusades favored by the previous administration. It demonstrates the President’s willingness to confront the challenge of finding a happy medium between War-on-Terror inspired neo-imperial adventurism and the irresponsible and callous inaction that has allowed for events such as the Rwandan genocide of the 90s. The recent remarks made by the President during his weekly address on August 9 illustrate this policy-in-motion and prove that it is possible to reach a calculated plan of action that takes into account both America’s assumed moral imperative and its predilection for reckless military interventionism:
The United States cannot and should not intervene every time there’s a crisis in the world. But when there’s a situation like the one on this mountain—when countless innocent people are facing a massacre, and when we have the ability to help prevent it—the United States can’t just look away. That’s not who we are. We’re Americans. We act. We lead. And that’s what we’re going to do on that mountain. As one American who wrote to me yesterday said, “it is the right thing to do”.
While the Administration can be rightly criticized for not acting swiftly to prevent the escalation of the conflict in Syria or the spread of ISIL throughout the region, its hesitance is not a direct result of DDSS policy. The situation was and is extremely delicate. Backing Prime Minister Nuri al-Maliki may very well have been a poor decision, and the lack of support for moderate resistance groups in Syria almost certainly was. However, the lack of Executive action cannot be attributed to a policy that is, at its core, “both interventionist and internationalist”. If anything, the failure of the United States (and the West, more broadly) to act effectively and judiciously in Iraq is a failure to apply the principles of DDSS. On the whole, this new blend of international interventionism and cautious multilateralism being pioneered by the current administration is a sure step in the right direction for the United States.
While prudence and multilateralism are often far less politically appetizing to American audiences (or, at least, offer up no shortage of ammunition for one’s political opponents) in the short run, they inevitably become far more appealing in the longer term, and even more so when viewed in retrospect. Unfortunately, the partisan dynamics of the American political scene have completely disincentivized the pursuit of the rational yet unspectacular within the executive branch.
Now, this may simply be a superficial appraisal of the phrase in question, or an attempt to distance herself from an unpopular Democratic incumbent, but it’s equally likely that her comment was made in light of her opposition to a reduced American role in the international community. Clinton believes “that America needs a leader who believes that the country, despite its various missteps, is an indispensable force for good”, a conviction that likely reveals her membership in the Democratic party of a bygone era. Another example is Clinton’s staunch support for Israel and Prime Minister Benjamin Netanyahu, which runs contrary to the rapidly diminishing levels of unconditional support for Israel among young Americans. Nearly a decade and a half older than President Obama, who based his campaign heavily around his appeal to younger Americans, Hillary Clinton likely faces a stiff challenge in convincing young voters of the necessity of America’s role as global policeman (though persuading the general population may prove less difficult).
The response of the Obama administration and its allies to Clinton’s criticism was quite sharp, though it seems that no lasting damage was dealt. Still, it seems as though Clinton’s foreign policy, as seen by Millennial voters, leaves a lot to be desired and it’s no secret that the gulf between the opinions of American voters and establishment politicians is only widening. As the generation of young people who were raised during the expensive and unproductive wars in Iraq and Afghanistan become young voters, the number of ardent moral crusaders like Clinton will only diminish. While Democratic Party heavyweight centrist Hillary Clinton can certainly continue to promote staggeringly hawkish foreign policy, it is in her best interest to adopt a position that builds upon the trail blazed by President Obama towards a smarter and more nuanced future of American policy-making.
Coming right off the back of a similar ruling in Oregon, the recent federal District Court decision against Pennsylvania’s same-sex marriage ban made it the 19th state, alongside the District of Columbia, to allow gay marriage (or, depending on your tolerance for semantics, the 25th, if you include states that currently don’t disallow it). In striking down the ban, District Court Judge John E. Jones III, a George W. Bush appointee, stated emphatically that:
“We are a better people than what these laws represent, and it is time to discard them into the ash heap of history.”
The decision, made just a few days after the 10th anniversary of the first American gay marriage legislation, has brought with it several noteworthy milestones in the country’s stroll towards equality. For reference, an interactive map with a quick breakdown of the gay marriage situation in each state is available here. The Pennsylvania ruling has consolidated the northeast as the second American region (alongside the states of the Pacific coast) to boast full marriage equality. While progress has largely followed the familiar ‘two-steps-forward-one-step-backwards’ approach to progressive reform in the United States, this week’s decision marked a staggering 14th straight victory for advocates of equality.
Unsurprisingly, the figures also show overwhelming support for same-sex marriage among Millennial Americans, with the 18-29 age group nearly twice as likely to support it as those in the 65+ bracket (78% vs. 42%). It seems likely that the younger generation is poised to drag the country into a relatively progressive future through sheer electoral brute force.
Amusingly, Judge Jones’ decision made reference to Supreme Court Justice (and prominent conservative judicial activist) Antonin Scalia’s caustic dissent against the Court’s 5-4 ruling against the Defense of Marriage Act’s exclusionary definition of marriage in United States v. Windsor. Couched within his tirade against “same-sex marriage (or indeed same-sex sex)” is a prediction that the actions of the majority in striking down DOMA “arms well every challenger to a state law restricting marriage to its traditional definition”. Regardless of the tone and intent with which Scalia’s prognosis was produced, several publications across the political spectrum have noted just how prophetic it has turned out to be.
Indeed, in the summer of 2014 we have reached an interesting point where a District Court Judge appointed by George W. Bush is citing language written by a Supreme Court Justice appointed by Ronald Reagan in striking down popular state-level bans on gay marriage. He is in good company, with the vast majority of post-Windsor pushes for equality coming via judicial review attached to explicit references to the landmark case (see above).
Dr. Martin Luther King Jr. famously said that “the arc of the moral universe is long, but it bends towards justice”. In America that arc often proves itself to be frustratingly long, but it continues to bend nonetheless. Progress in establishing national marriage equality has, at times, been frustratingly lethargic, but the wave of recent District Court decisions is certainly cause for cautious optimism.
Update (11/08/14): The streak of unbroken pro-marriage equality decisions was finally ended by Roane County Circuit Judge Russell E. Simmons, Jr. in Tennessee. In his decision, Judge Simmons Jr. said that “neither the Federal Government nor another state should be allowed to dictate to Tennessee what has traditionally been a state’s responsibility”. Amusingly, the case of Borman vs. Pyles-Borman was, in fact, brought to the court in order to determine the state’s ability to provide the couple with a divorce (by recognizing the validity of their marriage which took place in Iowa). Thus, the ruling effectively forced the couple to stay married in Tennessee.
Coming of age in an era of acute instability, the American Millennial generation’s formative years have lacked the pervasive confidence that buttressed previous post-war cohorts and hastened the development of cultural pillars that engender generational success. The Silent Generation (1920s-early 1940s) encountered adulthood at the early peak of modern American power, with a sense of steadfast absolutism guiding the country to superpower status and introducing idyllic consumerism to the masses. Throughout their youth, the Baby Boomers (1940s-1960s) busied themselves with a clearly defined (if frequently ill-conceived) agenda of maintaining world order in the name of Western progress during a period of domestic affluence. Generation X (1960s-early 1980s) was perhaps the first to encounter any sort of overarching ambiguity, though the gentle decline of the US as the singular world power was offset by the collapse of the Soviet Union and continued domestic economic prosperity.
While American Millennials don’t lack generation-defining moments, those available are distinctly less inspirational than those of their parents and grandparents. Early Millennials have the misfortune of being old enough to remember the relative luxury of the 90s and to juxtapose it against more recent experiences that have created a narrative dominated by continued folly on both the international and domestic scale. Combined with two ruinous wars in the Middle East and an exceptionally belligerent War on Terror, the recent recession has left America’s economy and international standing severely diminished. Uncertainties over the country’s political destiny as well as anxiety over personal economic matters have given rise to remarkable levels of disdain, disappointment, resentment, and disaffection within the Millennial cohort.
Unsurprisingly then, identity is being increasingly defined in negative terms. Tepid anxiety has begun to replace irreverent confidence in the national identity. Young Americans are being reared in a culture that stresses an aversion to things that are seen as harmful or counterproductive, where mistakes are to be avoided at all cost. In the public sphere, groups and movements are prone to defining their missions from a platform of active resistance in lieu of deliberate constructivism. Curiously, this phenomenon, a result of decreasing opportunity and socioeconomic mobility, has coincided with increasing levels of political polarization. The popularization and banalization of fanatical opposition (often among Baby Boomers and Gen Xers frustrated with the contemporary reality) to the perceived enemy has collided with the jaded attitudes of Millennials to create an atmosphere of extreme apathy, where civic participation is perceived as synonymous with acquiescence to extremism.
Politically, this trend has produced a generation, as well as a status quo, that can paradoxically be defined as “viciously apathetic”. A 2013 Harvard Public Opinion Project poll, covered in a Harvard Political Review article entitled “Angry, Yet Apathetic: The Young American Voter”, found that while a majority of millennial voters (52% of Democrats and 51% of Republicans) would like to recall every member of the US Congress, only about half of those respondents had definite intentions to vote in the upcoming midterm elections.
Certainly, there is plenty of reason to be dissatisfied. The failure of President Obama’s administration to deliver on many of his campaign promises has put a highly-visible dent in the Democratic Party’s attempt to perpetuate the surge of interest and activity that came as a result of the 2008 campaign. With one of the least productive Congresses in history, young Americans have inherited a system of unimaginable dysfunction and intransigence. This has been compounded by the entrenchment of a quasi-oligarchic political order that has seen influence taken from the democratic masses and concentrated in the hands of the financial elite and now-ubiquitous “Super” PACs.
While conventional thinking dictates that soaring levels of discontent among Millennials would result in a proportionate increase in political participation, this is not borne out by the facts. The aforementioned Harvard poll revealed that 75% of participating 18-29 year olds didn’t describe themselves as “politically active”. If anything, this dissatisfaction has led to a sort of self-imposed restriction on participation. This overwhelming institutional distrust has driven half of Millennials to self-identify as politically independent (a 10% increase over Gen Xers and a staggering 18% more than the Silent Generation). While this hasn’t resulted in the creation of a viable political alternative or even a tangible effect on voting patterns, it is certainly an appropriate representation of the general attitudes at play.
The name of the game is objection. It has become, above all else, important to know what you don’t want. Emphasis is constantly being pulled away from the merits of compromise and the productive dialogue that is essential for the American government to function, and instead placed on the sensation of opposition. The logic appears simple. It is, without a doubt, difficult to imagine viable alternatives and work, slowly but steadily, towards effective reform. By contrast, it’s extremely easy to slam the opposition, invent controversy, and laugh at the lunatic fringes. This represents a critical roadblock to contemporary success and perhaps the ultimate pitfall of American-style democracy. Our first-past-the-post, take-it-or-leave-it two party system fails to reward the participation of all but the most monolithic of majorities and wealthiest of donors.
A brilliant Salon editorial by Matt Ashby and Brendan Carroll portrays the distinctly Millennial reliance on irony and apathy as a coping mechanism. By channeling the apropos musings of the late David Foster Wallace, the authors assert that “lazy cynicism has replaced thoughtful conviction as the mark of an educated worldview”. Indeed, American Millennials are a generation that, almost out of necessity, has embraced irony to an excruciating degree. Wallace places the origins of contemporary pessimism in the cultural backlash that followed the volatile 1960s, during which time an overarching “mood of irony and irreverence” took hold. While this initially fueled productive manifestations of popular outrage in the “global” 60s, it would eventually be co-opted by the pillars of mainstream culture by the 1990s. The late 90s gave birth to reality television, an addictive brand of entertainment that flatters viewers by raising them up above the level of the general(ly ignorant) public. Simply by tuning in, watchers could satiate the nagging desire to feel superior to their fellow citizens. Despite its rather flimsy appeal, reality television continues to be a programming staple.
In a similar vein, the new millennium has seen irony flourish on an excessive scale. The advent of hipsterdom (see: Normcore) and the kale-ification of gentrifying forces are the result of lazy and defensive cynicism that preempts failure and subverts risk. The attitude is evident in many strands of contemporary culture. It manifests itself equally in the diminutive reaction to the advent of Patriot Act-style of domestic authoritarianism as it does in the popularity of American Apparel. While it is easy to romanticize iconic movements of the past, it is impossible to ignore the stark differences between the anti-establishment movement of the late 1960s and the recent Tea Party and Occupy Wall Street movements, which (in a profoundly characteristic manner) managed to be simultaneously virulent and ineffectual.
An opportunity has arisen, however, to transcend the bounds of our dependence on crippling cynicism. As a generation, we have both the circumstances and the ability to use our unprecedented levels of diversity and education to harness the power of dissatisfaction in a productive manner. To do this, it is essential to embrace nuance and accept that failure is a necessary element of eventual success. An emphasis on discretion is the key to popularizing productive engagement while avoiding the pitfalls of forces in popular media that divide as they conquer. Polarization is good for business and keeps otherwise irrelevant brands alive, but it often halts progress in its tracks.
Millennials, as a generation, have the task of fostering an environment that doesn’t consider passion in advocacy and participation equivalent to extremism. History has demonstrated that the fruits of civic engagement are not effaced by the ease of recidivism. While it is true that the current system is affected by powerful anti-democratic forces, participating in the political system is in no way a tacit endorsement of them. Engagement is useful and can’t be considered synonymous with surrender or blind adherence.
Above all, we know that the cure to the ills of our political dysfunction will not be found in smug condemnation. Those who have seized control of the American political system win when sensationalism and division are allowed to succeed in encouraging young people to self-disenfranchise. While the Millennial addiction to ironic angst can be traced back to fairly benign roots, it’s actively detrimental to American democracy and needs to be addressed.
As the conflict in Crimea heads into yet another month of escalation, we have seen what began as a domestic conflict in Ukraine take a decidedly international turn. While Russian President Vladimir Putin’s decision to begin a genuine “boots on the ground” intervention in Crimea may fly in the face of some Western expectations, it is hardly unprecedented given the region’s turbulent history. The contemporary political delineation of the Crimean Peninsula and its valuable “warm water” ports has behind it a rich history marked by episodes of Russian imperialism and East vs. West confrontation. There is no shortage of convenient anecdotes that can be made to support even the most superficial of claims about the current state of the region’s politics. Here, we will take a look at two popular media contentions in particular: the claim that the Crimean takeover represents Russia’s desire for access to “warm water ports”, and the claim that the invasion marks the beginning of a Second Cold War.
Commentary from all sides of the developing conflict has shown little hesitancy in drawing from the historical record to defend claims about the nature of the situation and its future. Agenda-driven questions regarding the relevance of historical explanations for contemporary Russian belligerence have dominated explanations in popular media, and often represent a dangerous lack of interest in the regional history of Crimea. While a specific and detailed review of Crimean history certainly lies beyond the scope of this particular text, it is important not to let grandiose perceptions of the classic zero-sum “East vs. West” culture war monopolize our understanding of the current situation.
Per usual, the ambiguous nature of history (and its utility in policy formulation) is not being properly accounted for in the majority of popular media. Instead, it is readily ignored in favor of internet-friendly headlines and general sensationalism. As has been demonstrated ad nauseam in recent years, cherry-picking from the historical record to help mold public opinion is irresponsible and dangerous, though the advent of the internet has made it easier than ever. By examining three key moments in the Crimean past, we will see how the dominant tropes of historical explanation in popular media fall short of providing a comprehensive explanation for the current situation in Crimea.
Quite possibly the only other context in which the majority of the Western population has heard of Crimea, the Russian defeat in the Crimean War (1853–1856) set the tone for the future of both Crimean regional politics and Russian expansionism in the modern era. The Crimean War provided the grounds for the oft-repeated “warm water port” explanation for both the 19th century war and Russia’s current interest in Crimea. As we will see, this is not an entirely accurate conclusion: it fails to account for the region’s tempestuous political history, Russia’s unique imperial aspirations, and the present-day value and geopolitical significance of Crimean ports. Instead, a more sober historical perspective lends itself to a more complete understanding of the motivations behind the Crimean War and how it actually relates to Russia’s regional outlook in the 21st century.
Throughout the latter half of the 18th century, the Russian Empire busied itself formally conquering the region of Novorossiya (New Russia) along the Black Sea coastline to the north of the Crimean Peninsula. By the turn of the century, Russia had attained regional hegemony and began to refocus its expansionary efforts further south. Access to Mediterranean waters, made possible by control of the “warm water ports” (i.e., ports that remain unfrozen in the winter) of Crimea, ranked among the chief strategic ambitions of Russia’s imperial agenda.
A popular claim that has resurfaced in recent weeks holds that the Crimean War was brought about by the Russian Empire’s invasion of the peninsula (in a fashion similar to current events). However, by the war’s inception in the mid-19th century, Russian control over its Black Sea coastline was already a fait accompli with the consolidation of Novorossiya. Instead, the conflict can be viewed as a Western response to a rapidly and continually expanding Russian Empire (Crimean Peninsula included) that had begun, from the West’s perspective, to overstep its bounds. By continuing its southern trajectory and wresting control of the Bosporus Strait from the slowly but surely declining Ottoman Empire, the Russians would have a free hand in the Eastern Mediterranean. It is here that Russian interests began to fall in direct opposition to those of the West, namely Great Britain and France, who joined in an alliance with the Ottoman Empire and the Kingdom of Sardinia and fought to deter further expansion. While the immediate motivations behind Russia’s pursuit of maritime access to the Bosporus and Mediterranean would change, this “Great Game” explanation serves to highlight the enduring motivations behind the 19th century conflict.
Continued stable control of Sevastopol, the primary Crimean port, would allow the Russian Empire full access to the relatively welcoming waters of the Black Sea. This would allow its Navy to pursue further programs of expansion and development, challenging Western naval dominance. A powerful 19th century Russian Navy would certainly have had a profound effect on military diplomacy, allowing for power projection into the Mediterranean Sea as well as access to many valuable trade routes. While the anti-Russian alliance led by Great Britain, France, and the Ottoman Empire (see above) was also assembled partly on religious pretenses, the Western powers were most resistant to the idea of Russian expansion and the prospect of adding another powerful navy to the Mediterranean.
The 1856 Treaty of Paris marked the cessation of fighting and a costly victory for the quadripartite alliance. While the Russian Navy had been decimated and Sevastopol besieged and captured by the allied forces, the Russian Empire successfully cemented its control over the Crimean region. The treaty contained a provision under which the Western occupation of Sevastopol would end in exchange for Russian assurances that their navy would not operate in the southern regions of the Black Sea or, by extension, the Mediterranean.
Skipping ahead nearly a century to the early years of the Cold War, Soviet leader (and Ukrainian native) Nikita Khrushchev formally attached Crimea (which had existed since 1921 as an Autonomous Soviet Socialist Republic) to the Ukrainian Soviet Socialist Republic. While politically a Soviet Russian, Khrushchev spent a great deal of time in Ukraine and governed there as a representative of the war-time government of Joseph Stalin. The decision to effectively hand over Crimea to the Ukrainian state appears, at first glance, quite puzzling from a 21st century perspective, especially given the current state of affairs. Again, proper appreciation must be paid to the historical context in which Khrushchev’s decision was made. At the time, the gesture represented a sure way to shore up support from the Ukrainian state with little tangible cost or risk of political backfire.
From this, we can see that the Russian decision to transfer Crimea into Ukrainian hands was made under a completely different set of geopolitical assumptions. This highlights a common error in the casual use of historical anecdotes to support a modern political judgment. At the time, there was no indication that the Soviet Union would collapse in spectacular fashion or that Ukraine would grow into a sovereign state with the power to defy the wishes of Moscow. Thus, it is inappropriate to conclude from Khrushchev’s actions that Russia no longer valued control of the Crimean Peninsula or that it had any intention of abandoning its position there.
Despite the official Soviet-era amalgamation, the peninsula has always remained tenuously autonomous from the rest of Ukraine, a reflection of its turbulent political history and its distinct demographic makeup. Since the collapse of the Soviet Union, Crimea has operated as a semi-sovereign political entity within the new Ukrainian state. Accompanying the partition of the Soviet Navy, the Treaty on Friendship, Cooperation, and Partnership between Ukraine and the Russian Federation was inked in 1997; it helped to ease tensions between the two countries and provided Russia with a renewable 20-year lease on naval facilities in Sevastopol. The history of the agreement, however, has been less than perfectly stable, even with a staunchly pro-Russian administration in Kiev. Thus, with the recent Euromaidan deposition of pro-Russian President Viktor Yanukovych, the future of the Russian lease in Crimea was cast into doubt.
The agreement’s instability provided a platform from which Putin could challenge Ukrainian sovereignty, and it has facilitated the current incursion. With the advent of modern maritime technology, including the development of fully-fledged nuclear icebreakers, explaining the Kremlin’s current expansionist push with 19th century “warm water” reasoning becomes untenable. The geopolitical motivation, however, is not entirely without a contemporary equivalent. While the Russian Federation of the 21st century lacks a compelling motivation to secure Mediterranean trade routes, it has certainly found one in the desire for a more dependable base from which to project naval power into the Caucasus and Eastern Mediterranean regions.
Taking into account both the turbulent history of the Crimean lease agreement and the necessity of preserving Russian naval potential, the desire for further (re)consolidation becomes clear. One need look back no further than 2008 to find an episode of regional Russian military engagement, in the Georgian conflict. Equally, the near-escalation of the ongoing Syrian conflict into an international war serves to substantiate the centuries-old Russian desire to project power into the Mediterranean (albeit for a slightly different purpose). It is here that a comparison with the mid-19th century Crimean War becomes interesting. While the development of modern geopolitics and military technology long ago rendered the 19th century naval Great Power rivalry obsolete, the region is still as relevant as ever, and Crimea still serves as a focal point from which Russia can project military power into the Black Sea and Mediterranean.
While the current situation in Crimea certainly plays up Russia’s post-Soviet aspirations of regional hegemony, we must remain vigilant about how the historical record is employed to justify contemporary actions. In the West, especially the United States, knee-jerk reactions of Cold War hysteria have little practical utility in explaining the situation. While these sensationalized claims may seem academic and historically supported at first glance, they do little aside from inflating the paranoia of casual news consumers and buttressing the fear-based, belligerent political agenda of those on the far right. Unfortunately, this rhetoric has a wide generational appeal and is being employed by a more diverse group than usual. Recently, France’s UN envoy, Mr. Gérard Araud, produced this gem at a Security Council session:
I was 15 years old in August 1968, when the Soviet forces entered Czechoslovakia. We heard the same justifications, the same documents being flaunted and the same allegations. We hoped that, with the building of Europe and the collapse of communism, we would awaken from such nightmares. We had hoped that we would have replaced the dangerous logic of the balance of power with cooperation in respect for the identity and the independence of each.
In short, Russia is taking Europe back 40 years. It is all there: the practice and the Soviet rhetoric, the brutality and the propaganda.
Over the preceding weeks, plenty more Cold War comparisons have emerged from the woodwork. Ranging from casual observations to slightly less sensational contentions, these claims are often predicated on out-of-context or lightly analysed historical anecdotes and illustrate the dangers of the incidental use of the past. In the case of the Foreign Policy piece linked above, the article is qualified by a hefty set of caveats that negate much of its substance. Despite its punchy title, Welcome to Cold War II, the piece contains the following disclaimer:
This new conflict is unlikely to be as intense as the first Cold War; it may not last nearly as long; and — crucially — it will not be the defining conflict of our times.
If this is the case, then the burgeoning conflict really looks nothing like the Cold War, outside of the fact that it involves Russia, Europe, and the United States and will likely not result in nuclear armageddon. We can all agree that this sets the bar unreasonably low, and it fails to account for any of the qualities that made the Cold War such a unique conflict.
Fortunately, many voices of restraint have surfaced that employ historically-minded arguments in an appropriate and judicious manner. LSE Professor of Comparative Politics Jim Hughes provides a sensible look at how the current conflict has roots in the Yeltsin era and how adverse conditions in Europe and the United States have magnified the effects of the change in Russian leadership. A more hawkish and astute Putin, he argues, has the political latitude to institute a more assertive agenda.
A different tack is taken by Adam Gopnik, who tackles the relevance of the First World War and the Yugoslavian wars of the 1990s in addition to the aforementioned conflicts. He produces an eloquent debunking of various sensationalist claims about a “new Cold War”. In Crimea and the History of History, he explains:
Russia, as ugly, provocative, and deserving of condemnation as its acts may be, seems to be behaving as Russia has always behaved, even long before the Bolsheviks arrived…
The point of the Cold War, at least as it was explained by the Cold Warriors, was that it wasn’t a confrontation of great global powers but, rather, something more significant and essential: a struggle of values, waged on a global scale, between totalitarians and liberals.
There are indeed several valid threads of continuity that tie the geopolitical trends of previous conflicts, notably the Crimean and Cold Wars, to the current conflict. However, when looking backwards, it is possible to link these factors to contemporary developments without overextending ourselves historically. It is absolutely possible for some of the geopolitical overtones of the Crimean War of the 1850s to remain relevant without its being an absolute parallel. It is equally possible for Russia to act in a belligerent and imperial manner without starting a Second Cold War.
History is an immensely powerful tool that must be wielded responsibly. Its true utility can only be harnessed through discretion and adherence to responsible practices. Policy-makers often exhibit woefully underdeveloped understandings of history, and it must be repeatedly brought to light that a reliance on cherry-picked anecdotes and narrow, agenda-driven perspectives is incredibly dangerous. A fine line must be walked: we must retain a historically rigorous point of view without allowing knee-jerk reactions or judgments driven by poor methodology to dominate our understanding of current events.
While the majority of the world’s attention has been focused on the revolutionary tremors currently underway in Ukraine (or, if you watch cable news, breaking developments in the culinary world), violent protests in Venezuela have been raging. Demonstrations are taking place across the country, with protesters coming out in force on both sides of the leadership divide. Events kicked off on the 12th of February, Venezuela’s Día de la Juventud (National Youth Day), when an anti-administration group comprised primarily of students took to the streets in Caracas to protest against the current government of President Nicolas Maduro. Led in part by the now-jailed Leopoldo López, the group rallied around a wide-ranging platform of political reform that includes an end to government efforts to suppress public protests, the release of political prisoners, and the radical restructuring of the national economic system. Inspired by the government’s authoritarian response in prior weeks to protests in the Venezuelan states of Táchira and Merida, the demonstrators marched through Caracas while pro-government supporters rallied around the incumbent President (who later dismissed the dissenting protesters as part of a nation-wide “nazifascista outbreak” bent on government subversion). After the dust cleared, three deaths, numerous injuries, and dozens of arrests marked the conclusion of the first day of discord.
In order to properly contextualize the current conflict in Venezuela, it is necessary to look at a few different factors. The immediate motivations behind the protests can be best understood by examining the adverse conditions affecting the Venezuelan citizenry as well as the tone and context set by the country’s modern political past. This method of analysis generates insight into the actions of protesters and government officials alike, and offers an alternative historically-driven perspective, as opposed to one of raw politics. This is not to say that politics are irrelevant, as they are most definitely not. However, historical considerations are essential in properly scrutinizing revolutionary action, regardless of culture or end result.
While many are doubtlessly familiar with the divisive and provocative anti-Western rhetoric of the late revolutionary leader Hugo Chávez, it’s important to consider the country’s broader political legacies and the ways in which they have affected, and continue to affect, Venezuelan citizens and their political system. Chavez, and his successor Maduro, represent the most significant manifestation of Bolivarianism, a political philosophy named for the iconic South American anti-imperial military and political leader Simón Bolívar. Branded “chavismo” or “chavezism” by its opposition, this particular brand of Bolivarianism was ushered in with Hugo Chavez’s ascension to the Venezuelan presidency in 1999. An admirer of Bolívar and his struggle against Spanish domination, Chavez designed his Bolivarian Revolution around policies of nationalism, socialism, and the termination of Venezuelan reliance on the international neo-liberal economic system. For context, it is worthwhile to note that Cuban leader Fidel Castro ranked among the largest influences on Chavez’s leadership. After being released from captivity in the mid-1990s, Chavez visited Castro and the two quickly became close friends. Revolutionary Cuban trappings are evident throughout the Chavista platform, with the Venezuelan leader formulating the original slogan of his Bolivarian Revolution (“Motherland, socialism, or death”) from an amalgam of Castro’s motto of “Motherland or death” and Che Guevara’s “Socialism or Death”.
When the success of domestic anti-poverty, resource redistribution, and education programs is juxtaposed against large-scale economic mismanagement, dictatorial absolutism, and a reputation for counterproductive international contrarianism, the Chavista policies fostered by Chavez and perpetuated by Maduro are shown to have, at best, a lukewarm record. The late President’s curious brand of populism appealed most heavily to the urban and rural poor in lieu of the traditional revolutionary mobilization of the working class. Though it may be unconventional, Chavez’s authoritarian brand of revolutionary socialism was nothing if not effective at keeping him at the helm of Venezuelan politics. The regime managed to survive a US-backed coup in April of 2002, a general labor strike later that year, and a recall election in August of 2004.
The Bolivarian commitment to opposing perceived Western global hegemony won Chavez many regional supporters. Unlike the majority of the world’s developed countries, which considered Chavez a chiefly antagonistic force, the Union of South American Nations (UNASUL or UNASUR) acted quickly to endorse the results of the April 2013 election that followed the President’s death. The former vice president, Nicolas Maduro, campaigned heavily on a platform of continuity that played up his image as Chavez’s hand-picked successor. The support of neighboring governments was essential in buttressing the legitimacy of Maduro’s victory after the election results were called into question by several members of the international community.
Since the death of President Chavez, the Venezuelan Bolívar has experienced wild inflation and multiple devaluations, a reflection of the faltering economy. The country currently suffers from rampant “currency distortions” due to conditions that economists have characterized as “macroeconomic imbalances”. These include a popular black market for currency exchange that reflects a discouraging reality in comparison to the optimistic exchange rates set by the government. While the official exchange rate is somewhere around 6.3 Bolivars per American dollar (USD), the latest government auction of foreign currency revealed that the USD was selling for 11.36 Bolivars. Underground markets, which are fairly ubiquitous in Venezuela, are significantly tougher on the Bolívar, with dedicated exchange rate monitoring sites showing rates as discouraging as 87 Bolivars per dollar. The tangible ramifications of this situation have penetrated well beyond the nation’s financial institutions and into the lives of its citizens. While the government has been successful in significantly reducing the percentage of Venezuelans suffering from hunger and malnutrition, the continued scarcity of common commodities and manufactured goods (most famously, toilet paper) continues to disrupt the lives of citizens.
Further compounding these alimentary difficulties is the country’s continuing struggle with violence. While the government declines to release its internally gathered numbers, the Venezuelan Observatory on Violence, an NGO, has compiled a report on the increasing rates of violence. The Observatory estimates that 24,000 murders took place in 2013, which represents a “14% rise” on 2012 totals. Additionally, the report contends that approximately 90% of all homicides go unsolved. The issue of endemic violence plays a very significant role in the popularization of the most recent iteration of anti-government demonstrations. Most recently, anti-government protesters have rallied around the death of a 22 year-old university student on February 18th. Genesis Carmona, a Miss Tourism winner in her native Carabobo, was shot in the head and killed during a clash between rival groups participating in one of the demonstrations. To date, at least 13 Venezuelans have lost their lives in the continuing upheaval, which has done nothing to diminish the authoritarian character of Maduro’s rule.
The blame for Venezuela’s current social and political woes would, at first glance, seem to fall squarely on the political mismanagement of President Chavez (and, by extension, President Maduro). However, the trends and attitudes that dominate the country’s turbulent political history reveal a more nuanced reality. While the questionable decision-making of Chavista politicians certainly has played a role in the perpetuation of a volatile status quo, the traditional “politics of exclusion” that exist in Venezuela provide valid historical grounds from which to explain the current conflict. In explaining this trend’s effect on the turbulent early years of the Chavez presidency, Professor Julia Buxton argues in the Bulletin of Latin American Research that, in many ways, the Bolivarian regime actually resembles the previous government of the Punto Fijo Pact (a coalitional consolidation of Venezuela’s three major mid-century political parties) that was displaced by Chavez and his revolutionary cadre in the late 1990s. She explains:
Rather than undermining an established democracy, Chavismo was characterised by continuity with the illiberal Punto Fijo state rather than change… Both relied on the politicisation of the state to maintain authority and both were hegemonic projects, which denied the voice of opponents on the basis that this was contrary to the national interest. Crucial to the development of this tendency in both regimes was the initial fear of revanchist actions by supporters of the preceding regime.
Buxton astutely points out that until zero-sum attitudes no longer characterize the political understandings and agendas of both the incumbent and opposition parties, “the institutional crisis cannot be approached and consensual institutions cannot be crafted”. By employing a perspective that emphasizes a bit of a “longer” durée, the current protests and upheaval can be traced back to perpetuated political oppositionalism and protracted party vs. party antagonism.
In relation to the protests, the relative inaction of Washington is anything but a problem. It does not demonstrate weakness, nor does it imply tacit approval of the Chavista project. The government’s dubious record on domestic reform, proclivity for bombastic rhetoric, and willingness to embark on campaigns of reckless domestic repression do more harm to the current government’s credibility than any American effort could ever hope to. There are plenty of genuine reasons for the American government to speak out against the Venezuelan administration and plenty of opportunities for it to do so through appropriate channels. Should the United States insist, however, on an inappropriately enthusiastic campaign of overt or covert support for the Bolivarian government’s opposition, the following outcomes are likely: the current conflict will be exacerbated and the death toll will continue to rise, the Chavista regime’s anti-Western rhetoric will be strengthened and substantiated, and the American government will face embarrassment on the international scene. As history has demonstrated so many times in the past, ham-fisted over-extension by the White House results in abject folly when a preferable outcome could have been brought about by the smallest amount of restraint.
As we’ve seen, it is important to consider the current situation in Venezuela in a historical context in addition to a purely political one. When analyzing the broader significance of recent events, it is essential to consider the plethora of historical and political influences, of which only a few are discussed here, that have combined to generate such a volatile atmosphere. The present turmoil in Venezuela has significant grounding in a longer process of political exclusion that began with the Punto Fijo coalition and was co-opted by the Chavista government. Furthermore, the government’s eccentric response to domestic dissent and foreign media coverage is explained by the prevailing political wisdom of Chavez and his administration. It is likely that the current crisis can only be successfully and permanently defused by efforts of reconciliation and compromise that de-emphasize the zero-sum conceptions dominating approaches to the Venezuelan political status quo. Finally, it is the responsibility of the international community to foster an atmosphere conducive to peaceful and prudent rapprochement while resisting the urge to embark on outdated and belligerent interventionism.
One of the primary talking points to surface in the wake of Chris Christie’s ambition-dashing scandal, ‘Bridgegate’, is the importance of ‘aisle-crossing’ moderates in American politics. It’s no big secret that the partisan gulf remains one of the most problematic elements of democracy in the US. The gap, which continues to widen as mainstream neo-conservatism has gained ground, is reflected in record-low public confidence levels. Recently, a Washington Post and University of Virginia study found that 69% of respondents felt that the largest threat to the continued existence of the American Dream is the lack of cooperation in Washington.
The current spotlight cast on Governor Christie has had the side-effect of bringing American centrism to the forefront. Considered widely to be a Republican front-runner for the 2016 Presidential election, Christie’s success (or lack thereof) in the aftermath of the scandal will likely have symbolic repercussions on the credibility of moderates and the bipartisan project for years to come. While the government shutdown of late 2013 represented a significant setback to those who remain committed to productive and conciliatory politics, there are still those who believe in crossing the partisan divide. Notorious aisle-crosser Susan Collins, a Republican senator from Maine, was joined by five other GOP Senators in voting to allow the progression of a bill that would extend benefits from the federal government to Americans experiencing long-term unemployment. It’s absolutely crucial that efforts like these do not go unnoticed.
Using the long-term unemployment bill as a case study, we see that the majority of the Republican leadership stands in stark opposition to a compromise. Firebrand Florida Junior Senator Marco Rubio continues to deny that income inequality plays a significant role in the country’s ongoing struggle with underemployment in the ailing economy, and remains adamant about finding an alternative to (as opposed to improving) the White House-sponsored plan. After an erratic (if not downright manic) 2013, during which he played a prominent role in perpetuating the government shutdown, presidential hopeful Rubio kicked off the new year with a speech about poverty in America that slammed the unemployment bill and Democratic strategy. In his speech, the Tea Party superstar outlined a series of policies designed to implement a distinctly Republican-flavored agenda of austerity-based reform. While the Democratic caucus is certainly not rushing to the negotiating table, the proposals of Rubio and his caucus are clearly designed with a priority placed on defeating the White House’s ambitions, not on creating a plan to rectify the country’s embarrassing levels of poverty and unemployment through traditional means (i.e., bipartisan compromise).
While Jonathan Cohn of The New Republic responded in a surprisingly tame manner to Rubio’s remarks (‘Rubio Talks Poverty, Says Things That Are Not Totally Crazy‘), he could not ignore the realities. While he praised the Senator’s initiative, he also made sure to remain honest about the usefulness of the policies:
Republicans these days tend to ignore poverty altogether or to blame it on the poor themselves. Rubio, the Senator from Florida with well-known presidential aspirations, took a different approach on Wednesday. He talked about the persistence of poverty as a crisis. And he made some policy proposals along the way.
That doesn’t mean they were good proposals.
It is plain to see that partisan ideologues have claimed a disproportionate amount of public attention in recent months. Culminating with the government shutdown, the far-right’s efforts to derail the democratic process and hijack centrist discourse have been met with surprisingly little resistance. However, a recently released Gallup study has diverted a fair amount of media buzz away from the now-normal narrative of partisan clashes. The new report, constructed from an aggregation of ‘13 separate… multiple-day polls’, shows that a ‘record high’ (42%) number of Americans now self-identify as political independents. Naturally, this spawned a litany of sensational headlines from the blogosphere decrying the death of party politics in the United States. Bloggers like Care2’s Kevin Matthews have predicted trends in America’s heavily-‘Independent’ future, such as a ‘shift away from Conservativism’ and a ‘dip of the Two Party System’, predictions that not only fly in the face of established logic, but also ignore an overwhelming amount of evidence pointing to the continued polarization and entrenchment of both parties at opposite ends of the spectrum.
The reality, however unfortunate, is that the shift identified by Gallup is likely far more superficial than such headlines suggest. The public’s retreat from major party affiliation is, unfortunately, not the sign of an ideological revolution. Rather, it should be viewed as a symptom of a reflexive response, developed by the public at large, against any close association with the extremist antics so common at the fringe. The American electorate is not rejecting the two major political parties, nor is it demanding the establishment of a viable third party. Instead, Americans are simply refusing to be publicly associated with political outliers. The extremists at the outside edges of acceptable party ideology have been publicly rejected and privately revered, much as they always have been since the country’s inception. If anything, citizens are simply becoming less willing to out themselves as party-line voters, even if they continue to behave as such.
Why, then, do 40% of the American population self-identify as ‘Independent’ while behaving to the contrary? What is the aversion to declaring party allegiance? The answer is, as usual, disappointingly simple.
The reality, as betrayed by all of the available evidence, is that an overwhelming majority of American ‘Independents’ adhere just as strongly to major party ideologies, agendas, and candidates as ever before. It is merely the labels that have changed. While the number of those who claim to be politically Independent is certainly on the rise, tangible results have been few and far between. Voters have elected only two ‘Independent’ Congressmen. Both are Senators from New England (Bernie Sanders of Vermont and Angus King of Maine), and both swiftly joined the Democratic caucus after being elected. There are currently no registered Independents occupying seats in the House of Representatives. Likewise, there has not been a US President without major party affiliation since President Andrew Johnson’s failed National Union coalition push in the 1860s. Despite the recent surge in visibility of movements like Libertarianism, veritable Independents who actually vote ‘Independently’ compose a very small portion of the electorate. Poll data from the Gallup survey show that 47% of Americans either affiliate with or lean towards the Democratic Party, with 41% enjoying a similar relationship with the Republicans. The ‘40% claim’ needs to be read with this in mind.
The most apparent motivation for such duplicitous behavior is the prevalence of ‘illusory superiority’ among voters. That is, the aversion developed by the American layman to personal association with the individuals (and thus, the parties) that perpetuate the folly of partisan gridlock. The excessive media sensationalism heaped onto contemporary airwaves has only served to exacerbate this behavior. In an op-ed entitled ‘Why people call themselves “independent” even when they aren’t’, Political Science professors Yanna Krupnikov and Samara Klar (of Northwestern University and the University of Arizona, respectively) expound on this idea. Pointing to the social unacceptability of support for the political status quo of staunch partisanship, the authors conclude that:
This perception of partisans leads ordinary people to be embarrassed about admitting – including to pollsters – that they identify with a political party. Instead, people have come to believe that they will make a better impression if they say they are independent.
Indeed, Krupnikov and Klar conclude that the recent increase in ‘Independent’ (non) party identification has produced little change in the voting public’s ‘actual political views’. Writing in Politico, Poli-Sci professor Alan Abramowitz of Emory University provides further support for the phenomenon of what he has labelled ‘closet partisan[ship]’. Despite the fact that more and more voters are eschewing party labels, Abramowitz points out that ‘almost three-fourths of independents surveyed by Gallup during 2013 indicated that they leaned toward one of the two major parties’. He believes that despite the shift towards superficial nonpartisanship, Americans are in fact becoming increasingly divided on party lines. Abramowitz points to data from the 2012 American National Election Study to illustrate his argument. Not only did the report show that over 85% of ‘Independent Democrats’ (87%) and ‘Independent Republicans’ (86%) voted for their party’s candidate, but ‘Independent Democrats’ were more likely to vote a straight Democratic ticket than those who reported a weak affiliation with the party. These results are entirely in line with the general decline of ‘split-ticket’ voting patterns, yet another factor contributing to the growing chasm between progressives and conservatives.
It would be irresponsible to highlight the increase in ‘Independents’ without simultaneously acknowledging the drop in self-identified Republicans. This is an area where the aforementioned desire to publicly distance oneself from the fringes is especially apparent. The contemporary Libertarian agenda, for example, shares a large number of core values with what is now considered to be the Republican ‘old school’. If anything, the Millennial predilection for Libertarian affiliation should be viewed as a successful rebranding effort, not an ideological shift. The two movements share many elements of their political platforms, including: the virtues of ‘bootstraps style’ self-determination, regressive taxation, international isolationism, strong national defense, and an unwavering belief in American exceptionalism.
As seen in the graphic above, the Gallup poll shows a sharp rise in Independent affiliation in the fourth quarter of 2013, a period that contained both the government shutdown and rollout of the Affordable Care Act. This seems to support the notion that the increase in ‘Independent’ self-labeling is, in large part, a knee-jerk reaction by voters who are capricious enough to be significantly affected by day-to-day developments in Washington. Extravagant amounts of media sensationalism and spin on topics such as the Obamacare rollout, Benghazi mission attack, and NSA domestic spying scandal, have produced equally high levels of distrust in the machinations of the government. ‘Crooks and liars’ rhetoric, a disturbingly popular excuse for non-participation in America, is as pervasive as ever. Vast swathes of the population have adopted an aggressively apathetic tone and have found a comfortable, if temporary, home under the cover of ‘Independent’ self-identification.
By playing off of the young conservative voter’s fear of being associated with the socially regressive neo-conservative movement of past generations, the Libertarian movement has successfully co-opted many would-be Republicans into its own ranks. This is certainly not limited to the younger crowd, though. A significant number of Baby Boomer-era (former) Republicans who hold traditionally conservative values are jumping ship as well. As the Neoconservative Right continues to abandon a reasonable conservative platform in favor of one that plays to the extreme periphery of the party, moderate Conservatives will continue to abandon their cause. The neo-conservative commitment to radical policies (such as incessant climate change denial, refusal to recognize marriage equality, aversion to ‘common sense’ gun control legislation, removal of the social welfare safety net, commitment to continued corporatocratic influence, and the nonsensical perpetuation of the War on Drugs) is another factor driving a significant number of Americans away from major party affiliation. While this may not account for the majority of the trend towards ‘Independence’, it is plain to see that the demographics of the ‘newly Independent’ and ‘formerly Republican’ have heavily overlapped in recent years.
While the authors of the Gallup report believe that the increasing level of Independence ‘adds a greater level of unpredictability to this year’s congressional midterm elections’, there is little actual evidence to support this. If anything, it is likely that the discontent felt by the vast majority of self-identified ‘Independents’ will result in higher levels of voter abstention rather than a grandiose wave of political coat turning. That is to say, this phenomenon is first and foremost a manifestation of the pervasive desire to ideologically disassociate from the embarrassing political establishment rather than any positive ideological shift. At the end of the day, the increase in ‘Independent’ identification among American voters has much more to do with falling levels of confidence in the dysfunctional establishment than it does with any real shift in political allegiance or beliefs. As long as the intransigence of Washington lawmakers is continually glorified and the efforts of the far-right to destabilize and discredit the political process are tolerated, the number of Americans who are too embarrassed to publicly identify with a major party, especially the GOP, will continue to rise.
Over the past few days, our nation has chalked up two more attacks to the ever-increasing tally of gun violence. Puzzling and shocking as they are, we really should be accustomed to it by now.
From the head-scratching public suicide of the “Mall Shooter” in New Jersey to the back-room poker game shooting in Detroit that the police chief labeled “urban terrorism”, these recent episodes have reinforced the fact that there has been no reprieve since the tragic shooting at Sandy Hook Elementary School in December of 2012. American mass-shootings have persisted at the rate of nearly two per month over the last few years. Despite overwhelming public support (to the tune of ~90%) for gun control measures such as federally mandated background checks, the government’s failure to adopt even the most bare-bones legislation has come to represent one of the Obama administration’s largest second-term failings. The President’s futile expenditure of political capital in the wake of the Sandy Hook tragedy has cost him, and by extension the rest of the nation, dearly in terms of deterring future gun violence. Even beyond the failure of newly-introduced legislation, lawmakers have again failed to renew the Clinton-era Federal Assault Weapons Ban, a failure which reflects a tacit endorsement of private assault-weapon ownership.
Two internationally recognizable facets of Americana, lax gun legislation and the deification of the Second Amendment, remain puzzling propositions to those not raised in or around communities in the United States, where gun ownership is a point of civic pride. Best represented by Charlton Heston’s “cold dead hands” declaration, it is undeniable that gun ownership has become, to many, an inextricable element of American identity. The commitment to private ownership, and often public display, of firearms represents an essential component of contemporary notions of American exceptionalism.
In addition to our unwavering commitment to a global military presence and a “free market” approach to commodified healthcare, the belief in widespread and unregulated firearm ownership is another domain in which America continues to tread against the global status quo. While gun control opponents readily dismiss comparisons to European statistics as “apples to oranges” given the entrenchment and historic political significance of American gun ownership, legislative measures have produced overwhelmingly positive results in countries with a similarly inflated reverence for firearms. Australia, for example, began an ambitious gun buyback program in 1997 that led to a significant reduction in firearm-related deaths. A follow-up study conducted by Andrew Leigh of Australian National University and Christine Neill of Wilfrid Laurier University concluded that in the decade that followed the National Firearms Agreement’s implementation, statistics “show drops of 65% and 59%” (in firearm-related suicides and homicides, respectively) without a significant change in non-firearm related incidents.
The success of the Australian legislation led former Australian Prime Minister (and notable George W. Bush ally) John Howard to pen an op-ed entitled Brothers in Arms, Yes, but the US Needs to Get Rid of Its Guns. Citing the “huge cultural divide” between the two nations on the issue of gun control, Howard believes that the choice of the US to shun pragmatism and the safety of its citizens in favor of radical exceptionalism and revolution-era legislation has been disastrous.
The fact that, in the words of Howard, “[The Second Amendment] bears no relationship at all to the circumstances of everyday life in America today” is largely unimportant to many prominent American proponents of gun ownership. It is deeply ironic that many of the anti-gun control activists point to the constitutional amendment as a “historical” argument for unregulated firearm ownership while simultaneously ignoring all of the cultural progress that has occurred over the past 220-odd years. Also frequently neglected is the first half of the Amendment (emphasis is author’s own):
A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed.
Reading the full text calls into question the blatant disregard of gun advocates in organizing any semblance of a “well regulated” civilian fighting force within which citizens would use their firearms, as mandated by the text. Why the Amendment’s second clause is allowed to exercise absolute supremacy over the preceding phrase in contemporary discourse remains unclear. What is clear, however, is that the American aversion to gun control has much less to do with historical fidelity or a commitment to individual rights and much more to do with the paranoia-fuelled “culture of violence” that exists in America. It remains plainly obvious that gun ownership concerns are not truly connected to fears of a tyrannical federal government, as proponents often like to suggest. If this were the case, special interest groups like the National Rifle Association would probably be more concerned with reining in military spending and the fact that the USA fields the most powerful military that the world has ever seen.
In this nation of immigrants, public distrust of the federal government has been a practically endemic feature of history in the United States. Owing much to the emphasis placed on local and community-level civic engagement, Americans, especially those in the more sparsely populated central and southern states, remain actively hostile towards “top-down” legislative reform on the federal level. When combined with the pervasiveness of America’s exceptionalist tendencies, anti-reform sentiment is widespread. While other countries have embraced “common sense” revisions to national-level legislation on firearms, the United States remains intransigent. While 17 of the world’s most prominent nations included provisions for the right to personal firearm ownership in 1875, today the number has fallen to three. The failure to adopt reasonable firearm legislation is, unfortunately, just another manifestation of the United States’ widespread refusal to adapt itself to the contemporary era.
Gun control opponents most often point to the fact that violent crime in the United States is falling as evidence that gun-control concerns are unfounded. While it is true that the 24-hour international news cycle, among other things, has sensationalized events and distorted public perceptions of the pervasiveness of violence, rates of gun violence in the USA remain astronomical when placed alongside those of comparable countries. A recently completed study that was rushed to publication in the wake of the Sandy Hook shootings has worked to debunk the common American mythology that the proliferation of gun ownership deters gun crime. Published in the American Journal of Medicine, the study turned up two especially interesting (though hardly surprising) conclusions.
First off, the report showed a strong correlation between the number of guns per capita and the number of firearm-related deaths in a given country. The United States, unsurprisingly, finished at the top end of the spectrum (representing high firearm ownership and mortality rates) with Japan on the other. Second, the most significant outlier in the data is represented by South Africa. Despite having significantly fewer guns per capita than the United States, the Republic of South Africa experiences a similar number of firearm-related deaths. This is noteworthy as South Africa is among the countries that boast a significant “culture of violence” that rivals that which exists in America. While the historical circumstances are certainly not the same in the two aforementioned countries, the findings certainly reinforce the notion that, in addition to the raw availability of firearms, there are more complex and nuanced cultural factors that drive gun-violence rates.
A common argument points to deficient mental health provisions as a primary factor in American gun violence. A recent Gallup study found that more Americans fault the mental health system than the rampant availability of firearms for causing mass shootings. It is important to note that, had the proposed Spring 2013 gun legislation passed through Congress successfully, the Navy Yard Shooter would not have been able to legally purchase the firearm that he used in his now-infamous spree. While the feeble mental health infrastructure and poor availability of public programs in America almost certainly exacerbate already abhorrent levels of gun violence, placing blame on them does little to negate the glaring need to address faulty and obsolete laws regarding firearms. Improvements absolutely need to be made, but they alone will not sufficiently prevent future incidents of gun-violence.
No other advanced nation endures this kind of violence.
Here in America, the murder rate is three times what it is in other developed nations. The murder rate with guns is 10 times what it is with other developed nations. And there’s nothing inevitable about it.
It comes about because of decisions we make or fail to make, and it falls upon us to make it different.
The ongoing government shutdown and Congressional stalemate over the government’s ability to fund its programs has highlighted an essential element of the contemporary American Zeitgeist. Financial backing for the Affordable Care Act, which was passed into law in March 2010, has run aground while facing opposition from a very vocal Republican minority within the House of Representatives. The anti-healthcare contingent has demonstrated that it has no qualms about doing wide-ranging damage to the government of the United States or the American citizenry in order to divert national attention to its agenda. Despite the fact that Obamacare remains a fait accompli, the far-right remains convinced that by obstructing the continued operation of the federal government, it will achieve its goals. Welcome, everyone, to the post-democratic era of American politics.
The shutdown has come as the manifestation of an increasingly stagnant legislature that has produced record levels of dissatisfaction among constituents. Aggregated across multiple polling efforts, Congressional approval ratings are currently peaking just a fraction above 10%. As discontent with Congressional intransigence continues to swell, especially among disaffected Millennials, the age-old American myth of unadulterated self-reliance has been given a new lease on life. An excellent example of this, if you’ll remember, was the Romney presidential campaign’s attempt to decontextualize the ‘If you have a business: You didn’t build that’ sound bite from a July 2012 speech by the President. While it remains blatantly obvious to any casual observer (or, in fact, anyone who bothers to read the line in context) that the President was not suggesting that an omnipotent central government was responsible for the success of small-business entrepreneurs in America, that did not stop the GOP from pushing its agenda of finger-pointing and birther-esque slander. Indeed, an appallingly cynical conception of America’s working poor and the willingness of Tea Party politicians to regurgitate an abundance of bald-faced lies have combined with the enduring American tradition of governmental distrust to foster a disconcerting base of support for those committed to the anti-government cause.
The nation’s very commitment to “exceptionalism” at the expense of popular welfare has produced, in its latest manifestation, a detrimental legacy of disregard for the marginalised sectors of society. This tradition of neglect forms an oft-ignored subtext that underscores the increasingly prominent return of rhetoric that fetishizes notions of ‘rugged individualism’ and a disdain for the working poor. Despite the disastrous results of President Herbert Hoover’s trust in the virtues of self-reliance to guide the country through the burgeoning Great Depression and the absurdity of Reagan’s ‘by your bootstraps’ convictions (we’re still waiting for that wealth to trickle down…), high-profile Republicans and Libertarians continue to deliver lines that would make Ayn Rand beam with pride.
Once responsible for fostering the immigrant-friendly ‘melting pot’ culture that attracted the world’s greatest scientific and academic minds, America’s fascination with individualism and self-determination forms an integral part of the national spirit (not to mention the second passage of the Declaration of Independence). Today’s ‘Bootstrap revival’ efforts, however, have perverted the egalitarian and anti-bourgeois aspirations apparent in the spirit of America’s inception. In an ironic move that has pitted the ‘populists’ against the population, shutdown-era radicals of the far-right equate future lower- and middle-class prosperity with the eradication of government assistance to those very same groups. We have to look no further than the mid-September bill pushed through the House of Representatives by the GOP majority. The proposed legislation features deep cuts to federal programs that provide assistance (namely ‘food stamps’, which increasingly come in the form of debit-style electronic cards) to those who otherwise cannot afford to eat.
The Nutrition Reform and Work Opportunity Act, which features over $40 billion in cuts over the next 10 years to the government’s Supplemental Nutrition Assistance Plan (SNAP), was labelled ‘one of the most heartless bills I have ever seen‘ by Democratic Representative James McGovern from Massachusetts. The Republicans, for their part, have an entirely different perspective. House Speaker (and shutdown celebrity) John Boehner claimed that the bill would make ‘getting Americans back to work a priority again for our nation’s welfare programs’. This sentiment, as well as the bill’s lofty title, would lead you to believe that the bill contains some sort of pro-labor provision that would work to help soften the blow of slashed government benefits to the poor. This, however, is far from the case.
When viewing the Act’s contents, any illusions of GOP sympathy for impoverished Americans are quickly dispelled. Boehner’s description of the bill’s utility as a tool for expediting the unemployed masses’ return to work is wildly disingenuous. In place of any remotely proactive initiatives exists a series of draconian measures that highlight the elimination of ‘state performance bonuses’, ‘increas[ed] oversight of SNAP programs for the homeless, elderly, and disabled’, and the consent of the federal government for states to ‘conduct drug testing on SNAP applicants as a condition for receiving benefits’. And there you have it. Today’s Republicans care little about reinvigorating the working-class foundation of the domestic economy, and much more about preventing the President’s health care bill from coming into effect.
The third item of the aforementioned list has featured heavily in recent conservative agendas. The push to mandate drug testing for SNAP recipients does little to discourage the perpetuation of the caustic and bigoted ‘welfare queen’ mythology. It has become increasingly clear that the modern libertarian equates poverty with sloth and unemployment with apathy. The drug testing initiative, in addition to being ethically and morally objectionable, has been shown to make little economic sense. Florida conservatives were temporarily successful in enacting a new law that resulted in a four-month period of testing in 2012. In a deeply ironic twist, the examinations produced a failure rate of just 2.8%, which resulted in a cost to the state of $118,140. The program, which cost the state more than the expense of the potential benefits to the 2.8% of drug-using welfare recipients, was deemed likely to have been a ‘constitutional infringement’ by a Federal District Court, which discontinued the testing via temporary injunction.
The unsurprising results of the Floridian experiment have done little to deter the right wing’s push to further marginalize the American lower classes. Regardless of the matter at hand, be it food stamps or healthcare, it is clear that the anti-government contingent of the GOP will stop at nothing to see the income disparity widen and the downtrodden fall increasingly underfoot. The most recent manifestation of this desire, the government shutdown, has only pushed their pursuit further into the international spotlight. It is becoming increasingly apparent that the far-right is far less concerned with implementing alternative routes towards American prosperity than with obstructing ideologically undesirable legislation and attempting to annihilate the reputation of Democratic presidents.
The unflattering rightward shift of the political spectrum in the United States has coincided with a growing disparity between the privileged few and the disenchanted masses, with the lower echelons of society inheriting the lion’s share of the resulting burden. The contingent of anti-welfare extremist Republicans in the House of Representatives, referred to as the ‘Anarchy Gang’ by Senator Elizabeth Warren, and their constituents have achieved an overwhelming level of success. Former President Jimmy Carter recently remarked that:
The disparity between rich people and poor people in America has increased dramatically since when we started… The middle class has become more like poor people than they were 30 years ago.
Adding insult to injury, the push to disenfranchise (see: the recent fight over voter registration laws) and marginalize the American masses is exacerbated by a declining education system. The de-funding of public schools fits neatly into the far-right’s program to comprehensively privatize American life. It also, not-so-coincidentally, functions to inhibit the upward mobility of citizens and abolishes any prospect of ‘bootstrap’-style salvation. A recent report by the Organization for Economic Cooperation and Development has quantified the decline of American academic prowess. A BBC article on the report remarked that the United States represented ‘an education superpower of a previous generation’, where younger generations are increasingly less educated than their parents. This downward spiral has had tangible effects beyond the general ignorance of the population, with the share of ‘highest-skilled’ professionals in the US falling from 42% to 28%.
The reality, however, has remained almost entirely irrelevant to the far right. Plummeting levels of education and unprecedented levels of poverty do not represent any significant impediment to the GOP’s pursuit of an unabashedly ideologically-driven agenda. If anything, the downward trend in education enables the radical right’s pursuit of all things anti-science and anti-modern. Equally, it simply does not matter that their ‘small government’ rhetoric runs completely contrary to drug testing for welfare recipients that costs the state exorbitant sums. The fact that Obamacare is based largely on conservative designs and represents a significant step forward for the American population is equally irrelevant. The commitment to antagonism at the expense of reason has spawned claims about the Affordable Care Act that cross a line drawn far beyond absurdity and extend well into the realm of nauseating obscenity. The legislation, which functions to expand healthcare provisions to significant swathes of previously-uninsured Americans, has been labelled ‘a law as destructive to personal and individual liberty as the Fugitive Slave Act of 1850‘ by Republican Representative Bill O’Brien. Tea Party leader and architect of the ‘not-quite-filibuster’ Senator Ted Cruz has led the charge among the minority of Republicans committed to a protracted shuttering of the government. In his opinion, the President and Senate Majority Leader Harry Reid are responsible for grinding the government to a halt over their unwillingness to ‘compromise’ on Obamacare. The reality, in stark contrast to the Tea Party Senator’s remarks, is that the Patient Protection and Affordable Care Act passed through both houses of Congress, was upheld by the Supreme Court, and was reaffirmed by the election of Obama (over Mitt Romney, whose promise to ‘repeal Obamacare’ formed the basis of his entire presidential campaign) to a second term.
While the shutdown has come as a shock to many, American and non-American alike, it is really far from surprising when viewed from a wider perspective. The current situation is a simple product of cause and effect. There can be no reasonable expectation of responsibility when ideologues are voted into government; it is not an event that comes without repercussion. It seems especially silly to expect the smooth operation of Congress from Representatives who campaign on a platform of anti-government values. When politicians are more committed to the partisan pursuit of destroying the legacy of an incumbent President than they are to providing for the well-being of their fellow countrymen, it becomes absurd to expect a positive result.
It is dangerous, though, to think of the Congressional deadlock as the problem, and not a symptom. The shutdown (and possible upcoming default) has come as a direct consequence of the mainstream acceptance of Tea Party politicians and the dangerous extremes that they represent. Most recently, an individual appeared at the anti-Obama protests in front of the White House accompanied by a Confederate flag. This wildly inappropriate gesture in many ways embodies the senselessness and misguided nature of the government shutdown, as well as contemporary American politics at large. Despite the impassioned cries of protesters, galvanized by an appearance of Tea Party celebrities Sarah Palin and Senator Ted Cruz, the truth emerged undeterred. In today’s political arena, reality has taken a back seat to reactionary fervor. Fear and moralizing partisanship have overtaken the practical considerations of governing, and politics has been reduced to a game of who can behave in the most petulant manner. The reckless brinkmanship is well represented in the recent remarks of President Obama, who has continually affirmed that he ‘will not negotiate’ over things like ‘the full faith and credit of the United States’ or ‘whether or not America keeps its word and meets its obligations’. However, it remains quite difficult not to mentally substitute the latter half of that phrase with its more conventional conclusion.
In response to the flag-bearer’s breach of decorum, Atlantic editor Ta-Nehisi Coates remarked that ‘If a patriot can stand in front of the White House brandishing the Confederate flag, then the word ‘patriot’ has no meaning’. In addition to its immediate significance, the sentiment is especially poignant when considering the contemporary distortion of traditional GOP priorities. Gone are the days when practical economic considerations drove policy within the Republican Party. While it’s very likely that Congress will conjure up a last-minute compromise to avoid a cataclysmic breach of the debt ceiling, it will not be because any minds were changed. No compromises will be struck, because today’s Conservatives are uninterested in doing so. Pragmatism, like bipartisanship, is a relic of the old GOP. The new Republican Party is willing to be defined by a small minority of Tea Party extremists who are, by and large, more concerned with portraying the President as a litany of increasingly laughable evils than they are with improving the country, or, as the previous weeks have demonstrated, even allowing it to function.
After reading the shameless attention-grab that was Tim Stanley’s latest Telegraph blog post (Obama and Syria: Britain has helped Obama rediscover the Constitution. No need to thank us, America), I realised that it was not, in fact, the ‘Anarcho-Catholic’ and ‘temperamentally conservative’ author’s attempts at being clever (‘Obama referred to America as a constitutional democracy. It’s a republic, sir, a republic. What grades did he get at college I wonder?‘) that made the largest impression on me. Instead, it was the fact that he considered Obama’s decision to seek Congressional approval for a military intervention to have been a ‘remarkable performance.’ The notion that the President’s decision to seek out the proverbial ‘green light’ from Congress is at all controversial is deeply worrying. While the school of thought that considers unconstitutional every post-Second World War American armed conflict (John Nichols writes that ‘no president since Roosevelt has respected the Constitution sufficiently to seek a formal declaration of war.’) fails to account for the insufficiency of archaic international institutions, political assumptions, and legal norms, it certainly seems more lucid than the alternative. As House Republican Peter King sees it, Obama is ‘undermining the authority of future presidents’ by not acting unilaterally in lobbing cruise missiles into the Syrian conflict. ‘The president doesn’t need 535 Members of Congress to enforce his own redline‘, he argues. But at what point was the Commander-in-Chief given the power to draw these red lines in the first place?
Jack Goldsmith, Professor at Harvard Law School, expert on international law, and former Assistant Attorney General, published a blog post entitled ‘Why Doesn’t President Obama Seek Congressional Approval for Syria?‘ a few days ago, when a Presidentially-sanctioned unilateral strike seemed all but imminent. After running through a litany of potential justifications for such a broad conception of presidential power (e.g. ‘military action is being rushed’, ‘formal congressional approval is not a priority’, etc.), he concludes that exactly none ‘are good reasons from a constitutional perspective, and in light of the costs of unilateralism’. While the first half of that statement is rather self-evident, the latter half deserves more than a passing acknowledgement; we will return to the notion of unilateralism in concluding this piece. Constitutional law scholar Garrett Epps seems to concur with his Harvard colleague, going as far as to title his first article in The Atlantic on the topic ‘The Authority to ‘Declare War’: A Power Barack Obama Does Not Have‘. In a twist that only becomes ironic after reading the commentary of Tim Stanley, Epps points out that while the Prime Minister of the United Kingdom reserves the right (backed by ‘Royal Prerogative’) to send Britain into war without Parliamentary consent, the American President surely does not. He goes on to explain why historic precedents of Presidentially-sanctioned intervention (most notably: Korea) do not readily apply to the current situation.
As any strike would inherently lack the pretence of defensive or emergency action, Epps judges that: ‘This is precisely the kind of situation for which the Framers of our Constitution designed its division of authority between President and Congress.’ He conjures up a very appropriate quote from South Carolina Governor and Framer of the United States Constitution John Rutledge, who argued against a President with exceedingly broad powers, as recorded in the minutes of the Federal Convention of 1787:
‘[Rutledge] said he was for vesting the Executive power in a single person, tho’ he was not for giving him the power of war and peace.’
If that is not applicable to the debate over the Framers’ intentions, I do not know what is. Epps went on to publish a second article, Yes, Congress Can Authorize War Without Formally ‘Declaring’ It, which refutes the notion that a Congressional declaration is an ‘all or nothing’ affair meant only for national mobilisation toward Total War. He agrees with Alexander Hamilton’s judgement that the ‘powers of war and peace’ should be viewed as ‘a concurrent authority’ shared between the President and Congress, and he levels his sights at those who share the views of John Nichols, pointing out that:
‘If every “undeclared” conflict is a violation of the Constitution, we need retroactive impeachment of Adams, Jefferson, Monroe, Eisenhower, Johnson, Reagan, and both Bushes.’
Finally, and most importantly, Epps emphasises the fact that international law is ‘very much a part of the constitution’ (see, for example, the Supreme Court’s recognition of international treaties in relation to the Supremacy Clause [This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land…]), and, as such, war cannot be used as a mechanism of foreign policy. In any case, I do not find persuasive those arguments that place a higher importance on defining the words ‘declaration’ and ‘war’ than on the notion that the Constitution has given the United States Congress the responsibility (emergency situations notwithstanding) of deciding when military action is appropriate and when it is not. Even a cursory glance at the eighth section of the very first article will give the impression that Congress was meant to authorise the use of what has become the world’s most powerful military.

I believe that unilateralism, in its various manifestations, is the most fundamental flaw in contemporary American foreign policy. A lack of regard for the opinions of the rest of the planet is the culprit behind a staggering proportion of the world’s (freely substitute: “intellectuals'”, “Europe’s”, “the United Nations'”, etc.) problems with the last remaining superpower. It is rare to encounter individuals who harbour a genuine aversion to the concept of humanitarian intervention, though it is equally rare to find someone who believes that the United States has done a stellar job of spearheading such efforts. Similarly, the question should not, as Kerry and company have indicated, be centred on whether or not President Obama reserves the right or privilege of commanding the military into action.
Instead, I propose, it should be about whether doing so would be such a good idea after all.