Posted by: seanmichaelbutler | March 4, 2010

THE NEOLIBERAL REVOLUTION

For 25 years following the end of the Second World War, the global economy experienced an unprecedented period of sustained growth. In the industrialized world, millions of people joined the ranks of the middle class, and wealth inequality sank to historic lows. After decades of strife, labour and capital reached a relative ceasefire, and a mixed economy of governmental macroeconomic guidance combined with private microeconomic initiative emerged. Capital was able to make healthy profits, while much of the rising productivity of labour was passed on in the form of higher wages. Governments made full employment a priority, and increasingly accepted the responsibility of providing for the poor and disadvantaged. By the late 1960s, governments were seriously considering implementing a basic income (also known as a guaranteed annual income), and many policymakers thought that our biggest problem in another 20 years would be what to do with all our free time once the work week had been significantly reduced. This exuberant economic attitude was arguably reflected in the radical social experimentation and revolution that emanated from universities now accessible to the majority, and in the various movements for liberty and social justice erupting worldwide. For many, there was one man to thank for all this social and economic optimism: the British political economist John Maynard Keynes, who had emerged from the academic wilderness in the 1930s to play a leading role in the design of the post-war economy at Bretton Woods, and whose focus on the counter-cyclical stimulus of aggregate demand became the lynchpin of governmental economic policy in subsequent decades. “There was a broad body of optimism…that the 1950s and 1960s were the product of Keynesian economic engineering. Indeed, there was no reason why the prosperity of the international economy should not continue as long as appropriate Keynesian policies were pursued…” In 1971, even the conservative US president Richard Nixon would famously proclaim, “We are all Keynesians now.” The triumph of Keynesianism seemed complete.
Yet shortly after Nixon uttered these words, it all fell apart. That same year, Nixon ended the era of dollar-to-gold convertibility, a move that many see as the beginning of the end for the great post-war compromise between capital and labour. Three years later, in the face of the first oil embargo and other pressures, the economy nose-dived into the worst recession since the Great Depression, never to rebound to earlier levels. Worse still, the theoretical underpinnings of Keynesianism were called into question by the simultaneous appearance of high inflation and high unemployment – a new phenomenon dubbed “stagflation”. While Keynesianism floundered for an explanation, new theories stepped into the breach; monetarism and supply-side economics were the two most popular. Though these new theories had distinctive approaches, both shared the belief that big government – namely Keynesianism – was the problem, and that the solution to stagflation was to restrict government intervention in the economy to a strict inflation-fighting monetary policy (in the case of monetarism) or to cut taxes to stimulate private investment (in the case of the supply-siders). This move away from government intervention and the welfare state, and towards more emphasis on an unfettered market, can be summed up by the term “neoliberalism”. As the 1970s ran their course, neoliberalism gradually took over from Keynesianism as the reigning economic orthodoxy, to be consummated in the Anglo-Saxon world by the elections of Margaret Thatcher in the UK in 1979, Ronald Reagan in the US in 1980, and Brian Mulroney in Canada in 1984.
The story told by the victors of this ideological battle – the neoliberals – is that Keynesianism, despite its apparent success for 25 years, was in the end responsible for the constellation of economic crises that descended on the industrialized countries during the 1970s, and that neoliberalism was the remedy. The shift from Keynesianism to neoliberalism was, according to this story, the only rational option in the face of stagflation; as Thatcher crisply remarked at the time, “There is no alternative.”
I will call into question this story, by first examining the causes of the 1970s economic malaise, and then looking at what interests were behind the promotion of neoliberalism as a solution, how it gained political power, and how it was disseminated around the world. I will fashion an alternate narrative, one in which Keynesianism was not to blame for stagflation, in which the economic crises of the 1970s put the compromise between capital and labour under severe strain and ultimately broke it, in which the capitalist class went on the offensive partly because it feared for its very survival, and in which this class achieved its ends by forming an alliance with social conservatives equally fearful in the face of the 1960s counter-cultural revolution. The protagonist of this story will be the United States; as the capitalist world’s superpower, it was largely responsible for the crisis of the 1970s, it suffered the worst from it, and it led the way down the new path of neoliberalism.

THE FALL OF KEYNESIANISM…
The economist Milton Friedman was one of the principal fathers of neoliberalism, and his indictment of Keynesianism is of special relevance, for it is emblematic of the neoliberal attempt – ultimately quite successful – to pin the blame for chronic recession squarely on Keynesian shoulders. Briefly, Friedman theorized that there was a so-called “natural” rate of unemployment, which persisted in the long term despite governmental attempts to stimulate demand through spending. Running a budget deficit to pump money into the economy might bring down the unemployment rate in the short term, he thought, but in the long run it would only create inflation, while unemployment would inevitably return to its natural rate – now higher because of the inflation. He essentially argued that fiscal policy was useless – even damaging – and that if governments wanted to bring down the natural rate of unemployment, they should focus on keeping inflation low through monetary policy, while loosening restrictions on markets so that, for instance, wage levels could find their equilibrium point. This explanation for the stagflation encountered in the 1970s proved quite convincing to many searching for answers to the predicament, as well as enormously appealing to those who had always wished for a return to unfettered markets, and played a key role in justifying the switch from Keynesianism to neoliberalism, in its guise of monetarism.
How realistic is this account? Certainly, deficit financing played an important role in the soaring inflation of the 1970s, but was this solely the result of spending on social programs, such as under President Lyndon Johnson’s Great Society initiative, or were there other causes of deficit spending? The Vietnam War, combined with Johnson’s unwillingness to raise taxes in the face of rising war expenditures, caused the US Federal Reserve to print large amounts of new dollars. Military spending is often seen as the most inflationary form of government spending, because it puts new money into the economy without a corresponding increase in output. The US had some leeway to get away with this rapid increase in the money supply, since the dollar was the international reserve currency, but there was a limit to this, and the explosive inflation of the 1970s was the result.
It must be noted that the US proved a dismal failure in its short-lived role as manager of the world’s monetary system. At Bretton Woods, it had been entrusted with the task of maintaining a sound monetary system, through the gold exchange standard, just as Britain had previously. Britain, being a trading nation, had had a strong interest in maintaining a sound international monetary system, and had been effective (some would say too effective) at maintaining it. The United States, on the other hand, traded much less, and consequently took its responsibilities much less seriously. It is easy to speculate about the justification made by US officials as they printed irresponsible amounts of money to pay for their war in Vietnam: they surely saw themselves as defending the free world against the tyranny of communism, a cause for which a little monetary instability, shouldered by the “free world” in general, was a small price to pay.
The first cracks in the system started to show during the series of currency crises that struck in the late 1960s. By the end of the decade, the dollars held outside the US were worth eight times as much as the US had in gold reserves. In 1971, fearing a run on US gold, Nixon ended the gold exchange standard rather than saving the system by devaluing the dollar. The US had abused its power of seigniorage (as monarchs before had), but wouldn’t escape without paying a price.
The result was more inflation, as the dollar, now cut loose from the Bretton Woods standard of $35 per ounce of gold, shed its inflated value. The lower dollar also raised the cost of imports to the US consumer, further fueling domestic inflation. (The end of dollar convertibility also brought with it more far-reaching consequences. The fixed exchange rates of the 1950s and 60s were incompatible with free flows of capital. Yet taking the dollar off gold led directly to floating exchange rates, which in turn paved the way for freer flows of capital between countries. This development would later aid greatly in the furtherance of the neoliberal agenda.)
As if these developments were not inflationary enough, the Yom Kippur War of October 1973 led OPEC to restrict oil exports to Israel’s allies, quadrupling oil prices virtually overnight. Yet this was inflation of a different nature from the kind that had been building up in the 1960s; rather than being linked to excess demand and an overheated economy, it was driven by increases in costs on the supply side and brought with it recessionary pressures. An increase in the price of oil – a commodity fundamental to so much of the economy – is “similar to the imposition of a substantial sales tax. The price of the product goes up and consumers have less income available to spend on other goods and services. The result is a bout of inflation, at least temporarily, and sluggish economic expansion if not recession.” This goes a long way towards explaining the supposedly impossible coincidence of high inflation with high unemployment.
Yet there were other factors that also contributed to the so-called “misery index” (inflation rate plus unemployment rate). The most basic of these was that governments tried repeatedly to beat inflation by attacking perceived excess demand through restrictive monetary and fiscal policies; when Nixon tried this strategy in 1970, it resulted in recession. His successor, Gerald Ford, tried the same approach in 1974 – despite the fact that inflation at that point was not being driven by excess demand, but by high costs on the supply side (namely oil). Thus, poor governmental reaction to inflation caused recession and rising unemployment, while failing to master inflation itself.
Another factor contributing to the slowdown of growth in the US economy was the end of the privileged position it enjoyed as the only power to emerge from the Second World War relatively unscathed. As Germany and Japan laboured to reconstruct their war-ravaged economies, the US faced little competition. Yet by the end of the 1960s, the old Axis powers, now recast as capitalist democracies and rebuilt into economic powerhouses, were flexing their economic muscles again. This, combined with increasing competition from newly industrialized countries in East Asia and from other developing countries, cut into the robust economic growth the US had enjoyed for two decades previously.
To sum up, inflation caused first by the Vietnam War and later by the oil embargo (itself the result of war in the Mideast), increasing international competition to US business, and the shock of the collapse of the Bretton Woods framework were the major factors that combined to create the “perfect storm” known as stagflation:

…the stage was set for the deepest recession since the 1930s. The long period of post-war expansion had at last come to an end; America and world capitalism entered a new phase of turbulence which, amongst other things, threw economic policy and economics as a theory into a state of flux.   
… AND THE RISE OF NEOLIBERALISM
In the previous section, I outlined the confluence of factors that led to the crisis of stagflation in the 1970s. In this section, I will describe the reaction to this crisis – the how and why of neoliberalism’s triumph as the new economic orthodoxy.
Different authors point to different moments when the balance decisively shifted from Keynesianism to neoliberalism – some place the tipping point as early as the latter half of the 1960s, others as late as the ascendancy of Thatcher and Reagan – but the midway year of 1974 seems as good as any. It was in this year that Gerald Ford came to the White House with the slogan, “Whip Inflation Now” (WIN), declaring that inflation was public enemy number one and that reduction in government spending was the chief means to that end. It was also in this year that inflation peaked (at 11% – although it would later be surpassed by a second peak of 13.5% in 1980), and that the “perfect storm” that had been building for years, catalyzed by the energy crisis, finally unleashed its full fury on the economy. In declaring war on inflation, Ford broke with the Keynesian bias of giving precedence to full employment; whereas before inflation had been a tool to control unemployment, now unemployment was to be used as a tool to control inflation:

The choice seemed to be stark: accept some inflation as the price of expansion and adapt business and accounting practices accordingly, or pursue a firm deflationary policy even if that meant accepting a higher level of unemployment than had been customary since the Second World War. 

In choosing the latter, Ford shattered the fragile compromise between labour and capital and, favouring capital, took America on its first real steps towards neoliberalism.
Yet, as the crisis gathered steam in the early 1970s, it was by no means clear which way the winds would blow. It was well remembered that the last major economic crisis, in the 1930s, had resulted in the socialist policies of the New Deal, and indeed in the 1970s labour again called for more governmental intervention as the solution to the crisis. Capital, meanwhile, as it suffered from reduced profits due to increased competition abroad and recession at home, also saw the crisis as both an opportunity to advance its interests and a threat to them from an increasingly militant labour. “The upper classes had to move decisively if they were to protect themselves from political and economic annihilation.” The ceasefire between labour and capital had held when times were good, but as soon as conditions started to sour, both sides went on the offensive. It was to be one or the other.
Sensing both the opportunity and the threat presented by the crisis, the capitalist class put aside its differences and united against the common enemy of labour. The 1970s marked the beginning of the right-wing think tank, with corporate dollars founding such now well-known beacons of neoliberal thought as the Heritage Foundation, the Hoover Institution, and the American Enterprise Institute. Lobbying efforts, through such umbrella organizations as the American Chamber of Commerce, the National Association of Manufacturers, and the Business Roundtable (a group of CEOs founded in 1972), were massively ramped up; business schools at Stanford and Harvard, established through corporate benefaction, “…became centres of neoliberal orthodoxy from the very moment they opened”; and “the supposedly ‘progressive’ campaign finance laws of 1971 [that] in effect legalized the financial corruption of politics,” were followed by a series of Supreme Court decisions that established the right of corporations to make unlimited donations to political parties. “During the 1970s, the political wing of the nation’s corporate sector staged one of the most remarkable campaigns in the pursuit of power in recent history.”
The ideology adopted by capital during this remarkable drive to win the minds of the political leadership “…had long been lurking in the wings of public policy.” It emanated largely from the writings of the Austrian economist Friedrich von Hayek, around whom a collection of admirers (including Milton Friedman) called the Mont Pelerin Society had formed in 1947. This group’s ideas became known as neoliberalism because of their adherence to the neoclassical economics of such late-19th-century figures as Alfred Marshall, William Stanley Jevons, and Leon Walras. Hayek had argued presciently that it might take a generation to win the battle of ideas; by the time he won the Nobel Prize for economics in 1974, followed by Friedman two years later, victory was indeed close at hand.
Why did capital “…[pluck] from the shadows of relative obscurity [this] particular doctrine that went under the name of ‘neoliberalism’…”? Was it to save the world from the ravages of Keynesian stagnation and to free people from the heavy hand of bloated government? This was certainly part of the rhetoric used to sell neoliberalism to the public, but one need only look at who benefited from neoliberalism to get a strong sense of whose interests it really served. It was eventually quite successful in lowering inflation rates, and moderately successful in lowering unemployment, but failed to revive economic growth to pre-1970s levels; meanwhile, it resulted in levels of wealth inequality not seen since the 1920s in the US, stagnating real wages, and a decreased quality of life for those reliant on government services. Alan Budd, Thatcher’s economic advisor, was candid about the real motives behind the neoliberal rhetoric when he said, “The 1980s policies of attacking inflation by squeezing the economy and public spending were a cover to bash the workers.” Neoliberalism was capital’s way of disciplining labour through unemployment, creating what Marx called an “industrial reserve army” that would break unions and drag wages down. Reagan facing down the air traffic controllers’ union, PATCO, during a bitter strike in 1981, paralleled across the Atlantic by Thatcher’s similarly tough stance against the National Union of Mineworkers during its year-long strike in 1984-85, was emblematic of the new hostile approach to labour reintroduced to state policy by neoliberalism. In short, neoliberalism was driven by class interests; it was the vehicle best suited “…to restor[ing] the power of economic elites.” The true point of neoliberalism is revealed by the fact that whenever the dictates of neoliberal theory conflicted with the interests of the capitalist class, such as when it came to running massive budgetary deficits to pay for military spending during peacetime, neoliberalism was discarded in favour of the interests of capital.
Before neoliberalism came to roost in the White House, however, there were several experiments conducted in the periphery. It is revealing to note that the first nationwide imposition of neoliberalism occurred under conditions of tyranny: Augusto Pinochet’s Chile; it is likewise fitting that neoliberalism drove from Chile its antithesis, the communism of Salvador Allende, and that it was imposed through a US-backed coup. After the coup in 1973, Chile became a field school for graduates from the economics department of the University of Chicago, where disciples of Milton Friedman, who taught there, had formed their own monetarist/neoliberal school of thought. These economists attempted to remake the Chilean economy into the ideal neoliberal state (in the same way that US neoliberals are currently attempting in Iraq), a transformation that likely would not have been possible without the Chilean military ensuring a compliant labour force. Despite lackluster economic results (particularly after the 1982 debt crisis in Latin America), Chile served as a model to neoliberals who wanted the rich countries to follow the same path.
There was another coup, of sorts – less known and less violent – that occurred in New York City in 1975. In that year, the city went bankrupt, and the subsequent bailout came with strict conditions attached, including budgetary rules and other institutional restructuring. “This amounted to a coup by the financial institutions against the democratically elected government of New York City, and it was every bit as effective as the military coup that had occurred in Chile.”  It was “an early, perhaps decisive battle in a new war,” the purpose of which was “to show others that what is happening to New York could and in some cases would happen to them.”  “The management of the New York fiscal crisis pioneered the way for neoliberal practices both domestically under Reagan and internationally through the IMF in the 1980s.”
While coups, either military or financial, were possible against developing countries and municipalities, neoliberalism would have to gain dominance in the US federal government through slightly more democratic means. As noted earlier, the intense drive to power through lobbying, think tanks, and academia convinced many in the elite of the virtues of neoliberalism, but ultimately this ideology would have to sway masses of people to actually vote in favour of it. In order to secure the broad base of support necessary to win elections, neoliberals formed an alliance in the 1970s with the religious right (a move that has forever since confused the terms “liberal” and “conservative”). Though this significant segment of the American population had previously been largely apolitical, the counter-cultural revolution of the late 1960s and early 1970s provoked many of these “neoconservatives” to enter the political arena to oppose the perceived moral corruption of American society – a movement that came to fruition with preacher Jerry Falwell’s so-called “moral majority” in 1978. While neoliberals and neoconservatives may seem like strange bedfellows, the coalition was likely facilitated by religious fundamentalists’ relative indifference towards the material, economic world; according to their extremist Christian worldview, their material interests in this world would be well worth sacrificing to secure the spiritual interests of their nation in the next world. Furthermore, both religious and economic fundamentalists must have found a comforting familiarity in each other’s simplistic extremism (the “invisible hand” of the neoliberals’ free market is eerily similar to the Christians’ God in its omnipotence, omnipresence, and inscrutability).
The Republican Party gathered under its banner these religious reactionaries, as well as those non-religious (largely white, heterosexual, male, and working-class) who simply feared the growing liberation of blacks, gays, and women, and who felt threatened by affirmative action, the emerging welfare state, and the Soviet Union.  “Not for the first time, nor, it is to be feared, for the last time in history had a social group been persuaded to vote against its material, economic, and class interests for cultural, nationalist, and religious reasons.”  It was this alliance of social fear and economic opportunism that swept arch-neoliberal Ronald Reagan to the White House in 1980 – “…a turning point in post-war American economic and social history.”  After a decade-long campaign, the neoliberals had come to Washington. 
Of course, the crusade to reshape society along neoliberal ideals was far from won; Reagan faced a Democratic Congress, and was often forced to govern more pragmatically than ideologically when his supply-side policies failed. As Margaret Thatcher said, “Economics are the method, but the object is to change the soul,”  and it takes time to change people’s souls.
There was also still a whole world to convert to the gospel of market liberalization. The crisis of stagflation that had opened the door to neoliberal ideas in the US had also created financial incentives for the dissemination of neoliberalism to other countries. With the impact of the first oil crisis flooding New York investment banks with petrodollars, and a depressed economy at home offering fewer places to spend them, the banks poured the money into developing countries. This created pressure on the US government to pry open new markets for investment, as well as to protect the growing investments overseas – helping to bring US-bred neoliberalism to foreign shores. 
Yet these pressures were only a taste of what was to come; after the Iranian revolution in 1979 caused oil prices to suddenly double, inflation in the US returned with a vengeance. This in turn led the US Federal Reserve, under its new neoliberal-minded chairman Paul Volcker, to drastically raise interest rates. This “Volcker shock”, resulting in nominal interest rates close to 20% by 1981, coming on the heels of the profligate lending of petrodollars during the 1970s, played a major part in the debt crisis that descended on the developing world during the 1980s.  As countries defaulted on their debts, they were driven into the arms of the International Monetary Fund (IMF), which, after what economist Joseph Stiglitz described as a “purge” of Keynesians in 1982, became a center “…for the propagation and enforcement of ‘free market fundamentalism’ and neoliberal orthodoxy.”  Mexico, after its debt default of 1982-84, became one of the first countries to submit to neoliberal reforms in exchange for debt rescheduling,  thus “…beginning the long era of structural adjustment.” 
Many of the IMF economists who designed these Structural Adjustment Programs (SAPs), as well as those who staffed the World Bank and the finance departments of many developing countries, were trained at the top US research universities, which by 1990 were dominated by neoliberal ideas – providing yet another avenue by which neoliberalism spread from the US to other parts of the world.  By the mid-1990s, the process of neoliberal market liberalization (under the supervision of the World Trade Organization (WTO)) came to be known as the “Washington Consensus”, in recognition of the origins of this ideological revolution.

THE REVOLUTION CONTINUES
Some authors have called neoliberalism the antithesis of Keynesianism, yet its real opposite is communism; Keynesianism represented a compromise between the two – a middle way. Yet this fragile balance did not survive the economic crucible of the 1970s. Neoliberalism’s strategic political alliance with neoconservatism can be seen as a natural reaction to the rapid changes that had unfolded during the 1950s and 60s in both the US economy (with the growth of the welfare state) and society (with the rise of the counter-cultural revolution); at the same time, it can also be seen as an opportunist power grab by the capitalist class during a period of uncertainty about the foundations of the old order. The fear of communism – captured succinctly in the title of Hayek’s famous work, The Road to Serfdom – drove neoliberals to the opposite extreme: the belief in the superiority of the unfettered marketplace as the guiding principle of human civilization. Neoliberalism, therefore, represents an extremist ideology that, if carried through to its end, will likely end up being as destructive to the societies it touches as extremist socialism was to the former Soviet bloc.
Although the neoliberal revolution is still winning many political battles, such as the growing attack on Medicare in Canada or on Social Security in the United States, evidence of an emerging counter-movement (such as the poorly named “anti-globalization movement” – anti-neoliberalization would be more apt) is growing. As Karl Polanyi described in his classic, The Great Transformation, the industrialization and economic liberalization of the 19th Century provoked a reaction from society, which demanded more governmental intervention to protect people and communities from the destructive effects of unfettered markets. It is highly likely that we are now witnessing the first stages of a similar reaction to the latest round of rapid technological change and market liberalization. Hopefully, this reaction will lead to a society that better balances capitalism’s creative destruction with the needs of humans and their communities for continuity and security.

Copyright Sean Butler 2006

Written for an Intro to Political Economy class at Carleton University in 2006

Posted by: seanmichaelbutler | March 4, 2010

TANGLEROOT SUMMER

What is a weed, and what is not? If the notorious dandelion is now a value-holding commodity at the grocery store, can we confidently label anything with the pejorative “weed” anymore? The dividing line between bad and good once seemed so clear, so absolute; now, all is hazy and relativistic. It seems the best definition we can hope for is limited to a private one: something you don’t want growing in your garden.

To Janet, co-owner of Tangleroot Gardens and our boss during our organic farm apprenticeship, there is no philosophical grey area when it comes to weeds. As soon as the spring sogginess leaves the topsoil, we attack the garden beds with shovels, rakes, forks, and hands, casting crabgrass and creeping Charlie from the garden like vengeful gods.

Soon, this small patch of Nova Scotian soil is host to an international gathering: tomatoes and corn originally domesticated in Mesoamerica, onions and peas from the Middle East, cucumbers and eggplants from India, and sunflowers from our own North American backyard. It’s chlorophyll-green proof that globalization is not just a recent phenomenon – it’s only speeded up of late.

From a garden not much bigger than a hockey rink, Janet and David satisfy most of their vegetable needs. What isn’t eaten fresh in the summer is preserved for the winter months: beans and herbs dried, onions and garlic tucked into a dark closet, potatoes secreted into the root cellar, tomatoes sealed into sterile jars, squash – well, squash just left lying around in corners – and everything else that can take it mashed into overflowing freezers. Their diet is filled out by meat, eggs, and milk (made into yogurt and cheese) from the animals — who also contribute generous quantities of manure, ideal for fertilizing next year’s crop. Some foods, of course, are either too much trouble to grow, harvest, and process (such as wheat or cooking oil) or unsuited to northern climes (like bananas or avocados). For these essential ingredients and exotic treats, there’s always the Superstore down in the valley, or an organic food co-op willing to ship bulk orders to their door.

All this food purchased with their sweat — but not their wallets — takes a bite out of their expenses, but they still need jobs to pay the other bills. David is a self-employed tax accountant and Janet is the editor of a farming magazine and a freelance writer. From a purely economic point of view, their decision to produce much of their own food is completely irrational. Both have marketable skills that could be traded for money to buy food in much less time than it takes them to grow it themselves. According to my very rough calculations, when the cost of animal feed is factored in, their farm labour earns them about two dollars worth of produce for every hour of work. Obviously, there’s more to their decision than the bottom line.

David’s path leads back to the sixties, when he hopped aboard the back-to-the-land movement and dove into the ideals of self-sufficiency: he fixed up a 150-year-old farmhouse, cultivated a large garden, cut his own firewood, and even grew the hops for his own beer. Gradually, however, his do-it-yourself pluck lost steam in the face of the sheer amount of work this lifestyle demanded. He earned money, bought things, and let the garden go to seed. (He never gave up his love of good food, though – he still cooks nearly everything from scratch.)

Shortly thereafter, Janet arrived on the scene, near the head of the nineties’ organic movement, and reinvigorated the farm with a new idealism. Like many others drawn to organic farming, whether as consumers or producers, she believes that the way we are practicing our most fundamental livelihood – farming – is dangerously unsustainable. Pesticides are killing the ecosystem and causing an epidemic of cancers and other illnesses; chemical fertilizers are fouling the water and depleting non-renewable fossil fuels; monoculture and mechanization are causing precious topsoil to erode away; agribusiness is ruining the family farm and turning the countryside into a wasteland; and corporations armed with genetic engineering are gambling irresponsibly with the building blocks of life while claiming ownership of our common birthright. This approach, she believes, is unhealthy for the environment, for community, and for the individual.
 
Organic agriculture is about more than just pesticide-free produce. To obtain organic certification, a farmer also can’t use chemical fertilizers or grow genetically modified organisms, can’t inject their livestock with hormones or antibiotics, and can’t feed them the rendered remains of their kin. Animals can only eat organic feed, and must have access to sun, fresh air, and adequate room. This is not just ethical, but also practical; an animal not doped up on a pharmacological cocktail has to be kept healthy the old-fashioned way – with quality food, exercise, and minimal stress. Similarly, once the soil has kicked its drug habit, farmers have to look to different ways of growing crops. Many of these techniques – such as fostering biological diversity, using native plants, or reintroducing beneficial insects – are inspired by processes in the natural world. Nature is the muse for a yet more profound version of organics – permaculture, a comprehensive philosophy that seeks to design sustainable systems, for agriculture and human culture alike, that work with nature as much as possible, rather than against it. When Janet intersperses wildflowers amongst the vegetables, for instance, she’s employing a permaculture strategy that attracts bees to pollinate all of her plants.

But maybe all these principles and high ideals are just an excuse to eat real good food. There’s nothing like the herbs we pick straight from the garden, and the eggs are rapturous – yolks so golden, the uninitiated often suspect food colouring has been added. But they’re our reward for letting the hens have the run of the farm. Healthy hens — and a healthy ecosystem — equals tasty food and healthy people. It’s a simple equation. Much simpler than trying to decide what’s a weed and what’s not.  
©Sean Butler 2004
Published in the Ottawa Citizen, Citizen’s Weekly, September 5, 2004

Posted by: seanmichaelbutler | March 4, 2010

A PILLAR OF AIR

I stood at the top of Mont St. Pierre, the mountain, and looked down over the side of the cliff at Mont St. Pierre, the town, 1400 feet below. A few stray sounds – blues music from the bar, the cough of a rusted muffler – drifted up like memories. This little crossroads of settlement – more a confluence of natural features like river and sea, valley and mountain, than manmade roads – had been my world for the past ten days. Now I had ascended far above it and could see it in its whole.
It looked little changed from the village I saw in a framed black and white photograph hanging inside the tavern, taken many years ago from this same spot. How could that photographer have guessed that one day people would launch themselves off this cliff and soar over the rooftops below? I found it equally hard to believe that I was about to do precisely that.
I had come to this tiny village on the north coast of Quebec’s Gaspé Peninsula as part of a crew shooting an episode for a travel and adventure TV show. It was our last day and the producers had granted the crew members time for a quick hang glide. I would be flying tandem with Patrick, a devotee of the sport since its birth.
“I grew up by the coast in France,” Patrick quietly explained, “watching the seagulls. Then I read Jonathan Livingston Seagull and wanted more than ever to fly.” When he saw an early hang glider on TV, he wasted no time in writing away for one. Soon the package arrived in the mail and he carried it up to the top of a small mountain and put it together. “I sat there all day, looking out over the valley below, trying to find the courage to fly. Finally, it started to get dark, and I didn’t want to walk back down, so I put on my harness and flew to the bottom. I will never forget that first flight.” Now, after countless subsequent flights, he tries to recapture the exaltation of that first step into thin air the best way he can: vicariously, by taking others for their first flight.
For tandem flights, the takeoff is crucial. Patrick strapped on our harnesses and told me we were going to practice “the takeoff run”. I put my arm around him and leaned against his side like a drunk. The physical intimacy seemed appropriate with a man I was trusting my life to. We held this position for a moment, imagining a cliff before us. Then I felt his weight shift almost imperceptibly forward and I moved with him.
“Run! Run! Run! Run!” he shouted as we broke into a mad dash toward the imagined cliff’s edge. All week long, I had watched dozens of others practice this same drill, and wondered what was going through their heads. Now I knew: denial. Still, if we should actually go through with this, I reasoned, I would stand a decent chance of living to see tomorrow. After all, Patrick had been flying hang gliders since before I learned to walk.
The waiting was the worst part. After I got hooked into the glider and walked up to the edge of the cliff, there was nothing to do but wait for the right headwind. There was still the faint hope that it would never come and we would have to cancel. While I nourished that thought, I had far too much time to consider the folly of this endeavour. What I was about to do contradicted all my instincts. Nor could I find refuge in feigned ignorance of my impending fate – for there it was, the void, yawning irrefutably before me.
My thoughts drifted to the bottom of that void, at the base of the mountain, where, for some reason I dared not perceive, stood a tiny graveyard. It was its location that troubled me; it seemed to mark the exact place you’d fall if gravity won out over aerodynamics.
I tried to reassure myself with the encouraging precedent set by all the fatality-free flights I had personally witnessed over the course of the week. After all, there had only been that one crash. And both the pilot and his unlucky passenger had walked away from the accident rattled, but unhurt.   
Finally, a breeze (or was it a zephyr?) idled by and stirred the little ribbon indicating wind direction that Patrick had been watching like a hawk. He lifted the control bar of the glider and said three words to me, as we had practiced in training. They now sounded like the three most ominous words I had ever heard.
“Are you ready?”
I had watched an earlier first-timer not hear this question, transfixed as she was on that empty space before her. If there was one thing I was going to do, it was avoid a similar embarrassment in front of the assembled onlookers. “Yes,” I gulped.
I pressed my body into his, waiting for that hair’s width shift forward that would signal the beginning of our run. Once we started down the short ramp that ended in open air there would be no going back.
Then I felt it and everyone yelled, “Run! Run! Run!” – a collective urge willing us airborne. But I didn’t need to be told. I ran. I closed my eyes just as my feet kissed the sweet earth goodbye.
***
The Wright brothers were not the first to fly. Before their historic first powered flight, pioneering humans had joined the birds in the sky with unpowered gliders made of canvas and willow wands.
The first person to fly a heavier-than-air craft assumed his heroic place in history reluctantly. In 1853 Sir George Cayley, then 80, persuaded his coachman to climb into his “governable parachute” and soar some 900 feet. People ran to where the prototype crashed to hear the shaken coachman declare, “Please, Sir George, I wish to give notice. I was hired to drive and not to fly.”
Nearly 40 years later Otto Lilienthal picked up with more enthusiasm where the coachman had left off. The world’s first true pilot of the skies, he became an expert at controlling his gliders by shifting his weight. He completed more than 2000 flights in increasingly sophisticated gliders before, in 1896, the odds caught up with him and he was killed when a sudden gust of wind sent him crashing to the ground.
Other inventors were inspired by Lilienthal’s example and built gliders of their own, with varying success. But glider experimentation was brought to a virtual halt when, a few days before Christmas in 1903, the Wright brothers flew with an engine from the beach at Kitty Hawk. The world became captivated by speed, size and altitude. In a mere blink of an historical eye, the Wrights’ Flyer evolved into the X-1, which broke the sound barrier in 1947.
But somewhere between those two points of development the spirit of flight that moved the ancients to soar with the birds was partially lost. Cockpits became enclosed, pushing buttons replaced shifting weight, and pilots began flying more with their heads than their hearts. People became immune to the wonder of flight. It would take gliding to help them rediscover it.
***
Yvon Ouellet could be Captain Mont St. Pierre. Born and raised in the little Gaspesie village, he has all the characteristics of a native superhero. He can even fly.
“About eight year ago,” he explained in strained English, “I made the decision to wing my life away, like a bird. Since that time, I never have regret.” He opened his unusually long arms heavenward like a pair of great wings and gave voice to a cry that demands daily venting: “Thank you, Cosmos!” 
People say he spends more time in the air than on the ground, though physically he is an unlikely candidate for defying gravity. He is, quite simply, built like a giant. With his long, curly red hair, cleft chin and prominent brow, he could have just stepped off a Viking raiding vessel. Yet such fearful impressions are banished the moment he flashes his perfect, superhero smile.
Yvon makes his living taking tourists on tandem hang glider flights, and when the weather conditions are right, he’s kept busy with a constant backlog of customers. He winters in Mexico and Guatemala – not to vacation, but to keep flying more customers.
While other hang glider pilots eke out a living in his shadow, Yvon is clearly at the top of the food chain. Seeing him in action helps explain his appeal. As a grand finale to his flights, he dives in low over town and executes a series of roller coaster twists and turns, invariably producing screams of terrified delight from his captive passengers, cries that he augments with loud “ye-haw’s!” and crow “caw’s!” In this way, he manages to broadcast an advertisement to potential new customers even while flying with his current one. Add to that a contagious laugh, his reputation as an impeccable pilot, and, of course, that winning smile, and Yvon’s service is too potent for most to resist.
***
There is no single inventor of modern hang gliding; rather, the sport emerged from the contributions of a disparate many, tinkering in workshops and testing their creations in isolation. Volmer Jensen was one of the first, building and flying his first glider in 1925. Then Francis Rogallo, an American, designed a light, flexible wing.
Fast forward to Australia in the 1960s, where the sport of flying from gliders towed behind speedboats was gaining popularity. Part of the sport involved performing daredevil acrobatics from a horizontal “trapeze bar” below the wings of the glider. John Dickenson, one of these aerial entertainers, married Rogallo’s superior wing with this bar, and the basic design of the modern hang glider was realized.
Only, no one knew it yet.
It took a great showman, and a small accident, to take it the final step. Bill Bennett, also an Aussie, was flying with the new Rogallo wing one day when his towboat ran into a sand bar. He had no choice but to release his line and found to his happy surprise that he was able to guide his craft safely to the water below. From then on a release became part of his performance, and from the reaction of the awed crowds who watched he knew he was onto something. He took his show to America in 1969, his tour culminating in a triumphant flight around the outstretched arm of the Statue of Liberty. Gliding had at last recaptured the popular imagination.
***
The new sport of hang-gliding caught on early in the tiny village of Mont St. Pierre. Situated halfway down the coast of Québec’s Gaspé Peninsula, this community of 290 people is tucked between two rolls of the modest Chic-Choc Mountains and stretched along half a mile of shoreline overlooking the Gulf of St. Lawrence. On a clear day, you can just make out the Laurentian Mountains on the far side of the water.
Behind the village stretches lush green farmland, bisected by a small tree-lined river that ambles up the valley. Farther inland is the Parc québécois de la Gaspésie, where woodland caribou forage in the shadow of 4000-foot mountains that remain covered with snow for most of the year. But the vast majority of the area’s human inhabitants live by the sea, joined by the coastal highway that circuits the peninsula. In the summer months, a steady trickle of tourists round the bend on that highway and pull into Mont St. Pierre to watch hang gliders and, more recently, paragliders, launch from the neighboring mountain.
That same stretch of highway serves as the village’s main drag and social meeting spot. Telephones are redundant, as you are bound to pass the person you want to talk to several times a day on this strip of road. If they still manage to elude you, chances are you’ll find them at the village’s one bar, Les Joyeux Naufragés (The Happy Castaways), that night. There, you will see Quebec culture at its finest. The private world soon dissolves in the cozy common space of the bar. Patrons dip behind the counter to change the music while visitors who stay the night are often treated to a stream of drinks bought for them. With such inherent hospitality to be drawn upon, the bar needs no wait staff, and will only hire one, lured willingly from the crowd, on the busiest of nights.
The one night a year sure to be bustling is the party for the visiting pilots of the annual Hang-Gliding Festival. Usually held at the end of July or beginning of August and lasting a week, the festival has been a big event in town since it began in the 1970s, drawing many pilots from New England, Quebec and Ontario. When the wind blows in from the sea and creates an updraft off the side of the mountain, the flying conditions are perfect and a dozen or more gliders can hang in the sky for hours. But the pilots come for rarer pleasures than just a good breeze: beach fires at night, the camaraderie of a shared passion for flying, the joie de vivre of the townspeople, and the rhythm of a place where nature still takes precedence over strip malls.
Somehow, years of dreams realized in the sky above have rubbed off on Mont St. Pierre. The freedom of flight has slowly worked its magic into the people who live with it daily, permeating their dreams at night, and imbuing them with a subtle joy that holds the whole village in a kind of enchantment, as if it’s not quite part of the regular world, like some hidden Tibet of French Canada.
***
When I opened my eyes, the jagged rocks of the cliff face were falling rapidly away and I was floating above the earth as if in a dream. Perhaps my eyes remained closed the whole time, and I did dream it all – such was the detachment from reality that I felt. I had never experienced anything like this, and my brain simply refused to believe what it was seeing.
The trees, houses, river and fields below all looked laughably fake, like something out of a model railroad landscape. I felt a reckless sense of invincibility, as if I could dive toward the ground and perch on a treetop by simply willing it.
Once I remembered to breathe, I relaxed my tensed muscles and began to trust the counterintuitive notion that the air could hold our heavy bodies as if it were a solid structure – a pillar of air. I felt little movement, just a serene suspension and the wind in my face, as if I was 13 years old and coasting on my bike down the long hill near my home. I wanted it to last for hours.
“How do you steer?” I asked.

Patrick gave me a demonstration: push the bar to one side, the glider banks in the opposite direction; push the bar away, it slows down; pull the bar closer, it dives and speeds up. This last maneuver really got my attention. Suddenly the ground seemed a lot more tangible. 
Like the wall of consciousness at the end of a dream, the ground continued its inexorable advance. While we could buy a little time to float between heaven and earth, gravity always wins in the end. Instead of alighting like a bird, we landed gracelessly on wheels, my body skidding through the cut grass like a baseball player sliding into home plate.
The spell was broken. I was back in reality. But I had flown – really flown – for the first time, if only for five minutes. Compared to this, all those routine flights in jumbo jets were no more than glorified bus rides.
I had lived the dreams of the ancients. This area’s ancient inhabitants, the Mi’gmaqs, had named it “Gespeg”, meaning “land’s end”. How right they were.

Copyright Sean Butler 2001

Published in The Ottawa Citizen, August 19, 2001

Posted by: seanmichaelbutler | March 4, 2010

PEOPLE POWER

One side of an age-old debate was bluntly expressed last week by Canadian Alliance leader Stephen Harper when, in response to the government’s decision to follow the majority of public opinion and stay out of the war on Iraq, he said, “I don’t give a damn about the polls.”

He was referring to polls like the one conducted by Ipsos-Reid, for the Globe and Mail and CTV, which found that only 15% of Canadians thought Canada should contribute troops to a unilateral attack on Iraq.

Apparently, Jean Chretien did give a damn about this, and acted accordingly.

This debate over the legitimacy of public opinion stretches back in time at least as far as the birth of the modern world’s first democracy, in post-revolutionary America. Harper probably would have agreed with one of the American Constitution’s authors, Alexander Hamilton, who said in a speech:

“The voice of the people has been said to be the voice of God; and however generally this maxim has been quoted and believed, it is not in fact true. The people are turbulent and changing; they seldom judge or determine right.”

It is indeed odd that Harper – the leader of a party that advocates (to paraphrase its own policy document) binding referenda, the recall of unpopular MPs, the representation of constituent consensus over party and personal views, and a high level of citizen participation in the democratic system – would hold public opinion in such low regard.

Chretien, on the other hand, would have found a supporter in another of America’s founding fathers, Thomas Jefferson, who once wrote to a friend:

“We both consider the people as our children. But you love them as infants whom you are afraid to trust without nurses; and I as adults whom I freely leave to self-government.”

My guess is that most citizens would prefer to be treated as adults, but a number of world leaders have exhibited an ideology in the past weeks that falls more into the Hamilton camp than the Jefferson.

Tony Blair has taken what he calls a “moral” stance for war, irrespective of the more than 9 out of 10 Britons who oppose a war not mandated by the UN. John Howard has sent Australian troops to fight in Iraq, despite polls showing only 6% of Australians support such action without UN authorization. Italian Prime Minister Silvio Berlusconi has thrown his support behind the U.S.-led coalition, ignoring polls showing a 74% opposition to an American intervention in Iraq. Further polls have shown that upwards of 80% of Spaniards oppose violent regime change in Iraq, yet Jose Maria Aznar has lent his country’s support to the war cause. Perhaps the Japanese Prime Minister Junichiro Koizumi summed up the view of these leaders best when, brushing aside the over 80% of Japanese who are against removing Saddam Hussein’s regime without UN blessing, he said, “There are times when we make mistakes following public opinion.”

Underlying this worldview is a profound cynicism about the decision-making capabilities of the “masses”. They are accused of being emotional, fickle, superficial, selfish, ignorant, and unwise.  This expression of elitism is deeply undemocratic, as it presumes that power is best left to a small group of individuals, who know what’s best for everyone else. This is not democracy; it is oligarchy.

The “fickle masses” are often written off by the same business and political elites who promote policies – such as cuts to education or the consolidation and narrowing of media voices – that inhibit the ability of the public to make informed choices in the first place. Politicians ignore public opinion for decades and then wonder why fewer and fewer people bother to vote in each election. They chalk up this apathy to some inborn weakness of the “common person” rather than admit that they’ve been deaf to the public’s wishes all along.

Most people’s beliefs are, in fact, anything but fickle. Opinions may change, sensibly enough, as new information becomes available, but most people’s core ideas are well-reasoned and very slow to change. One need only look at the string of broken election promises to see that, if anyone is fickle, it is our leaders. This is because people’s opinions are generally governed by their conscience, which can’t be easily compromised, while politicians’ “opinions” are often opportunistic realpolitik – compromised daily in the complex interplay of diplomacy.

“Leave it to the experts,” is the message ordinary (as if there is such a thing) Canadians consistently hear from their leaders. How conveniently this message removes the messier aspects of participatory democracy from the job of formulating policy, and consolidates the power of those at the top. If the citizenry is really too stupid to know what’s right for itself, then perhaps they’re too dumb to even choose the right leaders and we should do away with elections too.

The truth is, people are extremely well-informed. Satellite TV, the Internet, higher education, and access to literally thousands of magazines, journals, newspapers, and books, have combined to shape increasingly sophisticated opinions about the world. People have never been better equipped to make informed decisions than they are now. 

It is tragically appropriate that the issue of war in Iraq should also show the true colours of some leaders over the issue of the legitimacy of public opinion. The belief that foreign powers can intervene in other cultures and successfully “nation-build” – the myth of benevolent imperialism, popularized by such authors as Rudyard Kipling – is intimately tied to the notion that the few know what’s best for the many.  

Unfortunately, some citizens do accept this logic of father-knows-best. They put strong personalities ahead of strong policies. The same poll that found 9 out of 10 Britons against Tony Blair’s war policy, also found that about half of respondents continued to admire him, agreeing with the statement, “He does what he believes to be right for Britain.” The virtues of moral clarity and absence of doubt are sometimes allowed to trump all other concerns. The desire to offload personal responsibility to a higher power, and to be provided with clear and simple answers to complex problems, can cause people to willingly surrender their democratic freedom. It’s a chilling reminder that some tyrannies are welcomed with open arms by the populace – in the beginning at least.

We need to learn to admire a new kind of strength in leadership – the kind of leader who is strong enough to submit her own ego to the service of the public will. We need to elect leaders who desire nothing more than to truly represent the hopes and dreams of their constituents as completely as possible. We urgently need the kind of leaders Ralph Nader is describing when he says that, “the function of leadership is to produce more leaders, not more followers.”

We are right to want leaders with conviction; but they should be leaders who hold a conviction in the wisdom of the people above all else.

Sometimes, as Koizumi pointed out, this will lead us to “make mistakes.” But, in the end, no one knows policy better than the people affected by that policy. Abraham Lincoln recognized his own limitations as a leader when he asked rhetorically, “Why should there not be a patient confidence in the ultimate justice of the people? Is there any better or equal hope in the world?”

Public opinion is the very essence of democracy. Any effort to marginalize it is a blow against the ideal of self-rule. The full potential of democracy is only served when the public’s opinion takes its rightful place, centre stage.

Copyright Sean Butler 2003

Published in The Ottawa Citizen, April 5, 2003

Posted by: seanmichaelbutler | March 4, 2010

MANEJAR IN MEXICO

I was driving through a forest in Mexico at night when I spotted the flashing lights of the policia in my rear view window.
“I think they want us to pull over,” said my travel partner, Vikki Mara.
“But I’m only doing 5km/h,” I said.
I stepped on the brake and waited for the cop to approach. The air was brisk with the scent of pines as it wafted in my open window. We had found a beautiful campground by a lake just outside of the town of Creel, deserted save for the Tarahumara Indian who had emerged from a smoky hut to collect our pesos at the entry gate. After having dinner in Creel, we had just been returning to our campsite when the police appeared. It was our third night inside of Mexico, our first in the wild region known as the Barrancas del Cobre – or Copper Canyon.  
“Where are you going?” asked the cop in perfect English.
“To our campsite. This is a campground, right?” I gently reminded him.
This seemed to throw him off. I suspected that a camper hadn’t been spotted here since the week of Semana Santa.
“Well, where did you come from?” he soldiered on.
“Canada,” I proudly announced.
“What? In this car?” he snorted incredulously, stepping back to get a better view of our dirt-encrusted Subaru Loyale station wagon, the back seat containing two large dogs and an Israeli traveller we’d picked up in town.
Already the dire warnings I had received about driving in Mexico seemed to be coming true. But, up till now at least, it hadn’t been that bad. 
For starters, we had been allowed to roll over the border without even stopping – a free zone extends for about 20 kilometres south, which you can enter and stay in for three days without any formalities – but as we continued south we soon came to a large checkpoint where we had to get out and present our documents. For most tourists, this shouldn’t have presented much of a problem – all you need is a passport, driver’s license, vehicle ownership, and a major credit card, which is photocopied to guard against you selling your car while in Mexico. But our situation was complicated somewhat by the fact that, somewhere in eastern Texas, Vikki had left her wallet on the roof of the car after filling up at a gas station and then driven for five hours before discovering it missing. She lost all her and the car’s documents, except her passport, but fortunately we still had photocopies of everything. These, after being shuffled around to different teller windows, eventually passed inspection and were duly ennobled with official stamps.
We had obtained certificates of health and vaccination records for the dogs, but no one ever asked to see them. Neither were they interested in seeing any car insurance. Car insurance is not mandatory in Mexico, but it is highly advised, since, without it, if you are in an accident the police will throw you into jail first and ask questions later. Canadian and American insurance policies are rarely valid in Mexico, but there are scores of companies at the border who will sell you insurance. Alternatively, you could do what we did: buy your insurance over the phone.
Since Vikki’s car – a 1991 Subaru Loyale – wasn’t worth much anyway, we opted to just get basic liability. We took Kemper Mexico Seguros – whose California-based broker is ADA VIS Global Enterprises – up on their offer of US$58 each for one year of coverage, and for an additional US$25, Vikki got access to their Mexico Legal Service, which can provide legal advice or a lawyer if necessary.
The red tape behind us, we drove straight to Chihuahua. The toll highway was nearly as good as any American interstate, with noticeably less traffic and slower speeds. I had read that the new toll highways in Mexico can be expensive, so I wasn’t surprised when we had to shell out 110 pesos ($20) to cover the 350 km journey.
I was shocked, however, when we stopped to fill up the gas tank and the numbers on the pump spun crazily up to nearly 300 pesos ($50). At first I thought I had already fallen victim to one of Mexico’s well-publicized gas station rip-offs, but when I checked the per litre cost, I found it was equivalent to about $1 a litre.
Our next stop was Creel, some 200 km southwest of Chihuahua. To get there, we had to leave the artificial world of the toll highway, or cuota, and join the real Mexico on the free highway – the carretera libre. While further south these roads can become choked with diesel-spewing buses and trucks, and passing becomes a game of life and death played with all too much nonchalance, in the northern states traffic is at a minimum and they make for relatively stress-free driving.
Like any secondary highway in the States or Canada, travel time stretches considerably on the free highways as you pass through town after town. But unlike towns north of the Rio Grande, every community in Mexico worth its name builds speed bumps, or topes, at the entrance and exit to town, with a few sprinkled in between for good measure. Usually there is fair warning about an approaching tope, but not always. Fail to slow down for one of these, and you’ll soon have a healthy respect for what they can do to the bottom of your car and the top of your head. Most Mexicans slow down to a crawl before easing their tires over the steep slopes of these overgrown speed bumps. Inevitably, thinking you have passed the last of the topes, you will eagerly start accelerating back to highway speeds, only to slam on the brakes yet again as one last tope rears up before you.
Just outside of Creel, we found our beautiful deserted campground by Lake Arareco. Although Mexico has few official campgrounds, camping is spectacularly easy. Would-be campers benefit from the fact that Mexicans, while not big on camping themselves, are seriously into picnicking. One is rarely far from a little clearing in the pines where someone has been kind enough to pour some concrete fire pits with rebar grilles, and one is welcome to spend the night, free of charge. A willingness to ask locals for directions and, of course, your own transportation are essential prerequisites for taking advantage of these hidden gems. Without a car, most of these sites are about as accessible as Tierra del Fuego.
The same can be said for the beach, which, by law, is public property and can be camped on for free. If your ideal beach does not include hordes of people, portable stereos and wandering vendors, then having your own vehicle gives you access to some of the best – and least visited – stretches of sand on both coasts.
Other camping options include that beach away from the beach – the balneario – a kind of outdoor water recreation resort, always popular with Mexicans eager to escape the heat of the city. Shortly after dark, however, these places empty of people and you can often camp there for free.
Indeed, upon asking an attendant if you are allowed to camp on any parcel of land set aside for some natural wonder, such as a waterfall or a lake, you are likely to receive a shrug and a partly mystified “¿Por qué no?” – “Why not?” – and be left to camp wherever you wish.
As a last resort, you can always drive out of town and, whenever you come to a crossroads, take the smaller road, until, after a few such turnoffs, you find yourself on a tiny dirt track in the middle of nowhere. You then need only find a space wide enough on the shoulder to park and set up your tent. I did this several times and my solitude was, at most, only momentarily interrupted by a truck or a rider on horseback passing at dawn.
Mexicans, by Canadian or American standards, are very accepting of strangers camping out. Vikki and I once set up our tent in an empty lot next to a warehouse on the outskirts of Guadalajara. We were just downing a few nightcaps of tequila when a police pick-up truck pulled up beside us and an officer leapt out to ask if we were alright. We explained we were just camping and the police drove off. The next morning, as we still lay in our sleeping bags, another cop dropped by to check if we were okay.
But the cop we were now faced with in the campground outside of Creel didn’t seem to be motivated by a desire to help us. The beam of his flashlight probed every inch of our loyal Loyale. I was acutely aware of the fact that we were completely alone at night in a foreign country with two men wearing guns. After eyeing us suspiciously for a few moments, though, he cracked a smile and waved us on, apparently figuring no one would make the preposterous claim of having driven several thousand kilometres in this piece of junk if it were not the truth.
As we drove away, I felt we had passed some sort of test, and that we were now, for real, in Mexico. “¡Bienvenidos a México!” I felt the cop should have shouted as we left, presenting us with a commemorative sombrero and a bottle of tequila.
Later I would learn why someone driving through the trees late at night would be regarded with suspicion in the Copper Canyon. The trackless wilderness of the region is a paradise for growing marijuana and opium poppies, and the area is crawling with drug producers, smugglers and, correspondingly, federal narcotics agents.
We decided to plunge into the heart of drug country, and set our sights on Batopilas, a town that sits at the bottom of a canyon, at the dead end of the only road that cuts into the Parque Natural Barrancas del Cobre. 
An Aussie traveller named Jerry warned us not to attempt driving there ourselves: “It’s 1000 feet straight down, no guardrails. One wrong move and you could easily kill yourself. Sometimes,” he elaborated about his recent bus trip there, “we’d meet another vehicle coming around a blind curve and both vehicles would slam on their brakes and sort of skid around each other.”
The weight of the evidence was convincing us that we should just take one of the thrice-weekly buses – until we spoke to Casy, an American living in Creel.
“It’s dangerous,” he conceded, “there’s no search and rescue if something goes wrong. But it’s also one of the best things you can do in this part of the world. The road is half the trip.”
We decided that if we were to die on this road, we’d rather do it to ourselves than hand the honour to some bus driver.
The first half of the journey was along a winding but newly paved road; then we pulled off the blacktop and onto a dirt road that led to the edge of the canyon. From there we began the slow, spectacularly tortuous descent to the river far, far below. I sat perched on the passenger window, gaping at the breathtaking view. Some of the canyons in this area hold more air than Arizona’s Grand Canyon. My mind boggled at the sheer amount of space defined by the canyon walls; space that seemed palpable enough to touch – if there wasn’t such a giddy aura of danger surrounding it. The far side of the canyon seemed so far away, it was almost like observing something astronomical, rather than terrestrial.
Vikki drove for about an hour, never once touching the gas pedal. We never saw another vehicle, and the deeply grooved dirt road enforced its own slow speed limit. We eventually rolled across a bridge and into the streets of Batopilas just as darkness fell. It had taken 6 hours to drive 130 km.
Batopilas is an odd place. Semi-tropical in climate, it feels like the point where the old world meets the old west: men in white cowboy hats and shiny new pick-up trucks cruise down tiny streets lined with a continuous wall of crumbling white stucco homes. Empty shells of deteriorating buildings are a common sight; after silver was discovered in the 1600s, the town swelled to 20,000 people. Now, some 1,100 residents live in what has become a major bottleneck for the drug trade. Drugs come in from the surrounding canyonlands and are shipped out on the road we had just arrived on.
For someone used to a clear conceptual line being drawn between commercial and residential space, Batopilas can be an eye-opener. Signage is at a minimum; the open door you are peering into, waiting for your retina to adjust to the gloom within, could lead to someone’s bedroom, or it could be a tiny shop – or both.
The town has officially been declared “dry”, but it was a Saturday night and the beer was clearly flowing in every dark corner and every pick-up with its doors hanging open and its stereo playing. Perhaps prohibition had been declared because of the influence of the local Tarahumara Indians, who go on ritual drinking binges lasting days, during which time normal societal laws are put on hold. Even murder during a tesguino, as the drinking festivals are called, is thought of as no more than an unfortunate accident.
We joined in the illicit beer consumption, buying Tecate from our hotel owner, who was doing a thriving drive-through trade out his window – narrow streets with no sidewalks make every business a potential “drive-thru”. The next day, just to add to the sense of being in some mutated old west setting, we went panning for gold in the Batopilas river, and Vikki turned up a tiny nugget.
But we eventually tired of dust and cowboy hats, and made a bee-line for the coast. Around Chamela we found just what we had been looking for: a wide stretch of sand, deserted save for a few fishermen, but backed by a small village with a handful of restaurants. In my enthusiasm, I drove straight for the sand and, taking note of a predecessor’s tire tracks criss-crossing the beach, hit the button for 4-wheel drive and ploughed ahead onto the playa.
No sooner had all four wheels touched the beach than we came to an abrupt stop. It was as if my tires had been suddenly cast in concrete. We weren’t going anywhere. Which, I decided as I popped open a cold beer and watched the sun set into the ocean, wasn’t such a bad thing.

Copyright Sean Butler 2002

Posted by: seanmichaelbutler | March 4, 2010

LIFE, LIBERTY, AND A LITTLE BIT OF CASH

Three years ago, Jay Hammond figured his time was nearly up. At least he’d led a full life: Marine Corps fighter pilot in World War Two; bush pilot in Alaska; master hunter and fisher with the U.S. Fish and Wildlife Service; and over two decades of political service, culminating as the Governor of Alaska. Shortly after retiring from office, he dreamt that he’d be granted 20 more years of life at his beloved Lake Clark homestead, to do penance for whatever “sins of omission or commission” he may have inflicted. When those 20 years expired in 2002, Hammond waited stoically for the end.

But his premonition proved false. Hammond may be slowing down, but he’s not stopping yet. Just last year, after crashing a meeting of the Conference of Alaskans, the octogenarian flew to Washington, D.C., for the third annual gathering of the U.S. Basic Income Guarantee (USBIG) Network – a group advocating a guaranteed annual income to be paid by the government to every citizen. Hammond, far from barging in uninvited this time, was its keynote speaker.

He came to talk about the Alaska Permanent Fund – the world’s only basic income – which he wrestled into existence in 1976. The idea first came to him back when he was a Republican representative for the fishing village of Bristol Bay. He noticed a lot of wealth flowing out of the village – in the form of salmon – while the local people remained poor. His idea to collect a three percent tax on all fish caught by non-residents wasn’t that unique, but what he proposed to do with the revenue was: redistribute it equally to each resident of the village. Ten years later, Hammond was Governor, and had the chance to test his ideas at the state level.

Whereas fish had been the source of wealth for Bristol Bay, oil was the cash cow for Alaska in general. Hammond wanted to return a portion of the state’s massive oil royalties directly to its citizens. A referendum, a constitutional amendment, and several years of political wrangling later, he had his wish.

The Alaska Permanent Fund invests at least a quarter of the state’s mineral revenues annually. Depending on the success of those investments, a dividend is paid to each Alaskan – one citizen, one share. The dividend peaked recently at nearly US$2000 – that’s US$8000 extra a year in the pockets of a family of four. Thanks in part to these payouts, Alaska has the smallest gap between the rich and poor in the United States. (Although everyone gets a dividend, the rich lose more of it to taxes than do the poor.)
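
The mechanics are simple enough to sketch in a few lines of Python. Everything below – the payout rule, the dollar figures, the population – is a hypothetical stand-in chosen for illustration, not the Fund’s actual statutory formula:

```python
# Simplified illustration of a resource dividend: a share of royalties is
# saved, the savings are invested, and part of the investment earnings is
# split equally among residents. All figures here are hypothetical.

mineral_royalties = 2_000_000_000   # state mineral revenues this year
deposit_share = 0.25                # at least a quarter goes into the fund
fund_earnings = 1_300_000_000       # investment earnings on the accumulated fund
payout_share = 0.5                  # assumed portion of earnings paid out
population = 650_000                # one citizen, one share

new_deposit = deposit_share * mineral_royalties
dividend_per_person = payout_share * fund_earnings / population

print(f"Deposited into the fund this year: ${new_deposit:,.0f}")
print(f"Dividend per resident: ${dividend_per_person:,.0f}")
print(f"A family of four receives: ${4 * dividend_per_person:,.0f}")
```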

Understandably, the people of Alaska have come to love their Permanent Fund, which has now grown to almost US$30 billion. Hammond has received accolades from groups as diverse as environmentalists, developers, and sportsmen. After being honoured by his old nemesis, the Teamsters Union, he marveled, “what can I expect next…an award for my contributions to public morality, co-sponsored by Jerry Falwell and Larry Flynt?”

Praise has also come from Nobel laureate economist Vernon Smith, who called the Fund, “a model governments all over the world would be well-advised to copy.” In particular, both he and Hammond single out Iraq as an ideal starting point. “This is the time and Iraq is the place,” said Smith, “to create an economic system embodying the revolutionary principle that people’s assets belong directly to the people and can be managed to further individual benefits and free choice without intermediate government ownership.” 

Shortly after taking over his new post in Iraq as Chief Administrator, Paul Bremer spoke in favour of an Alaskan-style dividend program for post-war Iraq. And according to Alaskan Senator Ted Stevens, even President Bush was “very much interested” in the idea when he spoke to him in 2003.

The success of the Fund has attracted attention from other oil-rich regions. Alberta Premier Ralph Klein, in a December 2000 Calgary Herald story, was reported to be considering following Alaska’s lead. The idea was backed by academics, consumer advocacy groups, the Canadian Taxpayers Federation and the Alberta Liberals, and is still being discussed as a responsible way to deal with Alberta’s budget surpluses. 

“There’s been a revival in [basic income] the last six years, particularly out of Europe,” says Mike McCracken, a founder of the Ottawa-based economic research firm, Informetrica. Ireland, according to Scottish Parliamentarian Robin Harper, “is now seriously discussing the introduction of a citizen’s income…”

While these initiatives simmer, there is a country that is about to unseat the Alaska Fund’s claim to uniqueness. That country is Brazil, which, starting this year, plans to begin to phase in the world’s first national basic income.

It’s not surprising that Brazil should be the first nation to do so; while reasonably developed, it has one of the most unequal distributions of wealth in the world. (Another industrialized country in the wealth inequality basement is South Africa, where, not coincidentally, a coalition representing over 12 million people is propelling basic income onto the government’s agenda.)

The plan in Brazil is to extend the basic income first to the nation’s most needy and then, over a period of several years, to the entire population. It hasn’t been decided yet how much will be paid, but it’s been suggested that R$480 a year (US$180) would be a good starting point. It doesn’t sound like much, but for the 22 percent of Brazilians who survive off less than US$2 a day it would make a huge difference, boosting their incomes by at least a quarter. A basic income for Brazil could be, in the words of its former President Fernando Henrique Cardoso, a “realistic utopia”.

 It just so happened that the father of the Brazilian basic income, Senator Eduardo Suplicy, also presented at the USBIG conference last year. During his speech, he noticed Jay Hammond sitting in the front row, and, to warm applause from the assembled crowd, descended from the stage to shake his hand. The two basic income pioneers had at last met.

Yet Hammond and Suplicy make an odd couple. The Republican Hammond, with his Hemingway-like white beard and grizzly build, wears his far north ethos of self-reliance with pride. Suplicy, on the other hand, a founding member of the left-wing Worker’s Party and a U.S.-trained economist, has the dignified appearance of an intellectual and professional politician. It’s tropical socialism meets arctic capitalism, yet somehow, when the two come together over basic income, they actually get along.

———————————————–
Perhaps the first basic income – albeit limited to free adult males – was in ancient Greece. Some 2,500 years before Hammond’s creation, dividends from the lease of the silver mines outside Athens were distributed to the city’s citizens. A mere 200 years ago, Thomas Paine, the pamphleteer and political philosopher whose words are credited with inspiring popular support for the American Revolution, argued that a portion of the proceeds from agriculture belongs to everyone as the “natural inheritance” of land. He detailed a plan, involving inheritance taxes, by which a fund could be established to compensate every adult on a yearly basis for the privatization of land. And it was around this time that England, in an attempt to deal with the dire poverty of the early Industrial Revolution, started paying benefits to the working poor in a system called Speenhamland. But, because the benefits were so low, and were only paid to those with a job, they ended up becoming a subsidy to employers and were abandoned in 1834.

The basic income concept reached its apogee in post-Second World War America. In 1946 Nobel laureate economist George Stigler originated the idea of a Negative Income Tax (NIT), and in 1962 neo-liberal economist Milton Friedman – also a Nobel winner – along with his wife, Rose Friedman, refined and popularized the idea in their book, Capitalism and Freedom. The NIT concept is elegantly simple: if your income falls below a certain threshold, you receive a refundable tax credit that is a percentage of the shortfall. For example, say the threshold is set at $5,000, and the negative tax rate at 50 percent. If you earned only $3,000, your credit would be 50 percent of the difference between $5,000 and $3,000, which equals $1,000, bringing your total income up to $4,000. If you earned nothing, your credit would be $2,500, effectively establishing that as the minimum income.
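
The arithmetic of the NIT is easy to restate in code. The sketch below simply reproduces the worked example above, with the $5,000 threshold and 50 percent rate as adjustable parameters rather than a policy proposal:

```python
def nit_credit(earned_income, threshold=5000, negative_tax_rate=0.5):
    """Refundable credit under a Negative Income Tax:
    a percentage of the shortfall below the threshold."""
    shortfall = max(threshold - earned_income, 0)
    return negative_tax_rate * shortfall

for income in (3000, 0):
    credit = nit_credit(income)
    print(f"earned ${income:,}: credit ${credit:,.0f}, "
          f"total income ${income + credit:,.0f}")

# earned $3,000: credit $1,000, total income $4,000
# earned $0: credit $2,500, total income $2,500 (the effective income floor)
```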

In 1967, President Lyndon Johnson established a National Commission on Guaranteed Incomes, and after two years of hearings and studies the body’s business leaders, organized labour representatives, and other prominent Americans were unanimous in their support for a basic income. In 1969, shortly after over 1,200 economists had signed a petition in favour of the NIT, and with rioting growing every summer in the poor, black neighbourhoods, President Richard Nixon proposed a version of the NIT to replace the existing welfare program. Limited to families with children and contingent on the acceptance of work or training, it established a minimum income of US$1,600 a year for a family of four (about US$8,000 in today’s dollars). For the next three years, the bill was fought over by politicians, with liberals demanding that none of the poor receive lower benefits under the NIT than they currently got, and conservatives insisting on more onerous work requirements. Several different versions of the bill passed votes in Congress, but none successfully ran the gauntlet of the Senate. In the end, the Nixon administration abandoned the plan after it became clear that the urban rioting of the mid-to-late ’60s had miraculously ceased.

In Canada, we seem to toy with the basic income idea every 15 years or so. In 1971, the Special Senate Committee on Poverty produced a report, known as the Croll Report, which recommended Canada replace its existing welfare programs with a single basic income, set at 70 percent of poverty levels (about $16,000 in today’s dollars for a family of four) and limited to families or those over 40 years old. In 1985, the Royal Commission on the Economic Union and Development Prospects for Canada – the MacDonald Commission – proposed a below-subsistence basic income (about $10,000 in today’s dollars for a family of four) with work conditions, and limited to families or those over 35 years. While the Croll Report had been focused on fighting poverty, the MacDonald Commission was more interested in scrapping social programs that interfered with the free market, and replacing them with an efficient social safety net that would catch those who might suffer from the liberalization agenda. The Commission’s other major recommendation, free trade with the United States, was, of course, adopted by the Mulroney government, but basic income was ignored.

Shortly after being elected to his third straight majority in 2000, Jean Chretien started casting around for a legacy. The idea that he was considering a guaranteed annual income was leaked to the media, and a flurry of editorials followed. In the National Post, McGill economics professor William Watson complained that “not even the wacko lefty party dared propose such an ambitious spending plan,” while an Ottawa Citizen headline predicted a “Guaranteed Annual Deficit.” The Prime Minister quickly pleaded ignorance, saying, “I don’t know where this idea came from. I never said a word on this.” While that may be questionable, it is true that he never said a word about it again.       

Many public figures, however, have been less shy about voicing their support for basic income. They include nine Nobel laureates in economics, from James Buchanan and F.A. Hayek to James Tobin and James Meade (who was a lifelong member of the Basic Income European Network), as well as other well-known economists like Samuel Brittan, who wrote a book on the subject, and J.K. Galbraith, who in 1999 called it one of two pieces of “unfinished business” for the millennium. Other influential spokespersons include Bertrand Russell, John Ralston Saul, Desmond Tutu, and Martin Luther King. Political parties that back it are often of the Green or left-liberal persuasion, but not always – in 1993 the Reform Party of Canada, taking a page from their Social Credit predecessors, put basic income on their election platform.

Is BIG better? 
So far, all of these proposals, from Nixon’s to the MacDonald Commission’s, have not been true basic incomes, because strings have been attached. You must be a parent, or over a certain age, or under a certain income, or accept work requirements. Retired Canadian Senator Lois M. Wilson, in her foreword to Laval University professor François Blais’ 2002 book, Ending Poverty: A Basic Income for All Canadians, gives a definition that gets to the heart of the basic income vision: “an unconditional income that the government awards to every citizen.” The ideal basic income is unconditional, and therefore universal. While it may seem odd at first to give money to rich and poor alike, no questions asked, in Canada and much of the developed world we have generally accepted the universality of health care, roads, libraries, schools and many other services offered free of charge to anyone who wants them – all in the interest of promoting equality.

But there are other reasons why basic income’s advocates tend to favour universality. Support programs targeted only at the poor often result in a “poverty trap” – the work disincentive created when the government “claws back” 75 cents or more in benefits for every dollar of earned income. Targeted programs also stigmatize recipients, create an “us versus them” mentality, are intrusive into people’s personal lives, and require large and expensive bureaucracies to administer. The money saved from the elimination of the middlemen could help fund a basic income.
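
A few lines of code make the poverty trap concrete. The 75-cent clawback comes from the paragraph above; ordinary income tax is ignored, and the $100 figure is arbitrary:

```python
def kept_from_extra_earnings(extra_earnings, clawback_rate):
    """How much of additional earnings a recipient actually keeps once
    benefits are withdrawn at the clawback rate (income tax ignored)."""
    return extra_earnings * (1 - clawback_rate)

extra = 100  # an extra $100 of earned income

# Targeted program: 75 cents of benefits clawed back per dollar earned.
print(kept_from_extra_earnings(extra, 0.75))   # 25.0 -- a 75% effective tax

# Universal basic income: the benefit is not withdrawn as earnings rise.
print(kept_from_extra_earnings(extra, 0.0))    # 100.0
```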

But perhaps the strongest argument for a universal basic income comes not from a claim of utility, but one of rights. This concept is legally enshrined in Article 25 of the UN’s Universal Declaration of Human Rights, which Canada signed in 1948: “Everyone has a right to a standard of living adequate for the health and well-being of himself and of his family…” If this right cannot be met with a job, the thinking goes, it must be met with some form of basic income.

This right is based on what one needs to survive, but there is another approach to rights, based on Thomas Paine’s notion of “natural inheritance”. In his essay, “Agrarian Justice”, he argues that the value of agricultural produce comes partly from the labour of the farmer, but also partly from the land itself. While the farmer is entitled to the value he added to the land by growing crops, he should share the base value of the land, which is the natural birthright of every person.

Paine, writing at a time when farming was the main source of wealth, was only concerned with land. But today, we can apply his principle to many forms of natural and social capital provided by the Earth and past generations. This “unearned wealth” includes the air, water, land, sunlight, the ecosystem, the ozone layer, mineral resources, and the body of knowledge, infrastructure and institutions built up by civilizations over the centuries. According to many economists, this commonwealth is worth more than the total of private wealth. Whenever it is taken over by private interests – justified by the greater productivity this usually produces – its previous owners, the people, should be compensated. It’s at this point that the argument for a right to a share of the commonwealth presents a solution to the problem of how to fund a basic income. 

In an Atlantic Monthly cover story, “A Politics for Generation X”, New America Foundation CEO Ted Halstead writes that “…America could raise trillions of dollars in new public revenues by charging fair market value for the use of common assets – the oil and coal in the ground, the trees in our national forests, the airwaves and the electro-magnetic spectrum – and the rights to pollute our air…and [return] the proceeds directly to each American citizen…” What he, and others, advocate is not common ownership, but common benefit from our natural and social inheritance. It’s like the Alaska Permanent Fund, only writ large.

This idea of switching taxes from “goods” like income or trade to “bads” like resource exploitation or pollution has been favoured by many environmentalists for years, as a way of incorporating the true cost of the ecosystem into the price of business. So, while raising money for a basic income by taxing limited resources, we would also be moving toward a more sustainable economy without over-regulating. But natural resources aren’t the only scarce common asset. Britain, for example, recently auctioned off licenses for use of the radio spectrum for the third generation of mobile phones over the next 20 years, raising £22.5 billion for the public purse.

This approach to funding circumvents a typical criticism of basic income (and redistribution in general) – that it gives people “something for nothing”. The conservative conviction that TNSTAAFL (There’s No Such Thing As A Free Lunch) ignores the fact that we all get something for nothing – in the form of gifts from nature and society – every day. Senator Suplicy points out that we recognize the right of property owners to receive an income from the rent, interest, or profits of their property, without necessarily doing any work. If the gifts of nature and society are rightfully the property of all, then why shouldn’t everyone be entitled to an income from them without working? Sharing these gifts equally wouldn’t be redistribution so much as what some have called “predistribution”.   

While these ideas are the intellectual progeny of Alaska’s precedent-setting Fund, there is a more traditional approach to financing a basic income: income taxes. It’s at this point that Hammond and some of his admirers part company with people like Waterloo professor Sally Lerner, co-author (with C.M.A. Clark and W.R. Needham) of the 1999 book, Basic Income: Economic Security for All Canadians. The book gives a rough sketch of how a yearly basic income of $7,000 for seniors, $5,000 for adults, $3,000 for children, and an additional $5,000 per household could be funded by a 40 percent flat tax on all earned income. Under this scheme, people in all categories who earned less than $30,000 a year would see their net incomes rise compared to the current tax and benefits system. While some who earned over $30,000 would see their net incomes decline, even at the $100,000 income level seniors and families of four with one wage earner would still benefit.
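
A minimal sketch of that formula, using only the per-person amounts and the 40 percent flat tax quoted above; the comparison with the current tax-and-benefit system is left out, since it depends on details the book supplies and this summary does not:

```python
def net_income(earned_income, seniors=0, adults=0, children=0):
    """Net household income under the Lerner/Clark/Needham sketch:
    $7,000 per senior, $5,000 per adult, $3,000 per child, plus $5,000
    per household, funded by a 40 percent flat tax on earned income."""
    basic_income = 7000 * seniors + 5000 * adults + 3000 * children + 5000
    flat_tax = 0.40 * earned_income
    return earned_income + basic_income - flat_tax

# A family of four (two adults, two children) with one wage earner:
print(f"{net_income(30000, adults=2, children=2):,.0f}")    # 39,000
print(f"{net_income(100000, adults=2, children=2):,.0f}")   # 81,000
```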

Lerner and her co-authors admit this funding formula has its drawbacks. For instance, a basic income would likely cause some people to reduce their paid employment, thus undermining the very income tax revenues that fund it.

Yet Karl Widerquist, the economist who has organized the USBIG conferences since their inception in 2002, isn’t too concerned about which way to fund it – resource rents or income taxes – “because it ends up coming out of the same pockets either way. The people who own most of the natural resources are also the people who have the highest income.”

Whichever way it’s funded, most BIGists would agree with Canadian-born Harvard economics professor J.K. Galbraith when he opined, “Everybody should be guaranteed a decent basic income. A rich country…can well afford to keep everyone out of poverty.”

The Ends Game
We may have the rights and the means to a basic income, but would we want the ends? A common complaint is that the security of a basic income would create a work disincentive – people would choose to work less and, as a result, the economy would suffer. Back in basic income’s heyday, the U.S. and Canadian governments set up ambitious social science experiments (surreally, the U.S. tests were run by Donald Rumsfeld and Dick Cheney) to measure the extent of this effect, giving thousands of families a NIT for several years. Unsurprisingly, they had no difficulty finding experimental subjects. They found that American husbands worked an average of 6 percent fewer hours per year, and their wives 19 percent fewer; Canadian wives reduced their paid work by 3 percent, while their husbands worked only 1 percent less. These results, however, are likely understated because recipients knew the benefits were only temporary.

But in light of the current climate of jobless growth, automation, outsourcing, and offshoring, one has to wonder if a disincentive to work might not be such a bad thing. “The absolute number of hours that people need to work is probably coming down in order to meet society’s needs,” says Informetrica’s Mike McCracken, “and there’s every reason to think that that will continue in spades.” The percentage of the U.S. population working on farms has declined from 40 percent a century ago to less than 2 percent today. While the increased wealth resulting from this huge leap in productivity has created many new jobs in the manufacturing and service sectors, increasingly, the work we do is not geared towards survival. A study by the International Metalworkers in Geneva predicts that within the next 30 years, 2 to 3 percent of the world’s population will be able to produce everything we need. “Canadians,” write Lerner and her co-authors, “should now begin to find ways to decouple basic economic security from the traditional jobs that may not be there for a growing number of people.”

“With a guaranteed income,” writes professor Michael L. Murray in “…And Economic Justice for All”: Welfare Reform in the 21st Century, we could “greet the prospect of higher levels of unemployment not with fear and loathing but with pleasure.” Perhaps the idea of full employment is now as anachronistic as the idea of everyone having their own little farm. Perhaps we are beginning to enter the state of post-scarcity John Maynard Keynes foretold for his grandchildren, whose major problem, he thought, would not be the adequacy of resources, but how to best distribute the surfeit of wealth made available by rising productivity. It’s a big jump for a society whose oldest members lived through the Great Depression to now accept that survival needn’t be a struggle anymore. But Murray believes “…we no longer need the drastic negative enforcement of poverty. The United States is sufficiently wealthy…that we can survive quite nicely with the incentive to do better. We no longer need the incentive to survive.”

Such a perceptual leap would represent a profound revolution. That’s why Philippe Van Parijs, currently a visiting professor at Harvard and secretary of the Basic Income Earth Network (BIEN), sees basic income as “…a deep reform, which belongs in the same category as the abolishment of slavery or the adoption of universal suffrage.” Over the past few centuries we have seen the gradual emancipation of the individual from church, state, and fellow human, but one major coercive element remains: the power of the employer over the employee.

“We talk about being a free country, a free people, which is hugely important to us,” says Widerquist, yet people are often forced to take any job they can get. A basic income would give them the power of refusal that is essential in any economic transaction – the ability to walk away if the job conditions or pay weren’t satisfactory – thus correcting the typical power imbalance between employer and employee. “We do not need to build our society on the unfreedom of poor people,” says Widerquist. “We can have the people at the bottom not be destitute and still have a functioning economy.”

While there would undoubtedly be some who took their cheque but gave nothing back to society, we should, according to Van Parijs, be “even more comfortable about everyone being entitled to an income, even the lazy, than about everyone being entitled to a vote, even the incompetent.” Besides, “even the lazy” would at least be spending their money, providing jobs for those who wanted them.

While a basic income would give people freedom from unsatisfying or unrewarding work, it would also give them the freedom to spend their most precious resource – time – how they wished. How many people, one wonders, would go back to school, start a small business, spend more time with their kids, volunteer in their community, or pursue the artistic impulses they never had time to nurture? How much better would democracy be served if people had more time to stay informed and participate in the system between elections? In 1997, the OECD Forum on the Future acknowledged the value of “a universal citizen’s income intended to put greater value on the broad range of human activities that extend well beyond paid work”. At last, the hard work of raising a family or caring for the elderly – long unrewarded by the market economy – would be implicitly valued by a basic income. 

Far from hurting the economy, freedom from the punch clock could actually release a flood of human creativity and ingenuity that would reverberate through it for years. “The work which improves the condition of mankind,” wrote Henry George, the 19th century author of Progress and Poverty (which the Wall Street Journal called the greatest economics treatise ever written by an American), “which extends knowledge and increases power and enriches literature and elevates thought, is not done to secure a living…In a state of society where want is abolished, work of this sort could be enormously increased.”

Pat Kane, author of The Play Ethic, gave an example of this in a recent interview with The Herald in Scotland: “J.K. Rowling is a classic example of the play ethic. There she is, a single mother, no partner, surviving on benefit and a six-grand grant from the SAC. And in that moment when she determines her own space and time, she creates the biggest cultural franchise in the world.”

Indeed, basic income has the potential to be a great boon to capitalism in general. The secure foundation of a basic income, acting in a similar way to a corporation’s limited liability, would probably encourage more entrepreneurial risk-taking and innovation, and allow workers to adapt to the more flexible hours requested by businesses. Moreover, while saving society from the costs of poverty and inequality (a B.C. government study estimates that one homeless person costs from $30,000 to $40,000 a year in shelters, courts, and other services), a basic income would accomplish a classic goal of the political right: put money back into people’s pockets. This would give more people greater spending power, spurring consumer demand.

But perhaps the greatest gift to capitalism that basic income has to offer is a solution to the age-old war between government intervention and free markets. Ever since the introduction of the welfare state in the 1930s, governments have sought to correct the perceived injustices of the market system by imposing an array of regulations and redistributing wealth. While these measures have relieved poverty with varying degrees of success (or, depending who you talk to, exacerbated it), they usually distort the efficient functioning of markets. The battle lines are clearly drawn, for example, around the minimum wage. Right-wingers complain that it prevents the price of labour from finding its natural level, while left-wingers point out that nothing in economic theory guarantees that the natural level will be enough to live on. So government steps in and fixes the minimum price for labour. If, however, instead of trying to meddle with the outcomes of economic activity, governments were to collect a portion of the value of essential inputs at their source (resource rents, for instance) and “predistribute” them sufficiently to eradicate poverty, the rest of the economic system could be run as a neo-liberal utopia without too much worry about inequality. Why have labour legislation when workers can bargain for better conditions themselves from the improved economic standpoint of a basic income?

It was no coincidence that the original welfare state sprang into existence in response to the economic hardship of the Great Depression. Likewise, writes Carleton University professor Manfred Bienfeld, “…the basic income movement experienced its recent strong revival as a direct reaction to the destruction wrought in people’s lives by the neo-liberal revolution.” The difference is that, this time, the reaction doesn’t have to be in opposition to the forces that compelled it, but instead can actually support them. 

————————————————–
If basic income has so much to offer both the Hammonds and the Suplicys of this world, it raises the question: why don’t we have one already? While both sides of the political spectrum are attracted to the idea, says Widerquist, both “have some reason to be angry about it.” Canadian Auto Workers economist Jim Stanford cautions against allowing “the BI movement’s slogans about providing basic coverage to every Canadian to be used to bring about a ratcheting-down of hard-won and already-threatened social benefits,” and instead calls for a “living wage” through collective bargaining, minimum wages, and other forms of labour market regulation. While his fears are warranted (the MacDonald Commission proposed exactly this sort of “ratcheting-down” in exchange for a meager basic income), to Lerner his solution “seems to fly in the face of the new world of work created by globalization and technological change.” She believes “we can demand living wages, but, to the extent that employers can find low-wage workers elsewhere, and replace demanding workers with smart machines, there is little leverage to get those wages.”

The left has been fighting a desperate rear-guard action for the past 30 years, so it’s not surprising that many on that side can think of little else but to hold on to the disintegrating remnants of the welfare state. But the right, though ascendant, is also clinging to an outdated morality of work. Martin Luther King was ahead of his time when, in appealing for a basic income, he wrote, “We are wasting and degrading human life by clinging to archaic thinking.” It seems that King was ahead of our time, too. The left will need to move beyond its efforts to control the market, and the right abandon its fictional American Dream, before a consensus around basic income can be realized.

“…it will require a suspension of deeply held beliefs for many Canadians to think calmly about a BI,” write Lerner and her co-authors. To even speak of a basic income in an age of curtailed public expenditures, thinks McGill University professor and BI-supporter Myron J. Frankman, “seems like dreaming in Technicolor.” Yet change often comes faster than we imagine. “No one reading the press or the journals of 1929,” writes Bienfeld, “could have imagined the arrival of the New Deal in 1933 in the United States.” In the end, opposition to basic income seems to stem more from a paucity of imagination than of means. In the referendum that gave birth to the Alaska Permanent Fund, about a third voted against it; if the vote were held again today, almost no one would oppose it. Whether we decide a basic income is the right thing to do, the best thing to do, or the only thing to do, it seems likely that the freewheeling imagination that inspires Jay Hammond and Eduardo Suplicy will eventually work its way into the rest of us.

Copyright Sean Butler 2005

Published in Dissent Magazine, Summer 2005

Posted by: seanmichaelbutler | March 4, 2010

LETTER TO MY COLLECTION AGENCY

Dear R. Max Gold, LL.B.,

Thank you for your recent letter informing me of my outstanding debt to NCO Financial Services Inc. (formerly Financial Collection Agencies), authorized agents for Citibank MasterCard. I really do appreciate the diligence you have shown in reminding me of the money I owe. Without the industriousness of yourself and your colleagues, I should surely have forgotten all about my obligation by now.

Your conscientiousness stands in stark contrast to the state of neglect in which I have left our correspondence. By way of reparations, let me humbly offer an explanation for the tardiness of my payments. Simply put: I never borrowed any money.

Being a lawyer, Mr. Gold, you may not be as familiar as your client no doubt is with how money is created in our society. To best explain this process, permit me to back up a few hundred years.

The practice we now call banking started in several different places and times. But for illustrative purposes let’s take the case of the English goldsmiths of the 16th and 17th centuries. In those days gold coins were one of the main forms of money. When people accumulated many coins they would often deposit them with the local goldsmith, who tended to have a secure vault. In exchange for their coins the goldsmith would issue them a paper receipt showing how much had been deposited. People soon found that, instead of withdrawing their gold to make a purchase, it made a lot more sense to simply sign over an equivalent amount of these receipts to the seller. Once the goldsmiths caught on they facilitated this by not making their receipts out to any one person, but simply to the bearer. Whoever held the receipt was entitled to the gold. The Bank of England later refined this further by issuing notes in standard denominations, each worth a specific value in gold. And so paper money was born.

At first, depositors wanted their own particular coins back when they presented their receipts. This was because coins were often tampered with by shaving or “sweating” small portions of the gold off, or recasting them with less precious metals. But when Sir Isaac Newton was put in charge of the Royal Mint, he developed a coin that rang at a certain pitch when struck. Any adulterations to the coin would result in it being out of tune – it quite literally would not ring true. With the value of all coins now assured, goldsmiths could lump all their deposits together into a common sum.

Seeing this vast pool of gold all in one place got some of the more enterprising goldsmiths thinking: most of the time, their vaults were full of gold. Sure, in theory all their depositors could simultaneously demand their gold back and empty out their vaults. But barring a wide-scale panic, it wasn’t too likely.

Sensing a business opportunity, the goldsmiths began loaning out, at interest, some of this surplus gold sitting idle in their vaults. Of course, instead of loaning the actual gold, it was much more convenient to simply loan out more of those handy paper receipts.

In so doing, the goldsmiths crossed a crucial line – they issued more receipts for gold than they had gold in their vaults. With each new loan they made, they increased the amount of these paper receipts circulating in the economy, thus creating more money.

No one went without so that others could be loaned this money; no depositor’s receipts were taken away and given to a debtor. Completely new receipts were printed, in addition to the old ones, with both old and new promising to pay the same gold to the bearer on demand. It was, in essence, a vast game of musical chairs. The only thing holding the goldsmiths back from issuing unlimited amounts of paper money was the fear that there might be a run on their bank, and they’d have to redeem all those receipts at the same time and go bankrupt.     

Except for a hundred-odd-year interval during the 19th and early 20th centuries when the world toed the British Empire’s line and adhered to the gold standard – which enforced a one-to-one ratio of gold to money – we’ve pretty much stuck to this same system of money creation through debt. The old goldsmith’s fear of a run on his bank has been replaced with government-regulated “fractional reserves”, but these only require banks to possess a small fraction in assets of the total amount of money they loan out.
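
A toy simulation shows how lending against a fractional reserve multiplies the money supply. The 10 percent reserve ratio and the $1,000 opening deposit are arbitrary, chosen only to make the mechanism visible:

```python
# Toy model of money creation under fractional reserves: every loan is
# spent and redeposited, and the bank lends out everything it is not
# required to hold back. Figures are arbitrary, for illustration only.

reserve_ratio = 0.10        # fraction of each deposit the bank must keep
deposit = 1000.0            # the initial deposit of "real" money
total_deposits = 0.0

while deposit > 0.01:       # stop once the amounts become negligible
    total_deposits += deposit
    deposit = deposit * (1 - reserve_ratio)   # lent out, spent, redeposited

print("Original deposit: $1,000")
print(f"Total deposits created: ${total_deposits:,.0f}")   # approaches $10,000
```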

So just like back in Elizabethan England, when I “borrow” money from a bank these days, that bank conjures new money out of thin air and makes the numbers appear in my account as electronic impulses.

Since this money was not previously owned by anyone, including the bank, it cannot be considered as “borrowed”. It simply appeared, like some immaculate conception, in my bank account. If this money has a creator it is surely I, and not the bank, since it was by my will that this money came into existence. The bank was merely my instrument.

I’m sure you’ll now agree with me that I should be under no obligation to hand over my money to a financial institution in repayment of a loan that – when viewed properly – never even existed. Rather, in light of these revelations, may I respectfully suggest that you should in fact thank me for my noble display of generosity in assuming the awful burden of money creation through debt and spreading the resultant wealth far and wide.     

Yours,
Sean Butler

Copyright Sean Butler 2004

Posted by: seanmichaelbutler | March 4, 2010

THE GROSS DOMESTIC HOAX

On a sunny Saturday four summers ago the gaiety of a festive afternoon was shattered by the terrible sound of impacting metal and breaking glass. A minivan heading toward Ottawa for Canada Day had lost control rounding a bend in the highway and smashed into a Honda Civic going in the opposite direction. At the last moment, the driver of the Civic, a woman in her 30’s, swerved to take the full force of the collision on her side of the car, so saving the lives of the two young daughters riding with her.

The mother was killed instantly. The minivan’s driver, a 57-year-old man with a family of his own, was admitted to hospital in critical condition, while its other five occupants were treated for minor injuries and released, as were the two now motherless girls from the Honda. The ripples of grief and trauma, as always with horrific accidents, were immeasurable and in a way eternal – certainly the families of the victims and the survivors would never forget the day, and never be able to calculate the weight of pain that it had caused.

But something else was triggered by that afternoon in July that was eminently calculable and measurable.  Doctors, nurses, and paramedics sprang into action to treat the injured; police and fire departments rushed to the scene; tow trucks cleared the wreckage; insurers worked on claims; mechanics repaired property damage; auto plant workers churned out replacement vehicles; public works employees improved highways for better safety; the media reported the story; driving instructors read the story to their students, who booked more lessons; employers of the dead hired and trained new workers, employers of the injured paid sick leave and hired replacements;  lawyers and notaries were engaged; judges and other court personnel heard the case when charges were laid; and, finally, the funeral industry gained another client. In total, those few seconds of horror, and their lingering aftermath, contributed almost $1 million to the economy – a sum representing years of work for the adults involved.

And the GDP – the country’s Gross Domestic Product – rose.

   *     

In a world divided by religion, politics, and culture, one thing unites us all: the Gross Domestic Product. From Washington to Pyongyang, New Delhi to Berlin, Riyadh to Rome, government and business the world over worship it with absolute devotion. Legions of acolytes track its progress and prophesy its future course, believing wholeheartedly that GDP growth is the road to salvation, and GDP decline the path to damnation.

In fact, though it is trumpeted almost daily by the media and government, many – possibly most — people are not quite sure what GDP actually is. Ask a random sample of people to define it, and you’ll get answers ranging from professions of ignorance: “I actually have no idea how they measure these things,” to brave but misguided stabs: “The product that Canada produces that brings in the most income,” to pretty close: “The tally of all financial transactions that take place in a, I don’t know, culture?”

In reality the GDP can be described in three equally accurate ways: (1) the total value of all the goods and services produced in a country in one year; (2) the total income of a country in a particular year; or (3) the total expenditures on goods and services of a country in a particular year. These three different ways of looking at the figure yield nearly equal results. Basically, GDP can be thought of as an indicator of the amount of economic activity going on in a country. We call an increase in activity “economic growth” and a protracted slowdown in that increase a “recession”. GDP, which Statistics Canada calculates once a month, is the pulse of the economy.
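
A toy two-firm economy shows why the three descriptions arrive at the same number; the figures below are invented purely for illustration:

```python
# Toy economy: a farmer sells $100 of wheat to a baker; the baker turns it
# into $250 of bread sold to consumers. Numbers invented for illustration.

# Production approach: sum of value added at each stage.
farmer_value_added = 100             # wheat, grown with no purchased inputs
baker_value_added = 250 - 100        # bread sales minus the wheat bought
gdp_production = farmer_value_added + baker_value_added

# Income approach: sum of the wages and profits paid out.
wages = 60 + 90                      # farm wages + bakery wages
profits = (100 - 60) + (250 - 100 - 90)
gdp_income = wages + profits

# Expenditure approach: spending on final goods (bread), not intermediate wheat.
gdp_expenditure = 250

print(gdp_production, gdp_income, gdp_expenditure)   # 250 250 250
```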

What it isn’t is a measure of societal well-being.  This is possibly the most insidious part of the unintentional GDP myth:  the belief that economic growth – as defined by the GDP — equals an improved quality of life.  While GDP is good at what it does – adding up economic activity – equating mere activity with well-being is worse than questionable. “There is probably no more pervasive and dangerous myth in our society than the materialist assumption that ‘more is better’,” writes Canadian political scientist Ron Colman. “The GDP is a quantitative measure of size. How can that, by definition, measure quality of life?”  John Kenneth Galbraith recently highlighted the shortcomings of GDP as one of the most important issues of “unfinished business” facing economics. And Roy Romanow makes his opinion of GDP clear when he calls it the “Gross Distortion of Prosperity”.

It may be more perverse than that. Each day in this country there are an average of eight car crash fatalities, 623 injuries, and 1700 collisions. Taken yearly, the car crash "industry" runs into the billions. A conservative estimate made by the SAAQ – Quebec's public auto insurer – puts the yearly economic "benefit" of car accidents at nearly $3 billion for Quebec alone, or 1.4 percent of the province's GDP.

Economic growth, it turns out, can be very good for countries, but very bad for people.  And as much as the GDP can be called the Gross Domestic Product, it could equally be labelled the “Gross Disaster Product”.

   *

Criticisms of the limitations of GDP are nothing new; they go all the way back to its conception, in the Second World War. Simon Kuznets, the Russian-born, American-naturalized economist and Nobel laureate who was the principal architect of the GDP, became one of its most vocal critics. He cautioned that "the welfare of a nation can scarcely be inferred from a measurement of national income as defined by the GDP," and spent much of his life pointing out the limitations of the framework he had helped to create. How, then, did we manage to get to where we are now? How did we conspire to disregard Simon Kuznets to the point where we tacitly accept as our unassailable gauge of "progress" a number that is compromised at best, and meaningless at worst? And why does that number, when it comes to those things we consider elemental parameters of happiness, function as a kind of unwitting con-game?

GDP actually diverges from true well-being in two major ways. First, it counts as "progress" things that any common-sense definition of the word would exclude: pollution, divorce, problem gambling, crime, war, disease, loss of free time – all these cause an increase in spending, and a corresponding rise in GDP. "The nation's central measure of well-being," write Clifford Cobb, Ted Halstead and Jonathan Rowe (of Redefining Progress, a San Francisco-based economic think-tank), "works like a calculating machine that adds but cannot subtract." As the saying has it: "Economists have to learn to subtract."
The second main divergence GDP makes from genuine well-being is in its exclusion of every contribution to society made outside the market. There's a lot more that can't be bought besides love: strong families, robust communities, healthy ecosystems, fit bodies, wise hearts. The GDP, Robert Kennedy once observed, "measures everything, in short, except that which makes life worthwhile."
Sometimes referred to as the "love economy", volunteering and household work contribute an estimated $325 billion a year in services to Canadians – nearly one-third of our GDP – yet are utterly ignored by it. In fact the GDP is blind to the love economy whether it goes up or down. From 1997 to 2000, says Colman, 12.3 percent fewer Canadians volunteered, "and it's not a blip on the radar screen of any policy-maker because no money changes hands."
Similarly, the contributions of the natural environment go unnoticed and unappreciated. For instance, forests are typically valued only for their GDP-boosting timber. But if we consider their value in protecting watersheds and biodiversity; guarding against erosion and flooding; regulating climate and sequestering carbon; and providing recreation and spiritual enjoyment, they may be worth more standing than cut. A team of economists and ecologists estimated in a 1997 Nature magazine article that the value of nature’s services was $33 trillion a year, or nearly double the gross world product of $18 trillion.
 “Much of what is recorded as economic growth,” writes Colman, “is merely a shift in work from the unpaid household economy to the paid economy.” Ditto for the ecological economy. Parenting becomes childcare, shade trees become air conditioners, road hockey becomes PlayStation, clean water becomes a Brita filter, community cohesion becomes locks on the door, and free time becomes a second job to pay for all these things that were once provided free. Every time, the GDP goes up – but does our well-being? “Growth,” write Cobb, Halstead, and Rowe, “can be social decline by another name.”
The rise in American unemployment currently baffling U.S. observers (in a "surging" American economy) is just another example of the pernicious disconnect between our economic indicators and reality. Little wonder that through the myopic lens of GDP, building highways through neighbourhoods, clear-cutting old growth forest for toilet paper, and working people to an early grave all seem to make good economic sense. Just count the fallout as a benefit, ignore the human, social and ecological assets you're bulldozing, and watch the economy grow.
*
On the evening of March 23, 1989, an aircraft carrier-sized supertanker named the Exxon Valdez slipped its last mooring at Alaska's Alyeska Pipeline Terminal and manoeuvred through the Valdez Narrows, out into the darkness of Prince William Sound. The ship increased speed for the run down the coast to Long Beach, California, laden with over a million barrels of Alaskan North Slope crude oil. As the Valdez accelerated southward, the radar operator reported small icebergs – sloughed off the Columbia Glacier – blocking the outbound shipping lane ahead. The captain of the tanker, Joseph Hazelwood, was faced with the choice of slowing down to proceed through the bergs safely, or diverting into the inbound shipping lane and continuing at speed. He chose the inbound lane. Thirty-four minutes later the Valdez, with the third mate at the helm, ran aground on Bligh Reef, in the process rupturing 8 of its 11 storage compartments.

As the equivalent of 125 Olympic-sized swimming pools of oil seeped into the surrounding ocean, the world quickly learned the name of this supertanker, and soon came to associate the Exxon Valdez with the worst environmental disaster in the United States since Three Mile Island. The death of marine wildlife from the spilled oil could only be guessed at: 250 000 seabirds, 2 800 sea otters, 300 harbour seals, 250 bald eagles, up to 22 orcas, and billions of salmon and herring eggs, according to the best estimates. A disaster of this magnitude – and publicity – begat a colossal response; at the height of the clean-up effort, approximately 10 000 workers, 1 000 boats, and 100 aircraft scrambled to contain the spill, rescue wildlife, and scour beaches, costing Exxon $2.1 billion. But that was just the proverbial tip of the iceberg. Spending on legal and court costs, media reporting, and repair to the tanker pumped billions of extra dollars into the Alaskan and American economies, a payday that has yet to run dry, judging from the recent $6.75 billion (including interest) awarded to 32 000 Alaskan fishermen and residents who brought a suit against Exxon. In the end, the total economic windfall of this terrible disaster dwarfs the $22 million that the oil would have been worth had it been delivered safely to port. The GDP benefitted accordingly, and hugely.

Oil spills aren't the only disasters that are a growth industry. After the wildfires in California last summer, the Los Angeles Times predicted the tragedy would "pump some juice into the economy" as homeowners replaced their losses. The Oklahoma City bombing prompted The Wall Street Journal to forecast a rise in the share prices of firms in the security industry. And the ice storm that hit Ontario and Quebec, while initially causing economic losses of $1.8 billion, ended up generating 16 000 jobs and a net gain to GDP of $1.5 billion.

All these disasters are, in effect, little wars. So what happens when we engage in the real thing? Warfare contributes to GDP twice – once when we make the bombs and hire the people to drop them, and again when we dole out contracts to rebuild what we’ve destroyed.

History doesn't have to be destiny, though, not even when it comes to wars or math. The obvious question: can't we come up with a better number?

    *

In fact, someone is doing just that. Since 1997, GPI Atlantic, a non-profit research group based in Halifax, has been developing an index of sustainable development and well-being for Nova Scotia. So far, it has completed indicators for 16 of the 22 areas it set out to measure. In another year, says the group’s founder, Ron Colman, it hopes to have all 22 indicators finished, and the province’s first Genuine Progress Index – or GPI – up and running.

Yet the index's work-in-progress status hasn't stopped the group from grabbing headlines in Halifax papers over the past few years with reports that put a dollar value on things not usually measured monetarily: costs such as crime, clear-cutting and smoking, and benefits like volunteer work, forests and childcare.

The GPI, explains Colman, a former Assistant Professor of Political Science at St. Mary’s University and researcher and speechwriter at the United Nations, “uses monetary values as a communications strategy to demonstrate the economic value of non-market goods and services.” It does this by calculating what it would cost to replace those non-market goods and services in the market economy, or, conversely, how non-market costs adversely affect the economic bottom-line. 

For example, Nova Scotians volunteer more than anywhere else in Canada, to the tune of 140 million formal and informal hours a year. "People might say, 'That's very nice,'" says Colman. "But then if we say that's the equivalent of $1.9 billion worth of services – 10% of our GDP and larger than all government services combined – then the politicians take notice."
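
A rough sketch of that replacement-cost arithmetic follows; the hourly wage and the provincial GDP figure are my own back-of-the-envelope assumptions, chosen only to be consistent with the numbers Colman cites, not GPI Atlantic's published inputs:

```python
# Replacement-cost valuation of volunteer work, GPI-style: ask what it
# would cost to buy the same hours of service in the market economy.

volunteer_hours = 140_000_000        # formal + informal hours a year (from the text)
replacement_wage = 13.60             # assumed $/hour for equivalent market services
nova_scotia_gdp = 19_000_000_000     # assumed provincial GDP, sized to match the "10%" claim

volunteer_value = volunteer_hours * replacement_wage
share_of_gdp = volunteer_value / nova_scotia_gdp

print(f"Volunteer work: about ${volunteer_value / 1e9:.1f} billion a year, "
      f"or {share_of_gdp:.0%} of GDP - and none of it counted by the GDP.")
```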

“If you don’t measure it,” he points out, “the real message you’re conveying is it has no value. If you measure it properly, it has value, it gets attention, it changes behaviour. Indicators are tremendously powerful that way.”

In a way, Colman's strategy is: if you can't beat 'em, join 'em. Policy-makers are so used to looking at things in monetary terms that he, in effect, shows them the money – for everything of value. He speaks their language. He also speaks about values. His 22 sets of indicators – ranging from the value of leisure time, sustainable transportation, and soils and agriculture to income distribution and health – reflect "consensus [Canadian] values. In other words, if there was an indicator that was acceptable to the Right and not the Left, or vice versa, then these things would never be accepted by Canadians."

It makes sense, as does the GPI's extension of the concept of capital – usually thought of as the total assets of a company – to humans and nature. It's a basic accounting principle that you don't count the depletion of capital as income, yet that's exactly what the GDP does with human and natural capital. "If Coca-Cola operated its accounts the way we operate our System of National Accounts," says Mark Anielski, "they'd be bankrupt."

"I would say that we are at a turning point in human history where the window of opportunity is quite small," says Ron Colman. "When I was my daughter's age, we didn't dream that a whole fish stock could collapse… we don't have much time or luxury to fool around anymore, and until we actually measure these things, we'll only have words."
   *
How, then, did we learn to stop worrying about our most cherished values for community, family and the environment, and come to love the GDP?

One obvious explanation is self-justification. If people like to think of themselves as relatively well-off in the world, as fortunate, as happy, then what better strategy than to find something we do well and attach our gauge of happiness to it? In the western world, we're good at buying, selling, and accumulating. If we decide that these activities equal well-being, then it's tautological that we're the most successful – the most fortunate, the happiest – people on earth.

But what if, in fact, we – western capitalists – aren’t the most gratified people on earth?  What is the GDP then but a tool that has turned the tables on its creators?

The highly regarded World Values Survey recently asked people in over 65 countries how happy they felt. The results? According to their data, the happiest people on earth were Nigerians – 70% of the respondents in that country somehow looked past an abysmal per capita GDP of $840 and reported to the survey that they felt "very happy". Canada placed in the top fifteen, with 45% in good spirits. In the U.S., according to a similar study, the number of people who describe themselves as "very happy" has fallen from 35 to 32 percent in the past 40 years – despite a doubling of inflation-adjusted per capita income. In fact, although incomes have risen considerably across the board in the industrialized world, only in Denmark have people become more satisfied with life over the last three decades.

The King of Bhutan, perhaps sensing that money doesn’t necessarily lead to contentment, recently declared that his country is more interested in Gross National Happiness than Gross National Product.

    *

Mike Nickerson, founder of the Seventh Generation Initiative, an advocacy group predicated on the Native American tradition of considering the implications of current actions on the next seven generations, likes to compare the growth of a society to the growth of an organism.  In the early years, physical growth is important, but there comes a time when it naturally ends – from then on we call growth “obesity”. He believes that, “as a culture, we are in late adolescence.”

If so, then Albertans are the teenagers in the rec-room. At just over $40,000 per person, Alberta is the Canadian GDP all-star. Over the past 10 years, its economy has grown at an average rate of 4.2 percent – the highest in the country. Per capita investment, exports, and population are all growing faster than the Canadian average, while unemployment rates continue to undercut the rest of the country. All the standard economic indicators forecast smooth sailing, and the captain of the ship, eyes fixed firmly on the horizon, shouts a hearty full steam ahead.

But down in the bowels of that vessel, a group called the Pembina Institute for Appropriate Development has been poking around the engine room, trying to see if the machinery is really functioning as well as the gauges up top suggest. Working in parallel with GPI Atlantic, but in less detail, a team of Pembina researchers spent a year and a half assembling a GPI of 51 indicators for Alberta, and published their results in 2001.

Tracking data from 1961 to 1999, they found that, while per capita GDP in Alberta has risen at an average rate of 2.2 percent a year, the GPI has actually fallen by an average of 0.5 percent yearly, although it has been holding steady for the past 20 years (see Chart 1). The composite index for what they call "Economic Well-Being" fared slightly better than the overall GPI, but still fell far short of GDP growth, at an average gain of only 0.4 percent a year. Their "Personal-Societal Well-Being Index", on the other hand, seemed to have an inverse relationship with the Economic Index – while the Personal-Societal declined, the Economic increased, until both plateaued in the mid-1980s. Finally, the "Environmental" Index has been dropping an average of one percent a year – not surprising when you consider that Albertans' ecological footprint – the amount of productive land and resources needed to sustain one's lifestyle – has swelled 66 percent since 1961, and is now the fourth largest on the planet, after the United Arab Emirates, Singapore, and the United States. "Albertans," writes Canadian economist Mark Anielski, "are not living off the interest of their natural capital and are, therefore, not living sustainably." All this, of course, is going on while Alberta's GDP continues its oblivious ascent, at 2.2 percent a year.
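
A quick compounding of those two rates shows how far the gauges have drifted apart; starting both at an index of 100 in 1961 is simply my own normalization of the Pembina figures:

```python
# Compound the reported average rates over the study period, 1961-1999:
# per capita GDP up 2.2% a year, the GPI down 0.5% a year.

years = 1999 - 1961                       # 38 years of data

gdp_index = 100 * (1 + 0.022) ** years    # ends around 229
gpi_index = 100 * (1 - 0.005) ** years    # ends around 83

print(f"By 1999, GDP per capita index: {gdp_index:.0f}, GPI index: {gpi_index:.0f}")
```

By this reckoning the standard gauge more than doubles while the broader one loses nearly a fifth of its value – the same economy, read off two very different dials.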

The translation: not only does GDP fail to accurately measure those things that make a society livable, it fails just as completely to take into account the complexity of the interaction between those things. The challenge, says Mark Anielski, is to “think about the world as a system – a complex, integrated system – rather than dividing the world up into portfolios.” The universe, economic and otherwise, doesn’t work in the linear way we’d perhaps like it to. Instead, it’s more like an interconnected web, where cause becomes effect, and effect becomes cause, and a butterfly flapping its wings can trigger a hurricane on the other side of the globe.    

*

The attempt to look beyond money and measure the real well-being of people and ecosystems hasn't been restricted to Canada, of course. One of the best known international scales is the UN's Human Development Index, which has been around since 1990. Canada was in first place for much of the 1990s, but the most recent report now puts Norway first, with Canada in eighth place. But the HDI is extremely limited, looking at only four indicators: life expectancy, literacy, school enrolment, and per capita GDP. Lesser known UN measurements include the Human Poverty Index, which pits rich countries against each other in the areas of poverty, illiteracy, unemployment, and life expectancy (Sweden fares best in this context, the U.S. worst), and the Gender Empowerment Measure, which measures women's participation in politics and business (Botswana, Costa Rica, and Namibia rank higher than Greece, Italy, and Japan). The World Bank has also recognized the importance of a more complete accounting for true wealth, by measuring five kinds of capital – financial, physical, human, social, and natural.

Nevertheless, no state has yet incorporated anything like a GPI into its national accounts. Ron Colman, who lived in Australia and the U.S. before immigrating to Canada, thinks Canada is well positioned to be the first. “Statistics Canada is ranked as the best statistical agency in the world by The Economist magazine year after year,” he says, “so we have everything it takes to be a world leader in this field.”

In fact, the Liberal government has taken the first tentative steps in this direction. Back in 2000, when Paul Martin was still the Finance Minister, he gave $9 million to what was called the Environment and Sustainable Development Indicators Initiative, stating that "the current means of measuring progress are inadequate." The Initiative produced six new indicators, for water, air, greenhouse gases, forests, wetlands, and educational attainment, and recommended that these indicators be added to Canada's System of National Accounts. In this year's budget, Martin's new government announced $15 million in spending over the next two years to implement this recommendation – albeit with only three of the indicators: air, water, and greenhouse gases.

Of course, measurement is a means, not an end; the implications of a more holistic concept of wealth are many. For example, the GPI's "human capital" approach could change the national discourse around healthcare, by showing the potential savings of a shift from "sick care" – which currently boosts GDP with every new illness that needs treatment – to a more preventative approach to health care. By bringing the full costs to society and the environment into the equation, a GPI might also show us that the benefits of some activities are simply not worth the costs. In 1994, the Clinton administration suggested the baby step of subtracting resource depletion from the GDP. The coal industry was aghast. If the costs of air pollution were also subtracted, said Congressman Alan Mollohan of West Virginia, "somebody is going to say…that the coal industry isn't contributing anything to the country." Exactly. Without a GPI, in fact, Colman sees a real difficulty in implementing the Kyoto Protocol and similar agreements. "So long as we measure the burning of fossil fuels as a contributor to prosperity, nothing much is going to happen."

But perhaps most importantly, a GPI could finally make us question our unhesitating acceptance of economic growth, and instead ask ourselves, Growth towards what? To regard disaster and suffering and deprivation as positives is perverse, but that is, in effect, part of what the GDP does. A majority of Canadians today believe that fundamentals like their health, education, democratic freedoms, and the environment should come before the economic bottom line. We're all grown up now, thanks. Isn't it time that our central forms of measurement – society's eyes and ears, and government's guide for policy decisions – caught up? Maybe then the numbers would start to add up, too.

Copyright Sean Butler 2004

Published in Saturday Night Magazine, June 2004

Posted by: seanmichaelbutler | March 4, 2010

THE GRAMMAR OF LOVE

“What’s the difference between ‘need to’ and ‘have to’?” asks Jen, poking her head out of her intermediate English textbook.
“’Need to’ is stronger, like ‘I need to see a doctor’,” I reply, “while ‘have to’ is used more for obligation. You explained this to me yourself two months ago.”
“Did I?”
“Sorry to interrupt,” chimes in Mary, a middle-aged Brit teacher, “but does anyone have a rubber?”
Jen and I, both Canadian teachers, burst out laughing at what we know to be another case of mistranslation from British to North American English. Mary looks back and forth at us in bemused but genuine puzzlement.
"Do you mean an eraser, Mary?"
High school English classes be damned, grammar can be fun. At least this was the conclusion I came to after six months of teaching English – while trying to stay one step ahead of my students and learn its finer points – in Quito, Ecuador. Why, you might ask, is 'bad' worse than 'too bad'? Why do noses run and feet smell? By what logic are 'wise man' and 'wise guy' opposites, or 'quite a lot' and 'quite a few' synonyms? Have you ever run into someone who's gruntled, ruly, or peccable? And if a vegetarian eats vegetables, just what does a humanitarian eat, anyway? The confusion was aggravated by the fact that even people who, in theory, speak the same language – the mix of British and North American teachers at my school – couldn't agree whether to meet 'at' or 'on' the weekend.
Teaching English abroad has to be one of the few professions you are allowed to learn on the fly, due to the near desperation of schools worldwide for teaching staff to meet the demand for the latest international language. At my school, each teacher became a disciple of a particular grammar book, cherishing our chosen tome with the fervor of the devout. With our favourite volume clutched tightly to our side, we felt we could smite the foulest of grammar riddles.
Like sailing, grammar has its own vocabulary. “Check the gudgeons and pintles, secure the antecedents, tighten the cunningham and hoist the non-restrictive relative clauses!” Yet the worst riddle a teacher can face is not one peppered with this esoteric grammatical terminology, but simply, why? “Just because” will only quell rebellion for so long before you have to shift tactics. “Why do you think?” is a good way to stall for time while your mind races to find the creative grammatical leap that the situation demands. If you’re lucky, a student will suggest a way to reel in the rogue grammar point before desperation forces you to find a hidden order amidst the linguistic chaos:
“Well, if the word starts with the first 12 letters of the alphabet, or refers in some way to the colour blue, or rhymes with ‘suite’, then it is followed by a predicate nominative. Of course, there are a few exceptions, as always, but let’s not worry about that right now.” It’s amazing the clarity of mind a bit of pressure can impart.
While teaching, I hit upon an idea for a book. I would call it The Grammar of Love. The present continuous tense is the most romantic of tenses, I think. Doesn't the phrase 'I am loving you' revive the tired old simple present 'I love you' with the new life of a moment unfolding as you speak? And what a difference the humble preposition 'in' makes when dropped in front of 'love'. To 'fall in love' may be good grammar, but how much more poetic and descriptive to 'fall into love'. Poetry and idealism can be found even in the names of the tenses themselves: a perfect past, a perfect present and a continuous present – selective amnesia, wild optimism and Taoism all find expression in grammar.
But grammar gives with one hand while it takes with the other. As my knowledge of English grammar blossomed, my grasp of Spanish grammar remained mired in the tangled undergrowth of verb conjugations. With a bit of practice, however, I found that I could cloak the language in the skimpiest of simple present tense garments, button it up with a few choice words like ‘since’ and ‘ago’, and get away with it. I could dodge the tricky present perfect answer demanded by the question, “How long have you been here?” with a simple and to the point, “Since September.” I became a master of brevity: When asked at an informal get-together to translate what the director of the school had just said in English to the Spanish-speaking staff, I pointed and complied with an efficient, “Comida alli (Food there).”
If my Spanish was occasionally a source of amusement for Ecuadorians, then their English provided me with equal entertainment. The burgeoning tourist trade is currently far outpacing advancements in the knowledge of English, resulting in a plague of bad English on signs and in publications everywhere. As profitable as a service that offered to correct these slip-ups could be, owners of certain signs, such as, “You are invited to take advantage of the chambermaid,” or, “The manager has personally passed all the water here,” should be kept blissfully unaware of their gaffes.
Before I started teaching I had always thought of grammar – if I thought of it at all – as a set of rules imposed on once liberated speech by dusty, obsessive-compulsive men in tweed suits. However, I soon realized that English grammar is as anarchic as the words it delineates. I remember the moment that my perception of grammar underwent this fundamental shift. I was looking up a problem in my faithful grammar book and found as an answer, “This is an area of grammar not yet fully understood – more study is required.” The thought that there were uncertainties in grammar, frontiers in knowledge to be pushed further back, staggered me. Grammar is not consciously invented, it is discovered! My heart sang at the thought of untamed conditionals still roaming free, resisting the grammar cowboy’s lasso and branding iron. Images of tweed were replaced by men clad in denim, watching over their herds of clauses.
So if you think you know all about semi-colons, you can find me out on the range. We’ll settle our differences like true grammarians: by drawing grammar reference books at high noon – on the weekend, okay?   

Copyright Sean Butler 1999

Published in the Ottawa Citizen, April 24, 1999

Posted by: seanmichaelbutler | March 4, 2010

FATTENING UP FOR WINTER

This fall, much to my surprise, I was seized with the sudden urge to get fat. For most of my adult life my lanky 6′1″ frame has weighed in at a scant 135lbs, prompting cries of distress from aunts who wanted to fatten me up, and giggles from girlfriends the first time they saw me in shorts. I was well proportioned until I hit puberty, after which my weight remained constant while my height towered skyward, stretching my body thin like a taut elastic. This despite eating enough on a regular basis to feed a village – a classic case of high metabolism. (Or so the "experts" would claim; I wouldn't rule out the possibility that all those extra calories were needed to feed a brain of exceptional qualities. Either that or a tapeworm was sharing my meals.) Despite my scrawny appearance, I was the envy of many because I could indulge in every food fantasy without paying a price in pounds. It was not until my early thirties that my weight started to creep leisurely upward; by the time I turned 34 this past year, I weighed about 144lbs.
  The reasons for my sudden revolt against the lank are probably more complex than I can fathom. A significant direct catalyst was spending six weeks as an intern where my room and board were provided. While the board was very healthy, it lacked sufficient carbs to feed my insatiable metabolism and I returned home with my normally gaunt figure looking even more emaciated than usual. It was a visceral (quite literally) lesson in the close relationship between the words “internship” and “internment”. I hadn’t felt so skinny since the time I ran off with a girl to Mexico with only $500 in my pocket, split up with said girl once down there, and lived off a poverty diet of tortillas and beans for two months until I could scrape together enough English teaching money to buy a bus ticket home.
Understanding the more indirect causes of my revolt requires the context of my peculiar character: I have always been a contrarian, delighting in confounding others' attempts to pin my personality down, yet at the same time taking a sincere interest in self-renewal, reinvention, and growth, and this fall my need for renewal was peaking (yes, the end of yet another relationship was involved). I had also convinced myself, not unreasonably, that the direction of weight gain was, for me, the way to greater health. Some website told me that the "healthy" weight range for someone of my height began at 150lbs, and I reasoned that six extra pounds would provide a bit of a buffer should I again fall into love or internment. Finally, winter was approaching, so a little fattening up seemed in order. Knowing my motivations to be less than fully sound, however, I entered into the endeavour in the spirit of idle curiosity and experimentation rather than dire commitment.
A distinctive feature of my weight gain program was that I cared little whether the extra pounds came from fat or muscle – I would gladly welcome them in whatever form they chose to come. Of course, certain forms were more happily obtained than others. Dessert every night seemed a more pleasurable course than a set of pushups. Yet, given the immensity of the challenge I had set myself, I knew it would be unwise to ignore any tactic. After more internet consultation of inconsistent merit, I settled on an overall plan of three snacks a day (one mid-morning, one mid-afternoon, and one just before bed), dessert every night, as much lazing about as practicable, and daily pushups. The basic idea was to increase caloric intake (from fat especially), decrease activities that burn a lot of calories (like exercise), and give a nod to resistance training. I'll be the first to admit that perhaps it was not a regime designed for maximum health. But I did believe it would be the fastest way to pack on the pounds, which was my only goal, after all. (Interestingly, the modifier "healthy" was usually inserted before "weight gain" in the online advice, but rarely preceded "weight loss", implying that all weight loss is by definition healthy while only some forms of weight gain are – a tellingly suspect assumption.)
I stuck to my program about as faithfully as your average dieter probably sticks to theirs; I skipped a lot of snacking, was remiss in dessert on a number of occasions, and was unfaithful to my pushup routine to the point of dereliction. Although I did my best to avoid any activities that got my heart rate up, thus wastefully burning precious calories, I was occasionally peer pressured into walks, sometimes even uphill. Nevertheless, after about a month of the new regime I had gained two pounds! I danced for joy (although not too aerobically), and started eating with a renewed fervour. Bring on the triple cream brie!
I only bothered counting my calories one day (in the belief that since I was simply trying to eat as many calories as I could there was little point in counting them) but that day coincided with one of my best performances to date: a sprawling, greasy breakfast of eggs Benedict and fried potatoes, a large bag of Doritos in the mid-afternoon (which, to my delight as I read the nutritional information on the back of the bag, accounted for nearly 1000 calories alone), a couple of beers, and not one, but two dinners. All told it added up to almost 4000 calories – enough to feed two of me!
But this heroic day also alerted me to my limits, for I didn’t feel entirely well after that gorging – had I been in ancient Rome I may have paid a visit to the vomitorium (although apparently it is a misconception that the Romans had such rooms). I needed to pace myself better if I was going to maintain my appetite. Christmas – the Olympics of Eating – was only weeks away, and my stomach needed to be in top digesting form if I was to reap the incredible weight gain potential that awaited me in the holiday season.
In the weeks leading up to Christmas, a few women friends started voicing concerns for my health, foreseeing diabetes and other grim misfortunes if I kept eating like I had been. While I appreciated their concern for my well-being, I brushed it aside (friends always resist change!), instead pulling up my shirt and asking them to poke my new layer of fat. I was pretty sure it was there – a thin stratum of chub just under my skin – and was eager to have it validated.
Finally the long feast of the Christmas season was upon me, kicked off with a traditional turkey dinner at my father's, followed by a vegetarian smorgasbord with my mother. The leftovers from both repasts fell to me – my fridge packed to the brim in solidarity with my stomach. Next, a seemingly endless parade of friends' dinners, potlucks, and parties followed. I stumbled from one buffet table heaped with rich delights to the next, groaning from the ceaseless barrage of delicacies, my digestive tract reeling, unable to deal with one assault before being hit by a fresh volley. At what I considered must be the peak of my gluttony, I crawled upstairs to weigh myself once more on the bathroom scale – and tipped it at just barely 150lbs. Yes! At last, I was healthy!
My concerned friends were relieved to hear me declare an end to my weight gain program. Mission accomplished. But like that other mission once declared accomplished, it’s the peace which follows victory that’s much harder to win. The gridlock in my intestines was backed up for miles. I yearned to feel that long lost feeling again – hunger – and for the elevated taste of food that comes with it. I craved emptiness. I had hit a wall; I could eat like a pig no more. My flirtation with 150lbs was fleeting, and unsustainable. By the end of the holidays I was back down to 145lbs.
Yet I’m glad I took this roller coaster ride; I am proud to have joined that society of great athletes who have pushed their bodies to their very limits. But one cannot live always in the airy heights; one must, most of the time, accept oneself, and be happy with who one is.

Copyright Sean Butler 2009

Published in The Ottawa Citizen, March 28, 2009, as “Pass the Poutine (sigh)”
