Anti-austerity protests in Spain.

Since the initial days of the Obama administration, pundits and politicians alike have been predicting what they call “a complete government shutdown.” Fix the deficit, balance the budget, cut spending; such is the rhetoric coming out of the Republican establishment that has deluded the political mainstream. As the federal public debt now approaches $16 trillion, it raises the question — when can we expect this doomsday scenario? When can we expect this assumed government collapse? And then there are the endless threats of future hyperinflation — made by the same crowd that has been issuing such silly “predictions” for over a century.

With all these individuals expecting an inflationary Armageddon and a looming debt crisis, you would expect the argument to hold some water. To understand why such fears are unfounded, the nature of money must first be properly understood in its modern context. Its technicalities can be explained with an economic theory I find particularly fascinating: Modern Monetary Theory (MMT).

To start, we must realize that money is no longer pegged to gold. Its value is instead backed by the state (i.e., fiat money). This has profound implications for economic theory. For one, it means that the validity of the currency itself rests on the government maintaining a monopoly over its issuance. The government asserts this value through taxation. Thus, private confidence and taxation establish the basis of exchange value in a national economy; fiat money has no intrinsic value of its own. From this basis, can government truly “run out” of money if it is its sole provider? Before the end of the Bretton Woods system in 1971, when currency was still pegged to gold, it most definitely could. Since money printing was linked to gold at a fixed ratio, states were forced to limit their spending in accordance with revenue or promptly borrow from other governments. Now the monetary system has changed. Government is no longer like a “credit card,” as is so absurdly claimed, onto which we add our expenses and pay later. It is the issuer of currency, not a receiver of it as households in the United States are.

The economic flow can be broken down into two main spheres — the private sector and the public sector. The private sector accumulates assets by spending less than its income, resulting in savings. In effect, this increase in savings is an accumulation of government-backed currency and bonds. Therefore, in order for private wealth to accumulate, government liabilities must rise in parallel. For this to happen, government must spend more than it receives in taxation, creating more IOUs (fiat money). This is what is commonly known as “the deficit” — government spending minus its tax revenue. The government’s financial liabilities are therefore equal to the private sector’s net financial assets, since a creation of private financial wealth demands a greater circulation of fiat money, which is created by issuing currency. Interestingly enough, this means that when the budget is fully balanced, net private financial wealth is at zero. And if a government runs a surplus, net private financial wealth falls into the negatives, since the private sector is now indebted to the public. Likewise, it is impossible for both the public and private sectors to simultaneously run surpluses, since one’s ‘debt’ is the other’s surplus [1].

The above graph shows the aforementioned relationship. It can be represented by the following formula:

G – T = (S – I) + (M – X)

Here, government spending (G) minus taxation revenue (T) gives the fiscal position, either deficit or surplus, represented by the blue line on the graph; this equals savings (S) minus investment (I) plus the external balance (i.e., imports minus exports), represented by the red line. The relationship is demonstrated quite clearly: in order for private financial assets to rise, financial liabilities in the form of government deficits must rise as well. The correlation is especially strong in the years after the dismantling of the Bretton Woods agreement, after which the United States became fully based on fiat currency rather than being linked to gold. Ever since, the deficit and net private wealth have been roughly equal in absolute value.
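The identity above is an accounting statement, so it can be checked mechanically. Here is a minimal sketch in Python with purely hypothetical figures (the function names and all the numbers are illustrative assumptions, not real data from the graph):

```python
# Sectoral balances identity: (G - T) = (S - I) + (M - X)
# All figures below are hypothetical, in billions of dollars.

def government_balance(G, T):
    """Fiscal balance: positive means a deficit, negative a surplus."""
    return G - T

def nongovernment_balance(S, I, M, X):
    """Domestic private balance (S - I) plus the external balance (M - X)."""
    return (S - I) + (M - X)

# A toy fiscal year: government spends 3800 and taxes 2900;
# the private sector saves 1200 and invests 500;
# the country imports 2300 and exports 2100.
G, T = 3800, 2900
S, I = 1200, 500
M, X = 2300, 2100

deficit = government_balance(G, T)                         # 3800 - 2900 = 900
private_plus_foreign = nongovernment_balance(S, I, M, X)   # 700 + 200 = 900

# As an accounting identity, the two sides must match to the dollar.
assert deficit == private_plus_foreign
print(f"Government deficit: {deficit}, non-government surplus: {private_plus_foreign}")
```

Note that this is bookkeeping, not a behavioral model: the point is simply that a government deficit necessarily shows up, dollar for dollar, as a surplus somewhere in the non-government sectors.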

This, in itself, has profound implications. For one, we now understand the link between government deficits and the accumulation of private financial assets. Moreover, understanding government’s monetary role means taxation can be used to curb negative externalities and regulate key industries rather than simply to gather revenue. The function of government, in essence, changes, allowing it to further alleviate unemployment and elements of poverty. However, economic oversight remains crucial: if deficit spending continues past full employment, inflationary pressures can ensue, because the accumulation of financial assets would, for the moment, stagnate.

Now, the inevitable question: what about Greece?

Greece is in a complex situation, much different from that of the United States. Greece uses the euro, which is controlled by the European Central Bank, the central monetary authority of the Eurosystem. Since Greece lacks control over its own currency, it has been brought into complete chaos through forced austerity cuts, bailouts with strings attached, and violent public unrest. Lacking monetary sovereignty, restrained to reserves beyond its grasp, it is unable to control its debt crisis. The same situation plagues much of Europe. The euro states are unable to print their own currency, and are thus forced to succumb to the bullying of Germany to balance their budgets, which has left countries like Spain in disastrous economic conditions.

Oftentimes, the example of Weimar Germany or even Zimbabwe is brought up as a counter-argument to the validity of Modern Monetary Theory, to showcase hyperinflation caused by fiat currency. Economist Randall Wray addresses this issue in his writing, discussing Germany after World War I:

Yes, once the economy gets to full employment, then extra government deficit spending can start driving up prices. But what happened in Weimar Germany was very different. During that time, the government was forced to pay extremely large war reparations in foreign currencies which it didn’t have. So it had to aggressively sell its own currency and buy the foreign currency in the financial markets. This relentless selling continuously drove down the value of its currency, causing prices of goods and services to go ever higher in what became one of the most famous inflations of all time. By 1919, the German budget deficit was equal to half of GDP, and by 1921, war reparation payments represented one third of government spending. And guess what? On the very day that government stopped paying the war reparations and selling its own currency to buy foreign currency, the hyperinflation stopped [2].

Now, there is one particular example we can point to in order to show the prowess of MMT. Currently, Japan’s debt-to-GDP ratio is over 200% and growing:

And yet Japan experiences no “government shutdown.” Perhaps even more interestingly, it is the largest single non-eurozone contributor to the rescue projects instituted to ‘save’ the European Union — a total of $60 billion as of March 2012. Japan even pumped $100 billion into the IMF during the height of the crisis in 2009, all whilst its adversaries deemed it “bankrupt” [3]. How is this possible? Why is Japan not defaulting? The key is in its monetary system and its handling of deficits. Most importantly, Japan controls its own debt; 95% of it is held domestically by the Japanese themselves, with the remainder held by foreign central banks [4]. Since the debt is largely domestically owned, Japan is able to maintain its deficits whilst also keeping an impressive system of social programs.

Although I have presented only a rudimentary view of Modern Monetary Theory, this should suffice to show how misguided the current discussions in the political realm are. Rather than discussing the structural issues of the American economic system, we are bickering over the technicalities of a budget whilst standards of living drop and income inequality rises. To make matters worse, a crucial aspect of debt is consciously ignored — the issue of private debt. Households are crippled by personal debt taken on to make up for stagnant wages, an issue I discussed in detail in my piece on debt deflation and crisis. The situation is dire, and the proposals to cut necessary programs for already-struggling families in order to “balance the budget” are laughable at best, and downright frightening at their very worst.

***

The Deficit: Nine Myths We Can’t Afford 

Deficits Do Matter, But Not the Way You Think

Marxism and Monetary Theory: A Bibliography 

Understanding the Modern Monetary System

Debt, Deficits, and Modern Monetary Theory

.. hidden from me in my miscellaneous assortment of unfinished notes from last summer.

I. THE NECESSITIES IN FREEDOM  

The prerequisites of liberty are simple and natural. They correspond with one’s aspirations, the triumph of human will, and the realization that man’s mind is his greatest tool. Liberty is thus omnipresent in the human imagination, and it has been made conscious ever since man’s first walk out of the swamps of his ignorance. The application of this ideal, however, is fairly recent and symbolically represents a shift in the human mind: from negative dogmatism and intellectual chains to free thought and beauty. It is in man’s liberation, emancipating him from the shackles of mental slavery, that he will find his place in the natural order – one that maximizes the potentiality of his rational mind, humanizes his labour, and eliminates his alienation from the fruits he creates.

The intellectual origins of freedom date back to Enlightenment thought and the beginnings of modern scientific inquiry. It follows from the Lockean concept that man in nature is in perfect freedom, and that it is only when he accepts the social contract with the state that he relinquishes such freedom. Therefore, we are bestowed certain self-evident, inalienable rights, given to us simply for being individuals, that cannot be usurped by any sovereignty. These positive natural laws serve as a humanizing factor and philosophically elevate humanity above the status of a mere lowly creature; man is much more than simply some “object.” He is to be free from coercion, his life cherished, and his freedom preserved – for his mind is ever-growing, and it must be protected because it is invaluable. It is from this axiom we postulate a corresponding society: one that values such pure, absolute liberty as static, never-changing, and unable to be forsaken – one that realizes that free association is the only proper mechanism for determining ethical relations, since it is the only such system that fosters a free society of independent peoples. It is from this that our true emancipatory potential is reached, to its utmost extreme.

Paris Commune, 1871.

I. The First Big Leap

The transition to a new communicative medium has never been easy for any society. From our lofty origins in oral tradition to the new techie substitutes, such a dynamic has never been without consequences. With the advent of a new methodology comes a loss of elements of the old. And with it also come those that oppose the change — those that regard it as vile and damaging to order and stability. Socrates, for one, was skeptical of the early transition to the written word. In his dialogue Phaedrus, Plato captures Socrates’s words (perhaps ironically) in a story about the Egyptians:

Socrates: But when they came to letters, this, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality [1].

Using this tale, Socrates tells us what is lost with the written word: the passion of speaking, the revelation of dialogue, the bearing of truth. He postulates that writing not only degrades truth, it works only to reproduce it rather than expound it authentically. To Socrates, it denigrates memory by promoting record-keeping rather than mental recollection and contemplation. In essence, it introduces forgetfulness and keeps man from bearing the responsibility of remembering for himself. It is also constant; it bears no substantive change over time, other than, perhaps, its interpretation. And finally, it does not discriminate among its audiences — making it accessible even to those that do not understand it. A speaker can change his tone and message depending on the audience. A work of writing cannot.

Through this dialogue, Plato captures Socrates’s main concern: sustaining the art of rhetoric and fruitful dialogue. Was Socrates right; were some of his ‘predictions’ fulfilled? Absolutely, we certainly did lose something when oral tradition lost prominence. We lost the art of “story-telling,” and perhaps also some of the values of tribal kinship, but we remarkably gained much more. We attained the ability to spread ideas more quickly and to keep thoughts well-preserved for future generations to enjoy. Ironically, it is because of writing that Socrates is so revered today, despite the criticisms he had of it.

Not surprisingly, however, much of the initial mistrust voiced during the development from oral tradition to written word has been lost. Without a written account of these criticisms, such accusations have failed the test of time — Socrates’s is the only one that remains, thanks to Plato’s writings, but we can only assume similar criticisms were being thrown around at the time. It is very unlikely that Socrates was the only individual making such claims in his day and age.

II. Suppression and Turmoil

“The printing press is either the greatest blessing or the greatest curse of modern times, one sometimes forgets which” – James Matthew Barrie 

“The Saint Bartholomew’s Day Massacre” by François Dubois.

Turmoil ensued after the creation of a new technology that would radically alter communication. The printing press was invented in the 1440s by Johannes Gutenberg, and with it came violent social upheaval and a loss of Church dominance. With Protestantism on the rise, catalyzed by Martin Luther’s Ninety-Five Theses and spread through mass printing, the Catholic Church finally saw a threat to its power. It soon scrambled in fear; Pope Innocent VIII introduced censorship in 1487, requiring that the Church approve all books before publication [2]. The Bible was prohibited from being printed in any language except Latin. Violence erupted in Western Europe as sectarian religious conflict escalated. Huguenots were slaughtered in France by Catholic mobs during the latter half of the 16th century, supposed heretics were burned at the stake during the Spanish Inquisition, and the Thirty Years’ War, rooted in religious territorial disputes, became a full-scale European conflict by the first quarter of the 17th century.

Perhaps most importantly of all, the bloodshed Europe experienced after the introduction of the printing press tells us of the power of ideas. The Catholic Church was left relatively unchecked in its power and prestige before Gutenberg’s revolutionary invention. Once ideas could spread more efficiently, dissent began brewing within Church dominion. In truth, the persistent efforts of the Catholic Church extended far beyond the religiosity it was attempting to control; it was the representative of state power during the Middle Ages. During the height of Catholic rule, individual nations were fragmented and lacked governmental oversight in any meaningful degree. Domestic policy was open, and governance was mostly left to Catholic elites within the appointed hierarchy. The spread of a new communicative medium, the printing press, threatened the Church’s power. Its efforts to preserve its authoritarian hold came under the guise of preserving Catholicism, but that was populist sentiment meant to stir peasant support rather than the actual motivation. The Church still functioned as any other state apparatus; as a rule, the free flow of ideas is antithetical to centralized power. The Catholic Church was no exception in this regard. It scrambled to secure its power just as any other power structure ultimately does when it feels threatened.

The Original Printing Press.

Catholic control would continue to diminish as the decades went on. The Enlightenment questioned the very nature of divine rule, and nationalism began to fully flourish after the Greek War of Independence, eventually replacing Christian “unity” with nationalist fervor. The printing press, and the quick dissemination it enabled, would consequently spark national, linguistic, and cultural unity amongst regional peoples, which would form the basis for nationhood. Professor Benedict Anderson analyzes this phenomenon in his book “Imagined Communities,” in which he attributes the spread of nationalism to what he calls print-capitalism. The profit incentive for print-masters to increase circulation was so strong that they soon abandoned Latin as the standard and adopted regional languages to facilitate sales [3]. Soon, regional ties began to emerge as individuals began to relate to one another through their languages and dialects, which evolved into nationalism and the modern nation-state. More generally, this spurred the beginnings of the modern market and facilitated trade amongst commoners. The Catholic Church now found the land it once controlled severely cut, as regions began forming their own governmental structures based on linguo-ethnic commonalities, eventually replacing Catholic dominion with state control. It was over; the Catholic Church had finally lost its iron grip. A new epoch had emerged.

III. Reaching Modernity

“Modernity” is characterized by all the gadgetry we enjoy today. Television, radio, and the telephone have all advanced our communicative capabilities and allowed us to be in tune with each other and with issues beyond our immediate setting. Recent developments, however, have transcended these inventions and surpassed them in capacity. The Internet may well be the most remarkable and revolutionary creation of the modern era. Characterized by globalized communication, easy access, and plentiful information, the Internet has created an aura of data that has perhaps exceeded the human ability to take it all in. The social impact has been unequivocally exceptional. Spurring social movements in the Middle East, facilitating transparency in governance, and instigating awareness and understanding of worldly phenomena, the Internet has created an atmosphere rich in progressive potentiality and knowledge. It has brought an entirely new dimension to the validity of “spontaneous order.” The Internet, it seems, was created out of pure spontaneity, its branches a natural development when left to its own means.

The Icon of the Declaration of Internet Freedom.

One of the largest problems in any society is the distribution of information. Generally speaking, whoever controls the flow of knowledge ultimately holds the populace by the handles. Slowly, with each new communicative development, this centralization of information has drastically decreased. The commoners became able to read, to write, and to engage in discourse — to a limited degree. With the advent of the Internet, this entire dynamic has been turned on its head. In its purest form, the Internet is the democratization of information. Practically anyone with access can comment on and discuss issues. Rather than being restricted to academic elites, such topics have moved from the institutional setting to the populist pool of discussion. Credentials, at least on the Internet, have become largely defunct.

In its current form, Internet discussion is in its infancy. Given the fallacious claims and unsubstantiated arguments that frequent comment threads, we must realize that these developments are still fundamentally in their early stages. The discussion has been handed to the people, for all to delve into, and it must now be absorbed likewise. Never before has there been such an explosion of knowledge given to the masses, and it can only be expected that its effects will take several decades to fully take root. The so-called “Internet Generation” will, predictably, adapt to such changes and grow used to its functions once they come of age.

Of course, as such changes begin to surface, questions arise. Speculations have been made that the Internet has made us supposedly “dumber” [4]. These Neo-Luddite criticisms bear resemblance to Socrates’s hesitations during the transition to the written word — we are losing a crucial component of our memory, we will realize only superficiality, and our attention will be diluted, it is said. The same archaic arguments are resurfacing, unsurprisingly. In another interesting parallel, the governmental organizations of the modern world are in a frenzy over the Internet’s potential for conflict, just as the Church was when it was threatened. In an effort to curb imaginary terrorism, legislation such as ACTA has been constantly brought to the table to address cyber-terrorism, patent law, and threats to domestic tranquility [5]. These resolutions have always come with a human face, promising safety and proclaiming their supposed necessity. Underneath this persona is the real intention: the free flow of information is a threat to corporate and state power. Monopolization of power is in the interest of those within the dominion structure, and any clash of opinion is seen as at odds with normalcy. The Internet has brought this conflict to the forefront. The struggle between those that wish to constrain information and those that hope to free it has become an acute contention in the modern world. We can only hope the institutions that wish to exert this control crumble before the conflict escalates. Freedom comes at a price; and it must be defended likewise.

***

“The Critics Need a Reboot. The Internet Hasn’t Led Us Into a New Dark Age.”

“The Impact of Print” 

Some more information on Professor Benedict Anderson and his work, “Imagined Communities.”

“The Death of Marat” by Jacques-Louis David.

Art is a complex phenomenon that has frequented philosophic circles since the days of Socrates. Scrambling to pinpoint a concise definition, thinkers have attempted to encapsulate objective meanings of aesthetics in an effort to fully understand what constitutes ‘beauty.’

The issue is that art has no distinguishable intrinsic value of its own; it is as good as its audience deems it to be. Whether the audience is a group of commoners or a collection of art critics, works of artistic value have to substantiate their worth through harsh criticism — only thereafter falling into the category of real, praiseworthy ‘art.’ This interpretation of art is valid in many respects, but it must also be realized that art must serve a function. It is certainly not purely subjective, since it derives its status from collective admiration, and it must portray a universally relevant idea in order to capture its audience.

My goal here is not to differentiate between what is ‘art,’ and what is not, as that is an exercise in futility, and entertaining that point is relatively useless. Rather, the question should be phrased: “What constitutes good or proper works of art?”

“Want it? Enter” by Vladimir Mayakovsky.

The struggle for humanization involves articulating our consciousness, our fears and dispositions, into a medium that is accessible and unifying. This medium is art. Art should portray an ecumenical sentiment and should be a statement on the environment we inhabit. Rather than uselessly capturing the banality of alienated industrial life, its function is to distance us from mechanization and uniformity. It should introduce spontaneity, commentary, and subtle discontent where our own lives do not. Art should function as a medium through which we escape alienation. By association, this means that art is, by definition, antithetical to restraint and modern conditioning. It seeks to escape them, to realize human potentiality outside the bounds of current mechanisms. By need rather than choice, it must function outside these bounds because it expresses, by its very nature, an ideal. A work produced within the confines of modern production would hardly be revealing, since it would be restricted to portraying only feelings that are already realized. The struggle is to bring out conditions that elevate these sentiments, which requires working outside the confines of modern alienated labor and life, to highlight the potential of bettering our current condition and status. It is by this token that true ‘art’ is not conservative — it is, by necessity, progressive in its idealism and commentary. The Greeks, perhaps the first real admirers of beauty, understood this quite well, creating sculptures and paintings of the ‘perfect’ form and physique. They were attempting to capture an ideal distant from their own lives, and thus were in the tradition of real artistry.

However, there are social means that pervert and downgrade art, bringing it back into the restrictive confines of bourgeois industrial life. Profit, as a general rule, distorts its true function. Art cannot act as an escape if it is crafted within the model of mass production. It loses its individuality, the heart of its meaning, if it is created in bulk by groups driven by monetary gain. It also loses its ability to depict anything outside the contemporary, becoming a self-congratulatory, trivial blanket statement that praises the lifestyle it is a part of, rather than criticizing and dissociating itself from it. The problems of artistry are heavily intertwined with the general struggle of humanization. Art is a core component of reflection. The function it assumes, and how well it communicates it, is what differentiates the good from the bad, the masterworks from the mediocre. Serving as an escape from alienation, art takes on a crucial form in human development. Without it, our inner emotions would be bound to the present, with no way of articulating what we wish to become. It is in this way that art is an important realization of what it means to be “human,” and a stimulus for progressive insight and change — be it in mind or in action.

The beliefs of Western liberal society are at a fundamental crossroads. In one direction lies secular humanism — in the other, an ancient Judeo-Christian heritage and its supposed claim to relevance. Most individuals walk a very fine line between the two, holding onto the cultural implications of religion while not minding its declining involvement in government. Belief acts as the mediator that holds this delicate balance together.

Belief, in and of itself, is an obligatory view. It is a tenet you live your life by, and it has profound implications for your social psychology and the general organization of a civilization. It would be foolish to discredit the influence of religiosity in the West, in spirit and in practice. However, belief can function as a sort of ideological trap. Simply put, acting on a belief is not equivalent to actually believing it. Philosopher Slavoj Zizek provides us with a story to illustrate this point, in which he tells the tale of physicist Niels Bohr.

“A well-known anecdote about Niels Bohr illustrates the same idea. Surprised at seeing a horseshoe above the door of Bohr’s country house, a visiting scientist said he didn’t believe that horseshoes kept evil spirits out of the house, to which Bohr answered: ‘Neither do I; I have it there because I was told that it works just as well if one doesn’t believe in it!’” [1]

In this excellent passage, Zizek essentially explains the function of belief in modern society. Although individuals may personally not believe an ideology, they act as if they do because they take it that others believe. In fear of reprisals, they then live as if the belief were theirs. But there is a twist: what if the other individuals do not believe it either? With this, an entire belief system is built upon the existence of non-belief among individuals. I take religion to be in this same stride, functioning as a belief in a sea of disillusioned disciples.

Such a statement is hardly revealing to the standard American Christian household. The father takes his son to Church, to educate his child on Christian values. The father, himself, was pressured to do so by his own parents. They would be disappointed if he raised his children without such a pretense. The father, himself, does not believe, but acts as if he believes to give a proper impression on his parents. The child lacks the belief also, but to not disappoint his father, he refuses to tell him. Instead, he acts as if he believes. Here, we have a situation of two non-believers, paradoxically imposing a belief on one another. Would it not be another twist of irony to say the father’s parents do not believe, just as the father and the son do not? This belief is likewise solidified, passed through familial relationships, and built upon a structure of non-belief — giving those trapped within this dilemma the illusion of a belief that is absent from the individual’s own choosing, being imposed on them by the technicalities of human relationships.

This is the death of God. The death of God is not an external invasion of the Christian church hierarchy. It is not an attack from outside the prayer circles — it is within them. It is when God as an entity becomes irrelevant to the actual substance of belief, replaced by a complex foundation of non-belief. In Europe, trends of non-belief are stronger than in the United States. According to surveys by the Financial Times/Harris Poll, only 27% of individuals living in France truthfully believe in a Christian God or supreme deity, contrasted with 73% of those in the United States [2]. Bearing in mind the different histories of European and American ancestry, I take it that such a large disparity in religiosity is largely due to the culture of the United States. Religious disbelief is looked down upon, even persecuted, in American media and society — denigrated in excessively negative terms. The question is, how many of the religious belief structures in the United States are founded on fear of consequences? Potentially very many, I would say.

However, the implications extend further than Zizek’s story on ideology. Equally important are those that believe (generally for cultural reasons) but live their lives as if they do not. Reduced to ritualistic ends, their religious ideology becomes a routine rather than a philosophy of action. For many Western Christians, this is the reality. They find themselves drifting to church on some Sundays, then vehemently arguing over whether we should say “Merry Christmas” during the holidays and fighting to preserve prayer before football games [3]. The extent of Christian ideology in American culture has largely become a gimmick of cultural preservation more than anything else, serving as the last backlash of a decaying social phenomenon.

Christian ideology makes many universal claims. It promotes objective truth and meaning, a belief system that is dogmatic and held to be true by its disciples. They have this bastion of knowledge, the key to God’s judgement and mercy, that is said to be the absolute truth. And yet they live their lives as if it were hidden, only resurrecting it (excuse the pun) when socially beneficial. If an individual held such truth of the universe and believed so strongly it was true, would they not devote their entire lives to it, rather than bickering over trivialities on cable television? The charade of these religious charlatans defending “Judeo-Christian America” is a testament to the hypocrisy of the ideology in the hearts of those who follow it. True belief would not spend itself in discussions on media sensationalism, in an attempt to keep what always has been in American society; it would prepare, and act, in the interests of God and rely on his judgements. Perhaps if they took God’s objective truth to its fullest conclusion, they would sit and pray rather than rely on themselves. If they are so convinced of their beliefs, they would be equally convinced God would give them a hand.

The death of God does not involve the elimination of religion, nor the tearing down of religious institutions. It involves the hollowing out of religion by its believers. It makes God into a centerpiece of disbelief, propped up by complex interlocking relationships and cultural enforcement. A belief propped up by non-belief, it finds itself the comfort of those who fear the destruction of their religious and cultural identity. It finds itself the poster child of reactionary backlash, the broken center of the exaggerated dichotomy of secularism and religiosity, and the illusory opponent of civil institutions, championed by religious disciples who lack the belief themselves. During the height of Catholic ascendancy, the belief was not so fractured. Prayer was seen as a powerful tool; the Devil was a real, distinguishable threat. We have long abandoned such views, despite what is heard in Evangelical circles (I can assure you there would be little hesitation among them to take human action over prayer if their own lives were in peril). Let’s be frank: God is dead. The emperor has no clothes on, we are looking straight at him, but we are too naive to admit it.

Bernie Madoff — the con, the criminal, the fraud, and the scum of the corporate establishment. These were the titles given to this corrupt financier, but above all, he was said to simply be a “bad egg” in a basket of well-intentioned entrepreneurs and “job creators.”

However, despite these claims, Madoff’s case is not unique. Madoff’s real crime was that he stepped outside the circle of appropriate corporate conduct, whose edge tends to gravitate farther and farther from lawfulness as income rises. The reality of wealth privilege within institutions publicly seen as ‘just’ is a casualty of a system that rewards excess. Most shocking, however, is how the personal endeavors of these individuals clash with their fraudulent actions. Madoff, perhaps, is the epitome of such a phenomenon. While stealing billions of dollars, he was also a devoted philanthropist. His largest beneficiary was the Picower Foundation, which allocated the funds to organizations such as the Boy Scouts of America and the Children’s Aid Society. The NY Times reports the funds as:

* 2007 — $23,424,401 (See the 2007 Form 990 filed by the Foundation with the Internal Revenue Service.)
* 2006 — $20,184,183 (See the Form 990.)
* 2005 — $27,662,893 (See the Form 990.)

In total, $958 million was donated to the Picower Foundation.

Other charities were involved, some almost entirely dependent on Madoff’s funds. As reported by the NY Times, they included:

  • $145 million to the Carl & Ruth Shapiro Family Foundation
  • $20 million to Tufts University
  • $18 million to the Jewish Community Foundation of Los Angeles
  • $19 million to the Madoff Family Foundation
  • $90 million to the Hadassah, the Women’s Zionist Organization
  • $100 – $125 million to Yeshiva University

These are incredible amounts of money, so abuse comes as no surprise; but is it not an anomaly that the worst white-collar criminal in history was also one of the ‘greatest’ philanthropists, by modern standards? Acting as a perverse indulgence, charity might not be as chivalrous an act as socially understood. Seen as a mechanism of redemption, this behavior is typical in this category of criminal activity. Bernard Ebbers, convicted in 2005 of similar crimes, showed the same pattern, having donated over $100 million to charity over the course of ten years. Corporations are no exception; Enron was also a known giver to charity:

Enron CEO Kenneth Lay exemplified the company’s philanthropy, endowing several professorships at the University of Houston and Rice University, while the company itself was known for its generous gifts to arts groups, scholarship funds, and the Texas Medical Center.

Such behavior, interestingly enough, parallels the religious attitudes seen when the Catholic Church held immense power in Europe during the Middle Ages. In an effort to ‘save’ those in Purgatory, having committed sins on Earth, priests charged individuals sums of money for indulgences, or remissions, to free their loved ones from this supernatural limbo or to limit their time trapped in it. Priests, making huge personal profits, attempted to justify their accumulations through Church-sanctioned actions. In effect, they stole with one hand and ‘saved’ with the other.

In a modern twist, corporate criminals are looking for that same metaphysical ‘salvation,’ and they have certainly found it in charity. Functioning as an egoist drive, this behavior only highlights the disparity of conduct within certain classes of the social strata. Little rationality can be found among those who accumulate such large reserves of financial power, as they scramble for redemption in a sea of fraud and narcissism. It is this crude revelation that illustrates the paradox of corporate conduct: as long as you appear charitable, what is done behind closed doors is forgivable. Or so the twisted mindset goes.

***

More info on the “Paradox of Fraud and Philanthropy” 

Just as a quick note, I’ve gotten around to moving most of my old posts from Blogspot onto this medium. So if any of you are wondering, “hey, look at all these new posts!”, they’re simply writings that have accumulated over the last few months on my old blog. Be sure to check them out; I managed to order them by date as they originally appeared, but please forgive me if there are any formatting issues. I’ll be fixing them in the upcoming days.

When analyzing debt and economic growth, usually only government debts are examined. They are seen as a corollary to economic crises, devaluation of currencies, and government defaults — and while I’m not going to dispute or discuss these claims here, perhaps on a later day, I will say that they are misleading lines of analysis in relation to the current financial crisis. There is another kind of debt that is up for discussion and more pertinent to the crisis of 2007: credit market debt, which consists of domestic non-financial sector debt (household debt, business/corporate debt, and government debt) and domestic financial sector debt.

This explosion of credit began around the time ‘Reaganomics’ was instituted, when individuals took to borrowing and spending over saving despite stagnant wages.

A more detailed look of the trend since 2002, with its peak. The shaded area depicts the length of the recession.

However, the above graphs show the total credit market debt. Broken down, household (consumer) credit debt depicts the same trend.

What does all this mean? Fundamentally, it means that the expansive economic growth of the previous three decades was on shaky footing to begin with, likely leading to the global economic collapse that followed. The impact of the credit boom since the 1980s is described in an article for the research institute Center for American Progress (CAP) by Christian E. Weller. He writes:

“The debt is highest among the middle class. Middle-income families before the crisis had a debt-to-income ratio of 155.4 percent in 2007, the last year for which data are available, for families with incomes between $62,000 and $100,000, which constituted the fourth quintile of income in our nation in 2007. This ratio is higher than for any other income group. Families in the top 20 percent of income (with incomes above $100,000) had a ratio of debt to income of 123.6 percent, and families in the third quintile (with incomes between $39,100 and $62,000) owed 130.7 percent of their income. Households in the bottom 40 percent of the income distribution (with incomes below $39,100 in 2007) owed well below 100 percent of their income.”
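The ratios Weller cites are straightforward to reproduce: a debt-to-income ratio is simply total household debt divided by annual income, expressed as a percentage. A minimal sketch, using a hypothetical family whose numbers are chosen to match the fourth-quintile figure from the quote (the dollar amounts are illustrative, not from the underlying survey data):

```python
# Debt-to-income ratio: total household debt divided by annual income,
# expressed as a percentage.
def debt_to_income(total_debt, annual_income):
    return 100.0 * total_debt / annual_income

# A hypothetical fourth-quintile family earning $80,000 with $124,320 in debt:
ratio = debt_to_income(124_320, 80_000)
print(f"{ratio:.1f}%")  # 155.4% -- the middle-class figure Weller reports
```

The striking point of the quote is visible in the arithmetic: a family can owe half again its entire annual income before the crisis has even begun.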

Shocking as it is, this is not the first time such a credit upsurge has occurred. A similar phenomenon took place before the Great Depression of the 1930s. Samuel Brittan, in his review of Richard Duncan’s ‘The New Depression: The Breakdown of the Paper Money Economy,’ writes:

“It is certainly striking how both the 1929 Wall Street crash and the 2007-08 financial crisis were preceded by a huge credit explosion. Credit market debt as proportion of US gross domestic product jumped from about 160 per cent in the mid-1920s to 260 per cent in 1929-30. It then fell sharply in the 1930s to its original position. Later it surged ahead in two upswings after 1980 to reach 350 per cent of GDP in 2008.”

This analysis of crises in relation to credit market debt is attributed to economist Irving Fisher, whose ideas were largely ignored in favor of the mainstream Keynesian view of economic crises, which argued that they were caused by an insufficiency of aggregate demand. Since the economic crash of 2007, Fisher’s ideas have enjoyed a resurgence in economic thought. His theory of debt deflation has been of significant interest to the heterodox Post-Keynesian school of economics, and is now beginning to enter the mainstream. Economist Paul Krugman discusses Fisher’s ideas in a post on his NY Times blog “The Conscience of a Liberal” — below is the graphic taken from the article (with added information).

Since the total credit market debt owed has been stagnant since late 2009, having reached its ‘peak,’ and if GDP keeps steadily rising, it is likely that debt deflation will occur much as it did during the Great Depression. However, private debt still hinders the consumer, and if spending is ever to increase significantly, the issues of wages and consumer debt must be addressed.
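The core of Fisher’s debt-deflation mechanism is simple arithmetic: nominal debts are fixed by contract, so when the price level falls, the real burden of those debts rises, prompting further liquidation and further deflation. A toy illustration (the numbers are hypothetical, not historical):

```python
# A fixed nominal debt grows heavier in real terms as the price level falls;
# this is the arithmetic core of Fisher's debt-deflation theory.
def real_debt(nominal_debt, price_level, base_price_level=1.0):
    """Deflate a nominal debt to base-period prices."""
    return nominal_debt * base_price_level / price_level

nominal = 100.0               # debt fixed by contract
for p in (1.0, 0.9, 0.8):     # price level falls by 10%, then 20%
    print(f"price level {p:.1f} -> real burden {real_debt(nominal, p):.1f}")
```

A 20% fall in prices raises the real burden of the same $100 debt to $125 even though no new borrowing occurred; debtors cutting spending to carry this heavier burden is what Fisher argued deepens the slump.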

***

– An analysis of the total credit market debt by Crestmont Research.

The joke, as Zizek tells it, goes along these lines:

A man is convinced he is a grain of seed. He is quickly taken to a mental institution, where the doctor eventually convinces him that he is not a grain of seed; he is a man. He is then supposedly cured and is permitted to leave the hospital. However, once he steps outside, he immediately rushes back in, trembling with fear. “There is a chicken outside,” the man says, “and he is going to eat me.” The doctor tells him, “Come now, you know very well you are not a grain of seed, but a man.” “You and I surely know that,” the man tells him, “but does the chicken know?”

This tells us something about the nature of psychoanalytic study — it is not enough to convince the patient of the truth; the patient must also be convinced that others assume that same truth. It is this struggle of truths that encapsulates the psychiatric field, which attempts to normalize individuals who have accepted a reality different from that of their peers.

In 1893, Frederick Jackson Turner presented his landmark essay The Significance of the Frontier in American History to a gathering of academics at the World’s Columbian Exposition. Turner, in his thesis, argued that the unique American frontier experience shaped the United States’ development and created a distinct culture and political condition. In essence, the frontier was responsible for molding the American character into what it is.

While his thesis certainly stands, the “Old West” also brought with it an economic anomaly, a differentiating aspect that made the United States’ economic upbringing particularly strange. From its colonial origins and throughout the 1800s, the U.S. economy was consistently plagued by shortages of labor. These shortages would influence the development of slavery in the South, where plantation owners found it necessary to import more slaves to sustain their agricultural output. These shortages would also be the reason for the influx of immigrants throughout the 1800s, who were subject to extreme prejudice from nativists once some forms of unemployment actually became evident.

The above graph depicts estimates made by the Bureau of Labor Statistics. However, they are likely on the high side, given the impossibility of knowing the actual levels of unemployment. Little surveying was done, regional statistics were not kept, and much of the American population was self-employed. This makes assessing the unemployment rate during this period of exceptional American growth difficult. Further complications arise when youth employment, which customarily started from the age of 10 in most areas, is added into the calculations. Since not all households required their children to work, making fully accurate estimates is nearly impossible.

However, given the growth of American industry during the 1800s, basic assumptions can be made. For one, the inventiveness of the U.S. industrial economy can be properly explained if the labor shortages are taken into account. Because of the lack of labor in the United States, industrial capitalists had to rely on new technology to increase their output and offset the shortage of laborers. From this predicament developed what came to be called the American System of Manufacturing. Because of its efficiency, it was revered among industrialists in Europe. Its most important contribution was the creation of interchangeable parts, which allowed industry to drastically increase output while keeping costs to a minimum. This coincided with the high degree of mechanization taking root in the United States with the beginnings of the first Industrial Revolution.

Much of this technological advancement was also a product of the contention between agricultural and industrial regions during the United States’ great economic expansion. Although these clashing interests date back to colonial times, the creation of the General Land Office in 1812 was a turning point. This independent federal agency was responsible for surveying and distributing public domain land in the largely unexplored territories of the United States. Two laws in particular addressed the rationing of these lands: the Preemption Act of 1841 and the Homestead Act. The former was passed to ration pieces of the uncultivated territory at a price; up to 160 acres could be purchased at a time, and at very low prices. It was done to encourage those already occupying federal lands to purchase them. The Homestead Act, first enacted in 1862, was similar in intent: it gave applicants roughly 160 acres of land west of the Mississippi River free of charge. Now northern industrialists not only had to deal with labor shortages; they also had to satisfy their workers enough that they would not opportunistically leave and go westward.

The frontier experience did much more than cultivate the unexplored land westward; it intensified the shortages of labor in the United States. This scarcity created an inventive industrial sector that had to compensate by developing new technology, which would ultimately lead the United States to the economic dominance it enjoys today. Economist Richard Wolff, in a few of his lectures and writings, theorizes that it was this remarkable condition that created a very different experience for those living in the United States.

“What distinguishes the United States from almost every other capitalist experiment is that from 1820 to 1970, as best we can tell from the statistics we have, the amount of money an average worker earned kept rising decade after decade. This is measured in “real wages,” which means the money you earn compared to the prices you have to pay. That’s remarkable. There’s probably no other capitalist system that has delivered to its working class that kind of 150-year history. It produced in the U.S. the expectation that every generation would live better than the one before it, that if you worked hard, you could deliver a higher standard of living to your kids.”

Frankly, Wolff’s analysis makes sense. Rising wages kept the working class’s morale high and attracted immigrants; they also served as an incentive for working people to stay as laborers rather than receive land and move westward. So, fundamentally speaking, American employers faced competition in the labor market for two specific reasons. First, the federal land programs gave workers an incentive to move westward, so entrepreneurs had to give them reasons to stay and work, in the form of higher wages. Second, since labor was constantly in high demand, workers were not easily replaceable. This implicitly forced firms to raise wages to attract laborers to their respective industries.

In 2006, Michael Lind published an article in the Financial Times titled “A Labour Shortage Can be a Blessing,” which indirectly supports Wolff’s thesis on wages. He writes:

“In the ageing nations of the first world, the benefits of a labour shortage, in the form of higher productivity growth and higher wages, might outweigh the costs. Where labour is scarce and expensive, businesses have an incentive to invest in labour-saving technology, which boosts productivity growth by enabling fewer workers to produce more. It is no accident that the industrial revolution began in countries where workers were relatively few and had legal rights, rather than in serf societies where people were cheaper than machines.”

In order to validate Lind’s and Wolff’s claims, two specific economic questions must be examined historically. The first: is there evidence for such a labor shortage, and if so, how severe was it?

Given the estimates made by the Bureau of Labor Statistics, it would be safe to assume that unemployment was not a major issue during the 1800s. When youth employment is taken into consideration, the estimates become very inflated, since the labor pool was so large. However, besides macroeconomic analysis, there are specific scenarios which show that such a dilemma in production was indeed persistent in the United States during the 19th century. The PBS television series “American Experience” gives one particular scenario, during the construction of railroads in the 1860s, that validates this assumption.

“In early 1865 the Central Pacific had work enough for 4,000 men. Yet contractor Charles Crocker barely managed to hold onto 800 laborers at any given time. Most of the early workers were Irish immigrants. Railroad work was hard, and management was chaotic, leading to a high attrition rate. The Central Pacific management puzzled over how it could attract and retain a work force up to the enormous task. In keeping with prejudices of the day, some Central Pacific officials believed that Irishmen were inclined to spend their wages on liquor, and that the Chinese were also unreliable. Yet, due to the critical shortage, Crocker suggested that reconsideration be given to hiring Chinese…”

Historian Rickie Lazzerini describes a similar issue in Cincinnati, Ohio at the beginning of the 1800s.

“…the busy industries created a constant and chronic labor shortage in Cincinnati during the first half of the 19th century. This labor shortage drew a stream of Irish and German immigrants who provided cheap labor for the growing industries.”

The second question that must be asked is — was there actually a persistent increase in wages during the 1800s? 

Properly answering this question is immensely complex, since so little data is available. However, there exists one academic paper on the subject that addresses this question and the one posed prior. In 1960, economist Stanley Lebergott authored a chapter on wages in the 19th-century United States in a volume called “Trends in the American Economy in the Nineteenth Century,” published by the Conference on Research in Income and Wealth. The chapter itself was titled “Wage Trends, 1800–1900.” He writes:

“Associated with the enormous size of these establishments was the need to draw employees from some distance away. Local labor supplies were nowhere near adequate. One result was the black “slaver’s wagon” of New England tradition, recruiting labor for the mills. The other was the distinctly higher wage rate paid by such mills in order to attract labor from other towns and states. Humanitarian inclinations and the requirements of labor supply went hand in hand. Thus while hundreds of small plants in New York, in Maine, and in Rhode Island paid 30 to 33 cents a day to women and girls, the Lowell mills generally paid 50 cents” [451].

Regions that lacked adequate quantities of labor had to rely on larger wages to attract workers from afar. However, apart from the industrial north of the United States, farm wages also increased — perhaps signifying a competitive rift between the agricultural sectors and the industrial ones.

Professor Lebergott, later in his analysis, provides the full wage computations he was able to calculate from individual data and trends recorded by local media. He combined the data he acquired on a state-by-state basis, starting locally and then branching out to create a national average. Note also that he attributes the drop in wages between 1818 and 1830 to “the close of the Napoleonic Wars and the end of the non-importation agreement.”

Based on economist Stanley Lebergott’s analysis, Richard D. Wolff’s assertions are validated; the United States, for the most part, did enjoy rising real wages throughout the 19th century. It also supports Michael Lind’s claim that shortages of labor can indeed cause wage increases and heighten technological innovation. It is very likely that the combination of the frontier experience and shortages in the production process created a variant of capitalism unique to the United States. It gave American households the confidence that if they worked harder, they would earn a better living. It also gave them the optimism that their children would enjoy a better standard of living.

This unprecedented century of growth and success also had an often-overlooked impact on the American psyche. The inflated expectations it produced instilled a unique mentality among working-class Americans. As John Steinbeck put it, the poor don’t see themselves as victims, but rather as “temporarily embarrassed millionaires.” It is this aspect of the American psyche that has allowed the broken system to flourish in the decades since wages began their persistent stagnation in the 1970s. Admitting the issue is just too difficult for some; if we believe enough, the American dream just might become real again, as it was for those traveling out West to find riches and fortunes. In retrospect, the sooner working-class Americans awake from this fantasy, the sooner they will realize that times have changed, and not in their favor.

*** 
– A lecture where Wolff discusses the frontier experience and 19th century wage increases.
– Some statistics and facts on U.S. economic growth during this time period.
– A decent article on this topic from the Wall Street Journal (you need a subscription to view it).