The string below is reproduced from a Facebook discussion of Jackson Lears's review of two books (by Sally Jacobs and Janny Scott) that appears in the London Review of Books:
[Opening entry from me, Tim Lacy] Check out the review below [above]. And here's the commentary I added at my profile page: "Most of this review is a complicated, informative look at Obama's family history. At the end Lears hits you with some lefty pessimism about where Obama's presidency is headed. Fine. The last year has inoculated most of us from that. But in the very last line---Boom!---Lears implies that Obama would be willing to draw us into a war with China. ...Wow. ...There's pessimism, and then there's pessimism. Lears thinks that Barry has inherited his father's arrogance, and is willing to apply it Bush-43-style to our foreign policy." ...Am I the only one who sees this as a bit far out?
Comments:
John Haas "... while the president dispatches US troops to Australia and the secretary of state to Burma."
The horror.
11 hours ago · Like
------------------------------------------------
Ben Alpers
To be fair to Lears, his argument about China is that we would never accept Chinese troops in Venezuela as we expect the Chinese to accept US troops in Asia. Obama, Lears concludes, is insisting on "the open door," not war with China, though he does suggest that the policy risks danger: "The open door for US involvement in Asia, flung wide in Japan’s face a century ago, is now reopened in China’s. One can only imagine the American reaction, were China to make a similar move in Venezuela or Colombia. Obama’s recoil from disappointment may turn out to endanger us all." I don't read Lears as saying anything controversial about what Obama is doing here, though one might well question Lears's psychological explanation for it as well as his sense that Obama's policy is fraught with danger for the US and the world.
10 hours ago · Like
------------------------------------------------
Ben Alpers How has the last year inoculated us against lefty pessimism about where Obama's presidency is headed? Or, rather, how has it so inoculated you?
10 hours ago · Like
------------------------------------------------
James Livingston
The paranoid style in American political biography, rendered now as psychological reduction to the ruined dreams of the father? This piece is of course a poignant measure of the academic Left's profound disillusionment with Obama. And the antidote--not the cure--is, of course, James Kloppenberg's long march through the intellectual history of the president, beginning in earnest with the legal realism and pragmatism learned at, uh, Harvard. But you can reach certain of Lears's conclusions by another, less reductionist path, and that would entail only this knowledge: like most academic leftists who believe the electorate has been in thrall to the Right since Reagan, Obama reads the country as center-right, and therefore believes he must compromise with the addled agenda of the Republican Party. In fact the country is center-left, and so this attitude of compromise with the Right is mostly unnecessary (see, to begin with, the Pew Poll Andrew Hartman cites). But notice: the academic Left wanted Obama to overrule what it took to be the right-center majority, in the name of truth, justice, and the real American way. The president, being a politician who studied Lincoln closely, doesn't believe he can ignore or flout public opinion, the practical embodiment of consent, so he's bound to disappoint those who, like Dick Cheney, don't care about this predicate of democracy. And notice this too: Obama is in essential agreement with the academic Left on the benighted state of the majority's opinion; he just can't get around it by declaring it stupid.
10 hours ago · Like
------------------------------------------------
Tim Lacy Ben: The lefty pessimism has been consistent enough---steady and at a low-enough level---for those who pay attention not to be fazed by yet another appearance of it. I have grown quite used to it. I suggested others may share my sense, but I don't guarantee it.
10 hours ago · Like
------------------------------------------------
Tim Lacy Ben: On your first comment, I don't see Obama as someone to recoil from disappointment in that fashion. I say this as a psychological comment, but it derives from 2+ years of watching him (remotely of course) react politically to things.
10 hours ago · Like
------------------------------------------------
Tim Lacy Jim: Thanks for putting into words a portion of my reaction, namely that this piece measures the academic left's disillusionment with Obama. And I want to be clear---I share some of it. I don't agree, however, that only the academic left wanted Obama to overrule the perceived center-right majority---a lot of rational, centrist, and moderately progressive folks wanted him to work harder against that perceived group.
10 hours ago · Like
------------------------------------------------
Ben Alpers Tim: I think we agree in our assessment of Lears's psychological explanation. To me it seems unnecessarily complicated. There are things that have surprised me a bit about the Obama administration (e.g. its tepid environmental record), but its foreign policy is pretty much exactly what I expected based on what Obama said throughout the 2008 campaign.
10 hours ago · Unlike · 1
------------------------------------------------
James Livingston
When I say overrule, I mean disregard and forge ahead, public opinion be damned, à la Cheney, unitary executive and all that. If you believe in public opinion as the embodiment of consent, you can't ignore it. If you read that opinion the way the Left, academic or not, does--as the expression of a population duped by the Right, à la Thomas Frank--you're stuck either with compromise or with the dictatorship of the A students, those who know better than you do what's best for you.
10 hours ago · Unlike · 1
------------------------------------------------
Ben Alpers
Jim: I agree about the implications of Frank's argument, but I'm less convinced that the Left (academic or otherwise) universally accepts it. There are just as many who insist that the public is actually to the left of the Democrats. On at least a handful of issues--a healthcare public option, ending US involvement in Afghanistan, and taxing the wealthy are three examples--polls suggest that they are...though I think this optimism about public opinion can lead one into political dead ends just as Frank's pessimism can. But especially since 2008, I've heard the argument that our political system has been captured by economic elites and is unresponsive to public opinion more frequently on the left than Frank-like arguments that a majority of the public has been duped by the right.
10 hours ago via mobile · Unlike · 1
------------------------------------------------
Varad Mehta From a recent Gallup poll: 42% of Americans describe themselves as "conservative," 37% as "moderate," and only 19% as "liberal." The center-right sure adds up to a lot more than the center-left. http://www.gallup.com/poll/151814/Americans-Huntsman-Romney-Paul-Closest-Ideologically.aspx
8 hours ago · Like
------------------------------------------------
John Haas That depends on what those "moderates" mean by the term. Many no doubt think "liberal" is an extreme designation. Many of those self-described "conservatives" also happen to generally support the New Deal.
8 hours ago · Unlike · 2
------------------------------------------------
James Livingston
The NORC at Chicago and the SRC at Michigan have been for years asking these so-called conservatives what they want, and they have invariably said--until June of this year--that they want more public spending on health and education. So it all depends on what you mean by "conservative." Newt Gingrich was, and is, right: most self-professed conservatives are actually liberals who don't trust or understand the people who act on "liberal" principles. Irving Kristol was also right in suggesting, back in 1978, that so-called liberals in the US had evolved into social democrats of the European kind.
8 hours ago · Unlike · 1
------------------------------------------------
John Haas
Yes to JL. We do have some "Manchester liberals" in this country--libertarians and those who lean in that direction. We have very few real conservatives--if by "conservative" you mean those who want to conserve ("good") values and life-ways, and who see threats to those coming as much from the market as from "big government." Our "liberals" are more social democrats, and are rooted as much in the progressivism of Lincoln and (T) Roosevelt as they are in anything else. If we want American labels for American predilections, we might better see contemporary American "conservatives" as essentially Jacksonians, and "liberals" as Whigs. Like the Jacksonians, American conservatives are not afraid of executive power (as long as their guy has it) and they're not shy about big government when it's associated with war. Government spending designed to benefit everyone--or, even worse, demographic groups not their own--they hate, however, along with a national bank/federal reserve, national roads/green energy, and red and brown folk hanging around on land they desire . . .
8 hours ago · Like
------------------------------------------------
Let's continue the discussion here! (or is it ?) - TL
Saturday, December 31, 2011
Wednesday, December 28, 2011
African Americans' desire and more on the racial protocol
I am returning to thinking about the racial protocol, which was begun here. The original quote that introduced that phrase to me is:
"The literary theorist Claudia Tate developed the term 'racial protocol' for the assumption that African Americans' experiences can be reduced to racial politics and that individual subjectivity carries little importance. As a result of the racial protocol, much writing about African Americans focuses entirely on racial struggle and not on the human experiences that would move the analysis beyond a two-dimensional representation of African Americans' lives."
--Anastasia Curwood, Stormy Weather: Middle-Class African American Marriages between the Two World Wars
This contrasts starkly with Michael West's and William Martin's argument that the black international “has a single defining characteristic: struggle.” This struggle is born of consciousness and the dream of a “circle of universal emancipation, unbroken in space and time” (From Toussaint to Tupac: The Black International Since the Age of Revolution, 2009). To me, this suggests that African Americans can be wholly understood through "the struggle." Or at least primarily understood. That is insufficient for understanding the lived experience of African Americans.
Turning to Tate's book that gave rise to the term racial protocol, Psychoanalysis and Black Novels: Desire and the Protocols of Race (1998), I think she offers a more nuanced perspective, one that does not neglect race and the struggle, but puts it into conversation with individual personality. She writes specifically about novels, but I think her approach can be broadened to many other forms of African American writing:
"The black text mediates two broad categories of experience: one is historically racialized and regulated by African American cultural performance; the other is the individual and subjective experience of personal desire signified in language."
She warns,
"If we persist in reductively defining black subjectivity as political agency, we will continue to overlook the force of desire in black texts as well as in the lives of African Americans."
For my work, this means that I explore relationships between people, whether or not they influenced the individuals' understanding of "the struggle," starting with as much as I can understand about an intellectual's childhood. I also acknowledge similarities between blacks and whites--in other words, if all you see about Juliette Derricotte is the way she lectured against racism, you miss her internal dialogue (happily available through rich letters to her family) and you miss the ways that she unconsciously replicated the discourse of white colonial travelers. All three are important to understanding how and why Derricotte acted.
On my original post, Tim suggested that I was trying to do "research for research's sake"--understanding black people's lives for the pure satisfaction of understanding. I think, though, that by adding personal desire--the internal dialogue of black people (to the extent that we can know it through the veil of dissemblance)--to understandings of political agency, we can more fully assess the strengths and weaknesses of social movements. At the same time, acknowledging and researching the internal lives of African Americans gives us a more nuanced perspective on the lived experiences of black people and helps us understand when and where race matters by acknowledging that sometimes it matters greatly and sometimes it doesn't.
Let me end with a final quote from Tate:
"Certainly, race matters. It matters precisely because in the United States 'race remains a salient source of the fantasies and allegiances that shape our ways of reading' all types of social experiences (Abel, 'Black Writing,' 497). These racial fantasies and allegiances have historically conditioned all social exchanges, and they continue to do so. Indeed, the racial conventions of the United States seem to have sentenced black subjects to protest forever the very deficiencies that white subjects presumably do not possess. Racism allows white subjects generally to assume that they have 'fully developed, complex, multi-layered personalit[ies]' (Prager, 'Self Reflection[s],' 357). By contrast, racism condemns black subjects to a Manichean conflict between their public performance of an essentialized, homogenous blackness, which is largely a by-product of white 'ideological formations' of racial difference (Althusser, 'Freud,' 219), on the one hand, and a private performance of individual personality, on the other." ...
"Whether we realized it or not, we all mediate in different ways the hegemonic effects of white male power with whatever authority we personally claim."
"These novels tell other stories about the desire of black subjects that do not fit the Western hierarchical paradigm of race as exclusion, vulnerability, and deficiency. These works depict what I call a 'surplus,' a defining characteristic not generally associated with African American personality and culture."
Tuesday, December 27, 2011
Ribuffo, "President James A. Garfield Had a Great Personality" (Personality and the Self Panel, Part III)
Dear Readers: As a special holiday season treat, I give you one of the more interesting panels from our recent conference--"Personality and the Self in Twentieth-Century American Social Thought." See the first paper by Dave Varel here. The second paper by Dave Steigerwald is here. Below are the comments by Leo Ribuffo.
PRESIDENT JAMES A. GARFIELD HAD A GREAT PERSONALITY
Leo P. Ribuffo, The George Washington University
In the generous spirit of S-USIH, this is less a comment in the AHA/OAH “gotcha” sense than some reflections on two interesting articles. My first reflection is that both of these essays deal with what might be called the self-absorbed era in the conceptualization of the self—and all deal primarily with middle class people or above in a rich world power during a relatively short span of time, the past 120 years or so. Accordingly, choosing a conception of the self was to an increasing degree voluntary, especially after the culturally normative “American Way of Life” of the Great Depression yielded to the looser notion of “life styles” in the 1960s and 1970s. This is the era, as David Varel stresses (following Warren Susman's classic essay), when, amid visions of affluence, an ascetic emphasis on “character” yielded to a “culture of personality” befitting a “culture of consumption.”
Without totally discounting the now standard notion that the search for the self in some sense escalated during the modern era, whenever that began, let me suggest that it had a longer lineage, was not confined to rich “Western” countries, and often involved what William James called forced options. Consider the following hypothetical situations:
A speaker in 331 B. C. E. Persia. “Believe it or not, guys, Darius III just lost to Alexander the Great. We’ve got to decide how Hellenized we’re going to become.”
Fast forward to the Indian subcontinent in the seventh century C. E. “Hey, guys, there’s this new religion going around called Islam. It doesn’t have a caste system. Sounds pretty good to me.”
Fast forward again to the sixteenth century—to a place our history department colleagues call early modern Europe. “Hey buddy, does the wine in church turn into Christ’s blood or is it just a symbol? Decide fast; we’re piling the kindling around the stake.”
And across the ocean in Peru: “Look, guys, I know the Spanish conquerors have really powerful weapons and are trying to win our hearts and minds with paintings of the Apostles as Indians, but don’t we owe it to our Inca ancestors to join Tupac Amaru’s revolt?”
Even for the prosperous United States (by world standards), choices about self were in play for more than a century before the era of self-absorption began in the late nineteenth century. We can see this behavior in many “keywords.” In addition to the ubiquitous “character,” we have for instance: republican virtue, honor, patriot, true woman, born again Christian, and manliness (preferably self-made). At the same time there were negative selves that should be avoided or (in the Darwinian worst cases) could not be avoided—undeserving poor, rebel, feeble minded, racial mongrel, and gook.
As Susman acknowledged, such notions did not disappear even as the “culture of personality” came to dominate the Zeitgeist. For instance, self-made manliness survived from Henry Clay through Booker T. Washington to Malcolm X and the Nixon White House, honor persisted from the Hamilton-Burr duel to Paul Goodman’s Growing Up Absurd, gook echoed from the Philippine War to the Vietnam War, and derision of the undeserving poor affected politics from Theodore Roosevelt's Square Deal to Bill Clinton’s signing of welfare reform, so-called, in 1996.
Nor should we forget the enormous legacy of Romanticism with its cult of the hero, which popularized the self-absorbed search for the self long before this disposition became professionalized. Despite the countless gospel of success guides published by Russell Conwell, Edward Bok, Garfield, et al., what red-blooded American boy would choose to clerk in that startup company Carnegie Steel instead of riding with General George Custer? At least until June 25, 1876.
And if we want a more complicated symbol (or modal personality if you prefer), although James Garfield wrote one of the classic tracts about achieving success through character, he did have a great personality even before there was a whole “culture of personality,” a fact confirmed by his phrenologist, by his rapid political rise, and by the three women madly in love with him during his early twenties.
David Steigerwald takes us from the early days of the professionalized search for the self to the 1970s. Steigerwald begins by bringing us back to the first heyday of guides to success, variously defined, in this world and the next, and nicely places this search in the context of a longer debate about free will and determinism.
Thus we return to the question that vexed scholars three decades ago in the heyday of the academic study of the gospel of success--such scholars as Susman, Donald Meyer, John Cawelti, Irwin Wylie, Richard Huber, and Lawrence Chenoweth--is William James responsible for Norman Vincent Peale? This question is a lot of fun, along with its kin: Is Marx responsible for Stalin, is Rousseau responsible for Timothy Leary, and what would Jesus do? Steigerwald gets it right in this instance, writing that James with his “famous open-mindedness” would have found some merit in Charles Reich’s The Greening of America.
Yet Steigerwald leads us astray when he dismisses the medical side of nineteenth century positive thinking as “self-evidently unscientific.” These theories were considered science by many Americans just as much as eugenicist warnings against racial mongrelization and the homeopathic medicine preferred by President Garfield. So too, later on, with Abraham Maslow cultivating his peak experiences and Wilhelm Reich absorbing the intergalactic libido in his orgone box.
The issue of “agency” also needs a closer look (“deconstruction” if a trendier word makes readers feel smarter). As currently bandied about, agency involves two related but separate questions. First, can “the people” in some sense think for themselves, or are they just easily manipulated dimwits--a notion already on the rise in American social science before there was a Frankfurt School even in Frankfurt, as Edward Purcell showed in his brilliant book The Crisis of Democratic Theory? As Purcell also showed, this question influenced the interwar debate between behaviorists and humanists in the social sciences.
Second, even if “the people” can think for themselves, do they have enough power--agency--to change the Zeitgeist let alone the social order? In the broad sweep of things, my sense is that people do think for themselves when they are moved to think, but that they tend to limit their thoughts and feelings to what seems possible. Hence most educated women went along with the feminine mystique in the 1950s, there is no socialism in the United States, and only a minority of Incas joined Tupac Amaru’s revolt.
Steigerwald sees two great eras of agency affirmation, and both seem to coincide with periods of major social change and political flux. The odd exception is the 1930s, and here Steigerwald’s Google search may have led him astray. Guides to success flourished during the Great Depression, none more so than Dale Carnegie’s How to Win Friends and Influence People. If there was a drop off in references to free will, perhaps this was because a critical mass of “the people” temporarily had both the will and the freedom to change things significantly (by American standards).
Varel takes us forward to Henry Murray, a self-described William Jamesian and one of the creators of “humanistic psychology”--as Murray called the field as early as 1930--along with Gordon Allport, Carl Rogers, John Dollard, and Abraham Maslow.
While agreeing that we need more than an “internalist” examination of Murray’s ideas, I am skeptical of Varel’s main choice of social-intellectual context. From Richard Hofstadter through Christopher Lasch to Jackson Lears, historians have noted the importance of personal crises about the meaning of life for modern thinkers, crises summed up in the key word “weightlessness.”
Murray doesn’t fit very well. As a youth he stuttered, felt rejected by his mother, and suffered from sexual repression but he is a long way from being a Jamesian “sick soul” and a modal personality for an age of “weightlessness.” Until his early twenties, Murray’s main crisis seems to have been guilt about his role in a rebellion against his Harvard crew coach that may have resulted in a loss to Yale. As we used to say in working class New Jersey before I encountered the psychiatric mode of denigration, the young Murray was a rich spoiled jerk.
In his early thirties Murray did suffer what he called a “profound affectional crisis” that involved immersion in Romantic literature, enthusiasm for Carl Jung, and lust for Christiana Morgan. Except perhaps for the Jungian infatuation, John Stuart Mill and Harriet Taylor would have understood ninety years earlier.
This affectional crisis did influence Murray’s work, including his senior authorship of Explorations in Personality, but so too did a broad assortment of ideas. Although Varel can only sketch the early years, we do need to appreciate what a juicy life Murray lived. By his early forties his circle included Alfred North Whitehead, Lewis Mumford, Conrad Aiken, Eugene O’Neill, Archibald MacLeish, Joseph Schumpeter, and Paul Robeson.
Varel neatly summarizes Explorations in Personality but we should appreciate, too, what an incredible mishmash the book is, a combination of empiricism, insight, jargon, empathy, and elite insularity. For instance, Murray in effect regrets that his Catholic subjects, adhering to their church's "rationalized fantasy system," are "blissfully" less neurotic than their Protestant and Jewish counterparts. Certainly Murray's work had the potential to nudge psychology in various directions. The TAT became a tool for sorting out corporate executives, as William Whyte satirized in The Organization Man. Murray personally influenced Talcott Parsons as well as Kenneth Keniston and Erik Erikson.
If we are looking for sweeping contexts, we might say that both Explorations in Personality and Murray himself in the late 1930s illustrate tenacious American optimism. According to Murray at that time, Freud’s Civilization and Its Discontents exuded “black despair.”
Since Murray lived well into his nineties, he can be used as a symbol or modal personality for many intellectual trends. After the United States entered World War II he discovered evil, informed the world of its existence in his writings, and administered TATs for the Office of Strategic Services--“great fun.” Murray testified for the defense at the second Alger Hiss trial, diagnosed Whittaker Chambers as a “psychopathic personality” on the basis of his writings, and took pride in coming off better on the witness stand than Cornell psychiatrist Carl Binger. Joining in the post-World War II disenchantment with “the people” this rich spoiled jerk complained that soldiers, students, and his own research assistants were becoming uppity. Ever adaptable in self-absorption, however, Murray enjoyed an acid trip with Timothy Leary.
As we move into the 1960s and 1970s, Steigerwald sees the denouement of the descent from Jamesian giants to Maslovian pygmies. Although guides to success variously defined still proliferate in all sorts of media, the late 1970s marked the end of a phase in the discussion of the self. It had been a triumphant phase marked by psychological interpretations in venues as significant as George Kennan’s Cold War tract “The Sources of Soviet Conduct,” the United States Supreme Court decision in Brown v. Board of Education, and Justice Harry Blackmun's majority opinion in Roe v. Wade.
Maslow, Reich, and others criticized by Steigerwald, a good Laschian, have never been my cup of chamomile tea, and I have been trying all of my professional life to bury the phrase “paranoid style in American politics.” But as a Susmanite I have always had a soft spot for the gentler positive thinkers. Many people would be better off listening to Maslow than listening to Prozac. And with the resurgence of “economic man,” construed with a stunted conception of rationality that would surprise Adam Smith, I am softening further. Come the revolution, perhaps we could sentence the authors of Freakonomics and the policy wonks at the American Enterprise Institute to a few weeks in a hot tub with Charlie Reich.
PRESIDENT JAMES A. GARFIELD HAD A GREAT PERSONALITY
Leo P. Ribuffo, The George Washington University
In the generous spirit of S-USIH, this is less a comment in the AHA/OAH “gotcha” sense than some reflections on two interesting articles. My first reflection is that both of these essays deal with what might be called the self-absorbed era in the conceptualization of the self—and all deal primarily with middle class people or above in a rich world power during a relatively short span of time, the past 120 yrs or so. Accordingly, choosing a conception of the self was to an increasing degree voluntary, especially after the culturally normative “American Way of Life” of the Great Depression yielded to the looser notion of “life styles” in the 1960s and 1970s. This is the era, as David Varel stresses (following Warren Susman's classic essay), when, amid visions of affluence, an ascetic emphasis on “character” yielded to a “culture of personality” befitting a “culture of consumption.”
Without totally discounting the now standard notion that the search for the self in some sense escalated during the modern era, whenever that began, let me suggest that it had a longer lineage, was not confined to rich “Western” countries, and often involved what William James called forced options. Consider the following hypothetical situations:
A speaker in 331 B. C. E. Persia. “Believe it or not, guys, Darius III just lost to Alexander the Great. We’ve got to decide how Hellenized we’re going to become.”
Fast forward to the Indian subcontinent in the seventh century C. E. “Hey, guys, there’s this new religion going around called Islam. It doesn’t have a caste system. Sounds pretty good to me.”
Fast forward again to the sixteenth century—to a place our history department colleagues call early modern Europe. “Hey buddy, does the wine in church turn into Christ’s blood or is it just a symbol? Decide fast; we’re piling the kindling around the stake.”
And across the ocean in Peru: “Look, guys, I know the Spanish conquerors have really powerful weapons and are trying to win our hearts and minds with paintings of the Apostles as Indians, but don’t we owe it to our Inca ancestors to join Tupac Amaru’s revolt?”
Even for the prosperous United States. (by world standards) choices about self were in play for more than a century before the era of self-absorption began in the late nineteenth century. We can see this behavior in many “keywords.” In addition to the ubiquitous “character,” we have for instance: republican virtue, honor, patriot, true woman, born again Christian, and manliness (preferably self-made). At the same time there were negative selves that should be avoided or (in the Darwinian worst cases) could not be avoided—undeserving poor, rebel, feeble minded, racial mongrel, and gook.
As Susman acknowledged, such notions did not disappear even as the “culture of personality” came to dominate the Zeitgeist. For instance, self-made manliness survived from Henry Clay through Booker T. Washington to Malcolm X and the Nixon White House, honor persisted from the Hamilton-Burr duel to Paul Goodman’s Growing Up Absurd, gook echoed from the Philippine War to the Vietnam War, and derision of the undeserving poor affected politics from Theodore Roosevelt's Square Deal to Bill Clinton’s signing of welfare reform, so-called, in 1996.
Nor should we forget the enormous legacy of Romanticism with its cult of the hero, which popularized the self-absorbed search for the self long before this disposition became professionalized. Despite the countless gospel of success guides published by Russell Conwell, Edward Bok, Garfield, et al., what red-blooded American boy would choose to clerk in that startup company Carnegie Steel instead of riding with General George Custer? At least until June 25, 1876.
And if we want a more complicated symbol (or modal personality if you prefer), although James Garfield wrote one of the classic tracts about achieving success through character, he did have a great personality even before there was a whole “culture of personality,” a fact confirmed by his phrenologist, by his rapid political rise, and by the three women madly in love with him during his early twenties.
David Steigerwald takes us from the early days of the professionalized search for the self to the 1970s. Steigerwald begins by bringing us back to the first heyday of guides to success, variously defined, in this world and the next, and nicely places this search in the context of a longer debate about free will and determinism.
Thus we return to the question that vexed scholars three decades ago in the heyday of the academic study of the gospel of success—such scholars as Susman, Donald Meyer, John Cawelti, Irvin Wyllie, Richard Huber, and Lawrence Chenoweth: is William James responsible for Norman Vincent Peale? This question is a lot of fun, along with its kin: Is Marx responsible for Stalin? Is Rousseau responsible for Timothy Leary? And what would Jesus do? Steigerwald gets it right in this instance, writing that James with his “famous open-mindedness” would have found some merit in Charles Reich’s The Greening of America.
Yet Steigerwald leads us astray when he dismisses the medical side of nineteenth-century positive thinking as “self-evidently unscientific.” These theories were considered science by many Americans just as much as eugenicist warnings against racial mongrelization and the homeopathic medicine preferred by President Garfield. So too, later on, with Abraham Maslow cultivating his peak experiences and Wilhelm Reich absorbing the intergalactic libido in his orgone box.
The issue of “agency” also needs a closer look (“deconstruction” if a trendier word makes readers feel smarter). As currently bandied about, agency involves two related but separate questions. First, can “the people” in some sense think for themselves, or are they just easily manipulated dimwits? The latter notion was already on the rise in American social science before there was a Frankfurt School even in Frankfurt, as Edward Purcell showed in his brilliant book The Crisis of Democratic Theory. As Purcell also showed, this question influenced the interwar debate between behaviorists and humanists in the social sciences.
Second, even if “the people” can think for themselves, do they have enough power--agency--to change the Zeitgeist let alone the social order? In the broad sweep of things, my sense is that people do think for themselves when they are moved to think, but that they tend to limit their thoughts and feelings to what seems possible. Hence most educated women went along with the feminine mystique in the 1950s, there is no socialism in the United States, and only a minority of Incas joined Tupac Amaru’s revolt.
Steigerwald sees two great eras of agency affirmation, and both seem to coincide with periods of major social change and political flux. The odd exception is the 1930s, and here Steigerwald’s Google search may have led him astray. Guides to success flourished during the Great Depression, none more so than Dale Carnegie’s How to Win Friends and Influence People. If there was a drop-off in references to free will, perhaps this was because a critical mass of “the people” temporarily had both the will and the freedom to change things significantly (by American standards).
Varel takes us forward to Henry Murray, a self-described William Jamesian and one of the creators of “humanistic psychology”--as Murray called the field as early as 1930--along with Gordon Allport, Carl Rogers, John Dollard, and Abraham Maslow.
While agreeing that we need more than an “internalist” examination of Murray’s ideas, I am skeptical of Varel’s main choice of social-intellectual context. From Richard Hofstadter through Christopher Lasch to Jackson Lears, historians have noted the importance of personal crises about the meaning of life for modern thinkers, crises summed up in the key word “weightlessness.”
Murray doesn’t fit very well. As a youth he stuttered, felt rejected by his mother, and suffered from sexual repression, but he is a long way from being a Jamesian “sick soul” and a modal personality for an age of “weightlessness.” Until his early twenties, Murray’s main crisis seems to have been guilt about his role in a rebellion against his Harvard crew coach that may have resulted in a loss to Yale. As we used to say in working-class New Jersey before I encountered the psychiatric mode of denigration, the young Murray was a rich spoiled jerk.
In his early thirties Murray did suffer what he called a “profound affectional crisis” that involved immersion in Romantic literature, enthusiasm for Carl Jung, and lust for Christiana Morgan. Except perhaps for the Jungian infatuation, John Stuart Mill and Harriet Taylor would have understood ninety years earlier.
This affectional crisis did influence Murray’s work, including his senior authorship of Explorations in Personality, but so too did a broad assortment of ideas. Although Varel can only sketch the early years, we do need to appreciate what a juicy life Murray lived. By his early forties his circle included Alfred North Whitehead, Lewis Mumford, Conrad Aiken, Eugene O’Neill, Archibald MacLeish, Joseph Schumpeter, and Paul Robeson.
Varel neatly summarizes Explorations in Personality, but we should appreciate, too, what an incredible mishmash the book is, a combination of empiricism, insight, jargon, empathy, and elite insularity. For instance, Murray in effect regrets that his Catholic subjects, adhering to their church's "rationalized fantasy system," are "blissfully" less neurotic than their Protestant and Jewish counterparts. Certainly Murray's work had the potential to nudge psychology in various directions. The TAT became a tool for sorting out corporate executives, as William Whyte satirized in The Organization Man. Murray personally influenced Talcott Parsons as well as Kenneth Keniston and Erik Erikson.
If we are looking for sweeping contexts, we might say that both Explorations in Personality and Murray himself in the late 1930s illustrate tenacious American optimism. According to Murray at that time, Freud’s Civilization and Its Discontents exuded “black despair.”
Since Murray lived well into his nineties, he can be used as a symbol or modal personality for many intellectual trends. After the United States entered World War II he discovered evil, informed the world of its existence in his writings, and administered TATs for the Office of Strategic Services--“great fun.” Murray testified for the defense at the second Alger Hiss trial, diagnosed Whittaker Chambers as a “psychopathic personality” on the basis of his writings, and took pride in coming off better on the witness stand than Cornell psychiatrist Carl Binger. Joining in the post-World War II disenchantment with “the people,” this rich spoiled jerk complained that soldiers, students, and his own research assistants were becoming uppity. Ever adaptable in self-absorption, however, Murray enjoyed an acid trip with Timothy Leary.
As we move into the 1960s and 1970s, Steigerwald sees the denouement of the descent from Jamesian giants to Maslovian pygmies. Although guides to success variously defined still proliferate in all sorts of media, the late 1970s marked the end of a phase in the discussion of the self. It had been a triumphant phase marked by psychological interpretations in venues as significant as George Kennan’s Cold War tract “The Sources of Soviet Conduct,” the United States Supreme Court decision in Brown v. Board of Education, and Justice Harry Blackmun's majority opinion in Roe v. Wade.
Maslow, Reich, and others criticized by Steigerwald, a good Laschian, have never been my cup of chamomile tea, and I have been trying all of my professional life to bury the phrase “paranoid style in American politics.” But as a Susmanite I have always had a soft spot for the gentler positive thinkers. Many people would be better off listening to Maslow than listening to Prozac. And with the resurgence of “economic man,” construed with a stunted conception of rationality that would surprise Adam Smith, I am softening further. Come the revolution, perhaps we could sentence the authors of Freakonomics and the policy wonks at the American Enterprise Institute to a few weeks in a hot tub with Charlie Reich.
Steigerwald on "The Willful Self" (Personality and the Self Panel, Part II)
Dear Readers: As a special holiday season treat, I give you one of the more interesting panels from our recent conference--"Personality and the Self in Twentieth-Century American Social Thought." See the first paper by Dave Varel here. This paper is by Dave Steigerwald. Comments by Leo Ribuffo will follow.
“Hollo! I must lie here no longer”:
Versions of the Willful Self from the Gilded Age to the Me Decade
by David Steigerwald, The Ohio State University
As one or two of you may be aware, I’ve made a bit of a living over the last few years criticizing the concept of individual agency in postwar America, especially in its application to consumerism. There it typically includes claims that consumers exercise some measure of decisive power over the marketplace when they make idiosyncratic choices about either what they purchase or how they interpret the goods they buy. Choice is good, this line of reasoning seems to go, and because it provides so much of it, contemporary consumerism must also be good. Because versions of this line of thought came to pervade a good deal of writing about consumerism from the 1980s on, it seemed to me worth poking a few sticks at. At its most serious and most fruitful, the consumer-as-agent argument was a necessary counter to Frankfurt School cultural determinism, that stifling intellectual blanket lying upon those who began writing in the 1970s and 1980s. [1]
Yet in criticizing the concept of consumer agency, I’m afraid that I left the impression that I was defending Frankfurt, which is only partly true. Adorno and Horkheimer’s most concise summary of their view of the radio listener (a paraphrase of Henry Ford, it seems to me), that the listener wants what they’re going to get anyway, still strikes me as sound. Still, I had this nagging fear that someone would say to me what a defender of free-will Methodism in the 1890s wrote in criticism of predestination: “Why hold on to [this belief] as with a death-grasp ruled by Calvin’s dead hand from his very grave, trying to soften its asperities, and still keep mumbling the decrees as Roman priests do the mass in an unknown tongue, patching new pieces of truth to old garments of error worn in the Dark Ages?” [2]
To skirt just that possibility, I’ve widened the inquiry into the doctrine of choice in a book on alienation and affluence in postwar America, which I’m now finishing. In it, I’m arguing that the doctrine of choice became the antidote to alienation not just in consumer culture, but as a means by which individuals might think of themselves as successfully negotiating through an age of automated labor, bureaucratic regimentation, political powerlessness, and a profound shift in values that issued from the evaporation of the Protestant ethic. “Choice” became the universal default, in part because it was ready to hand and in perfect harmony with consumer capitalism. But it also spoke to the essential social psychology of alienation: that is, the pervasive sense of individual estrangement and isolation. Even if only an invocation, the belief that individual choices are both freely rendered and individually efficacious can be enough to lift the pall of alienation for any given person. To the extent that the late-20th century political economy promised choices across the board—not just in soaps and shampoos, but in churches, communities however ephemeral or “virtual,” and technologies that provided private access to all—it effectively institutionalized such efficacy. And all this goes a long way toward explaining why we’re not alienated any more.
Having arrived at a via media all my own, it makes sense to me to admit, and follow up on, what should be obvious: that a debate over agency, or the efficacy of choice, is nothing new. While of course it is drawn out of the clash between the old Marxism and the New Left, it may be fruitful to see it as part of an even longer historical trajectory. We should see it, accordingly, as another episode in the old division between determinism and free will that runs back, in American letters, to the antinomian critique of Calvinism, and that intensified between 1880 and 1920 in the important debate between Darwinists and the theologically inclined—between, as one Darwin partisan put it in 1892, the “intellectual measles” of religion and “scientific certitude.” [3] A cursory look at the literature tells me that these forty years will take up the bulk of a book on the subject of free will. But a quick glance at the Google Ngram—one of my favorite new research tools—shows that the use of “free will” in modern American writing had two peaks: a double-humped one that began around 1870, dipped at the turn of the century, then accelerated through the Great War; the second erupting sometime around 1960 and running perhaps a decade or so. And the possible connections between these two periods, as well as the different uses and meanings of the ideas deployed, provide an intriguing and natural parameter for such a study.
The first question, then, should ask what structural similarities the Gilded Age shared with the 1960s and 1970s, or what I’ll call the mid-postwar period. What comes immediately to mind is that in both periods, fundamental socio-economic change apparently generated distinct social-psychological maladies: neurasthenia/hysteria in the Gilded Age; alienation in the mid-postwar period.
These two forms of psycho-social maladjustment, moreover, themselves shared similarities. Both were commonly accounted for as reactions against the warp speed of modernity. Train schedules, factory whistles, and the clock were superimposed on nineteenth-century people physiologically attuned to the leisurely pace of the natural world, while automation and mass communications generated “future shock” in the latter case. Both nervous exhaustion and alienation seemed to have weighed most heavily on sensitive young adults.
I have always been impressed, for example, by how clearly the sentiments in the Port Huron Statement echoed Jane Addams’s self-analysis in that wonderful essay, “The Subjective Necessity for Social Settlements.” Addams, of course, knew a thing or two about damaged youth, having been one herself. And in that essay she described the personal debilitations, the spiritual enervation, that plagued educated young adults who found themselves rendered useless by modernity’s material well-being. Whereas the generation before them was absorbed in the “starvation struggle” against the frontier, Addams’s generation had been elevated into comfort and educated presumably in order to take on a purposeful life, only to be deprived of meaningful outlets for their abundant vitalities. This situation was particularly intense for young women like Addams, whose typical fate was to graduate backward into the “family claim.” In the passage that has always caught my eye, Addams described with her usual clarity the quiet agony of her peers:
We have in America a fast-growing number of cultivated young people who have no recognized outlet for their active faculties. They hear constantly of the great social maladjustment, but no way is provided for them to change it, and their uselessness hangs about them heavily. Huxley declares that the sense of uselessness results in atrophy of function. These young people have had advantages of college, of European travel, and of economic study, but they are sustaining this shock of inaction. . . . Many of them dissipate their energies in so-called enjoyment. . . . Many are buried beneath mere mental accumulation with lowered vitality and discontent. . . . This young life, so sincere in its emotion and good phrases and yet so undirected, seems to me as pitiful as the other great mass of destitute lives. . . . Our young people feel nervously the need of putting theory into action, and respond quickly to the Settlement form of activity. [4]
Some sixty years later, Paul Goodman described almost the exact same problem as “growing up absurd,” and it is no stretch at all to think that Addams would have had enormous sympathy for the young people who in 1962 began their Port Huron Statement with the rallying cry: “We are people of this generation, bred in at least modest comfort, housed now in universities, looking uncomfortably to the world we inherit.”
Every bit of Addams’s language in the “Subjective Necessity” points us to another similarity in the distress that reached across time: both nervous disorder and alienation were maladies of the self, and because of that, they recommended at least some measure of subjective assertion. This is precisely Addams’s point. Earnest young people cut off from the starvation struggle of the urban masses could recover their physical vitality and sustain selfhood through settlement house work. Let me suggest that those alienated young people who launched Students for a Democratic Society and scoured the social landscape in search of objects for their political energies were doing the exact same thing.
Rather than making that argument here, I want to muck around in what I’m thinking of as a prior, more distinctly subjective reaction to psycho-social maladjustment. Both Addams and her peers, and Tom Hayden and his, alleviated their distress by throwing themselves into the public arena with the intention of changing objective conditions, and it is my view that such an effort was undoubtedly the healthiest, most meaningful antidote. But it raises the question of whether it is possible to overcome personal agony through purely subjective assertion, through something we could profitably think of as “the will.” If indeed one can successfully will oneself to mental well-being, if indeed the subjective is sufficient, then perhaps those who put great stock in individual agency are on solid ground. If the subjective will can, in a tangible way, affect the practical conditions of one’s life, then self-assertion is not merely subjective; it has some objective consequence. This, I take it, is what people mean by “agency.”
To get at this question, we should begin not with Jane Addams but with her contemporary, William James. James’s engagement in the debate over human volition was both important and typical of him, one of the important building blocks, as our good friend James Kloppenberg taught us, in the via media of American pragmatism. In Uncertain Victory, Kloppenberg described the “nature of the will” as “among the most vexing problems confronting late nineteenth-century thinkers.” Broadly speaking, that problem pitted the partisans of the physical sciences, “who dismissed religious arguments for free will as wishful thinking,” against the theologically inclined, who obliged themselves to believe in the complete independence of the individual human being. The automaton squared off against the autonomous, we might say. Clearly, these were just the broad parameters of a discourse that contained many variations—the intellectual mud-wrestling between Huxley and his scandalized critics; a revival of the debate over Calvinism among some American Christians; the growing importance of Freud; and, not least, Schopenhauer and Nietzsche. These variations indicated that high stakes were on the table, and that, combined with the irresolvable impasse between the two antagonistic positions, made the question of the will irresistible for American pragmatists. [5]
As Kloppenberg argues, the philosophers of the middle way crafted a philosophy of voluntary action that characteristically insisted on measuring any claims against experience. By the time they had arrived at a satisfactory position, their theory of voluntary action held that people were capable of choosing to sustain a thought and were therefore aware of the freedom to select between competing options; that the individual selects both what sensory data to act on and how to do so; that thought and sensation were mediated into action through volition; and that free will, though it clearly exists, is nonetheless constrained by social, and therefore historical, context. [6]
But what particularly attracts me to James was, as Kloppenberg also notes, that he immersed himself in the question of the will out of “the crucible of personal anxiety.” [7] The philosophical stakes aside, the personal capacity to will oneself, in James’s case, to some mental equilibrium that permitted engagement with the world was at the core of his philosophy of volition. Had James meekly accepted Huxley, he would have withered away. Instead, he climbed out of his debilitating mental exhaustion by translating into action one option from among a set of options. In a passage on the “psychology of volition” in Principles of Psychology, he offered a veiled description of his own experience. In trying to prove that ideas could be translated into action by suggesting that many ideas also inhibit action, James offered the example of arising out of bed on a cold morning. One knows that one must get up and face the day. “But still the warm couch feels too delicious, the cold outside too cruel, and resolution faints away and postpones itself again and again.” One mental urge throws itself against another; the imperatives compete. Extrapolating from “my own experience,” James asked: How do people ever raise themselves? “The idea flashes across us, ‘Hollo! I must lie here no longer.’” [8]
Jackson Lears noted some time ago that James’s recovery was hardly so dramatic, that he recovered only gradually and then only after taking up his teaching career at Harvard. That said, what interests me is the connection between physical activity and mental health—a connection, incidentally, that Jane Addams also acknowledged. [9] We know today that physical activity is an important ingredient in the successful treatment of depression, and it begins with the Jamesian act of will—simply getting out of bed and facing the day. More to the point, such an act of will, whatever its value to the philosophy of voluntary action, has real empirical value. It is possible to see in, and therefore to measure through, the improvement of the subject’s mental disposition the extent to which the subjective assertion of the self alters objective conditions. It is an example of the efficacy of the will.
If anything, James was overly insistent on the physiological origins of the will, at least in that long section on the subject in Principles of Psychology. Partly, his medical training explains his preoccupation with the “kinaesthetic idea.” But it is obvious that he was driven to scrutinize determinism more aggressively than its opposite, religious feeling. If I understand him correctly, James’s main intention in this bit of writing was to distance the source of sensation from the physical movement that is sensation’s effect. The simple distinction between involuntary and voluntary movements—the former were primary “functions of our organism”; the latter “secondary” ones—was his point of departure. But the distinction was not absolute, because in certain cases the body was capable of learning through motor memory from the first involuntary reaction to a particular sensation. The body learned other movements as well and at some point selected which action to undertake in response. (One example: The child who starts at the roar of a train as he stands on the platform does so involuntarily at first experience; thereafter, the same sensation might produce a similar response, but maybe not.) While this argument qualified the distinction between involuntary and voluntary movement, James used the point to separate out sensation from the response to sensation. “In reflex action and in its emotional expression,” he wrote, “the movements which are the effects are in no manner contained by anticipation in the stimuli which are their cause. The latter are subjective sensations and objective perceptions, which do not in the slightest degree resemble or prefigure the movements.” [10] Whereas determinism presumed a direct identity between sensation and movement, James insisted on a continuum that linked sensation, reflection, and action.
By opening up room between the spark of motor movement and the actual movement itself, James carved out space for volition, for the capacity of one to choose between impulses, and therefore for indeterminacy, or what he referred to elsewhere as “chance.” And chance, he knew, was the enemy of determinism. [11]
James always recognized the religious implications of this matter and understood perfectly well that the Huxleyan uproar was a resumption of the debate over free will. As he told Harvard Divinity students in an 1884 talk, while “common opinion” might be “that the juice has ages ago been pressed out of the free-will controversy, . . . I know of no subject less worn out, or in which inventive genius has a better chance of breaking open new ground.” [12] He joined the issue most clearly in “Reflex Action and Theism,” an address given to Unitarian ministers in 1881, whom he praised for their efforts to assimilate contemporary science into their world views. (He also praised them, by the way, for their rejection of Calvinism: “A God who gives so little scope to love, a predestination which takes from endeavor all its zest with all its fruit, are irrational conceptions, because they say to our most cherished prayers, There is no object for you.” [13]) James’s intention here, however, was to disabuse his open-minded brethren of any flirtation with the “doctrine of reflex action” then dominant in physiology and psychology, which dogmatically insisted that all “acts” are mere “discharges from the nervous centres” in response to external stimuli. The adherents to this theory, James claimed, were quite sure that it dealt the “coup de grace to the superstition of God.” The fallacy of the determinists, as anyone’s experience easily confirmed, was that the “real order of the world” was so overwhelmingly chaotic that individuals had to sort out what to respond to and what to ignore, and those decisions issued from “our subjective interests.” James maintained that subjective interests were given play in “the conceiving or theorizing faculty—the mind’s middle department,” and that they were independent of external stimuli. That middle department “is a transformer of the world of our impressions into a totally different world, . . . and the transformation is effected in the interests of our volitional nature.” [14] James reassured his theistic listeners that the middle department’s job was to adjust sensations until it mastered the chaos of the objective world. And that, he figured, was really what theology was about.
For James, then, the will connected the metaphysical and the phenomenological and was the bridge between subjective and objective realities. More than that, though: it was the activating agent in transforming belief into action, or abstract faith into faith revealed and affirmed. So it’s not too much to claim that James’s famous open-mindedness toward religious conviction grew out of his engagement with the free-will debate. If his main intention in “The Will to Believe” was to defend “our right to adopt a believing attitude in religious matters,” as he put it, what justified that right was belief’s place in the continuum of action. True, the believer could not will the existence of God. But our volitional nature could nonetheless translate belief in such a way as to “help create the fact” that belief sought out, by making the faithful “better off even now.” In other words, the practical, objective effects of religious conviction concluded the continuum whereby the subjective was translated into an objective effect. [15]
This same formulation explains James’s interest in the “mind-cure” movement that proliferated after 1890. Self-evidently unscientific, mind-cure had to be understood as a religious movement, akin, he wrote, to Lutheranism or Methodism. Like earlier evangelical faiths, mind-cure banished original sin, which was probably enough in itself to appeal to James. Its practitioners harbored no “contrite hearts”; they were already “one with the Divine without any miracle of grace.” Much of what he read from mind-cure authors baffled him. But James took note of how the audacity of the advice had struck a vibrant chord among the public. “The leaders of the movement,” he observed, “have had an intuitive belief in the all-saving power of healthy-minded attitudes as such, in the conquering efficacy of courage, hope, and trust, and a correlative contempt for doubt, fear, and worry.” They cleverly packaged their sunny nostrums as “cures” and thereby trespassed on the turf of medical science. But the only sensible way to measure a “cure” was through its results, which in the case of mind-cure seemed to be promising. “The blind have been made to see, the halt to walk; life-long invalids have had their health restored. The moral fruits have been no less remarkable. The deliberate adoption of a healthy-minded attitude has proved possible to many who never supposed they had it in them; regeneration of character has gone on an extensive scale; and cheerfulness has been restored to countless homes.” The movement’s “practical fruits,” its “palpable experiential results,” were enough to warrant respect. [16] Here again, as with religious conviction, the will created the fact.
At the considerable risk of drawing up a genealogy that links pygmies to giants, it might be an interesting exercise to apply the Jamesian formula to the psychology of the self that blossomed in the late 1960s and 1970s. It is the case that the cult of the self, that great obsession of the Me Decade, evoked a fair amount of writing about the will—hence the second Ngram hump. There was every bit as much charlatanry and snake-oil in the Seventies-era cult of the self as in the mind-cure days of James. But following James’s example, we are compelled to take seriously results, regardless of the means.
Humanistic psychology, for instance, rested on the subjective will. Leslie Farber, Viktor Frankl, and Rollo May all wrote on the subject. The Esalen Institute sanctioned such books as Roberto Assagioli’s The Act of Will, which recommended the not unreasonable development of a “skillful will” as a psychological cog-wheel that could keep elemental instincts and behavioral impulses in alignment. [17] Abraham Maslow’s self-actualized person might be understood as a version of James’s once-born: free of illusions, guilt, shame, or anxieties, self-actualized people were driven to be what “they must be,” to realize their idiosyncratic potential and “become everything that one is capable of becoming.” [18]
The question, though, is whether self-actualization was a self-conscious achievement that bespoke the assertion of will, or whether it was a mental state to which one evolved as one moved through the needs hierarchy. Maslow was vague on this count. His motivation theory revolved around the satisfaction of needs, where will was largely a means to an end. Self-actualized people often did things without any motivation whatsoever; “expressive” actions, such as appreciating a great work of art, were intrinsically good and ends in themselves. Moreover, Maslow spoke of the self-actualized at times as though they were finished products, “fully evolved and authentic” people whose “needs” were all satisfied. Yet in his later writing he insisted that personal “growth” was a life-long process, “a never-ending series,” he wrote, “of free choice situations” where, one assumes, “free choice” meant “the wisest choice.” Will must have mattered, except that “choice” was substituting for will. [19]
Let me suggest that perhaps what Maslow was doing here was updating conceptions to fit a consumer society, where choice, rather than freedom, was the defining virtue. In his psychology, choice was to have something of the same purpose as James’s volition: it was a means of self-emancipation from the defining social-psychological burden of his day. Efficacious choice was a means for addressing alienation. And in my view, the energy invested in the self during the 1970s was a widespread effort to do just that.
If we can use Jane Addams and William James as examples of people who willed themselves to mental equilibrium and lived to write about it, can we locate any examples of people who “chose” to overcome alienation and lived to write about it? I actually think there are many: the bulk of the self-help movement comes to mind, along with Seventies-era feminists such as Gloria Steinem and psychoanalysts such as Heinz Kohut. One of the more peculiar cases was that of Nathaniel Branden, Ayn Rand’s acolyte and lover, who turned himself into a self-help guru after their falling out.
I had, in fact, planned to conclude my remarks today by examining how Branden’s “new psychology” gave him an intellectual escape route from his personal enslavement to Rand into something that seems like self-assertive respectability. But I figured that even the slightest implication that William James and Ayn Rand were somehow distant relatives would be treated in this group as a form of heresy. So let me turn to Charles Reich.
Yes, I mean that Charles Reich.
Some of you may remember Reich as the Yale law professor who authored the best-selling The Greening of America. I won’t belabor that book; properly much-ridiculed at the time, it’s an easy target for criticism. It will do to note that he depicted “youth” engaged in creating “Consciousness III” as fundamentally alienated from the deadening society of Consciousness II, and that the blessings of the economy of abundance held out the possibility of a “nonartificial and nonalienated” way of life, if only the appropriate values would take hold. [20]
Because Greening was such a flash-in-the-pan, such a quickly dated book, not much attention was paid to Reich’s next book, an autobiographical account of his coming out as a gay man. As he recounted the story, his life was one long string of unrelenting misery. Coming of age in a Cold War society that “dominates self,” he was denied “autonomy”; alienated, he was deprived of “self-knowledge.” He spent his early career in high places. He clerked for Justice Hugo Black and befriended William O. Douglas before landing a job at a prominent DC law firm. But he was agonizingly lonely. Stuck in Washington, “a city of loneliness,” condemned to starched collars and stuffy corporate lunches, Reich was incapable of forging decent friendships, much less enduring relationships. Taking a position at Yale in 1960 was but a tiny step from the button-down world of Washington, given the university’s solid place in the establishment. [21]
In New Haven, he settled into the cloistered world of the eccentric bachelor professor until a young man he had met only once insisted he visit Berkeley. There, during the Summer of Love, Reich discovered the “new consciousness,” and it changed his life. “More than any place I had ever seen,” he wrote, “Berkeley was populated by people who seemed to be doing what they chose to do, rather than what they had to do. . . . Berkeley culture was a proclamation of the freedom to choose.” From that point on, Reich described his life as a steady shedding of the stifled, repressed self that had always been him: first in adopting a new teaching style and closer, more informal relationships to his students; and finally to a full coming out in San Francisco. His was hardly an unusual story for a gay man of his particular age. What draws our attention here is that Reich’s emphasis was not so much on repressed sexuality as on his sense of a broader alienation, which was dispelled not only by finally coming to terms with his sexual orientation but through the assertion of will, cast as choice. “Alienation isolates each of us in a separate prison cell,” he wrote, but he came to see that “the way out” was to find “within us the ability to change.” “Growth and change opened people to dimensions of themselves which alienation had banished from awareness,” he wrote. [22]
What can one say but more power to him? James would have been pleased to see that choice, in this case, was efficacious in liberating a person from anguish.
We can, then, find many parallels between the two periods, not least a confidence in the efficacy of the assertive self. The Calvinist in me remains suspicious, though, and suspects that something is lost when free will becomes transmuted into choice. If nothing else, the latter seems to indicate that no great philosophical stakes remained by 1970. Indeed, what was the alternative to “choice” at that point? It would be prudent to keep in mind that James himself never thought that one could will happiness, any more than the believer could will the actual existence of God. Efficacy, to him, really was nothing more triumphant than forcing oneself to get out of bed on a cold morning and begin life’s struggles. Even if it were just a matter of using resonant language to accord with consumer society, “choice” trivializes its own origins. Free-will philosophy was rooted in the claim that the truly important choices that a human being faced were between good and evil, and, further, that the very definition of freedom lay in having to make such choices. It was rooted, in other words, in the presumption that human beings are frail, if not inherently flawed, creatures. This reminds us, perhaps, that choice doesn’t carry much gravity unless Calvin’s hand is there, at least threatening to maintain its “death-grasp.”
-----------
Endnotes
1. For an example of my position, as well as a dose of the relevant literature, see David Steigerwald, “All Hail the Republic of Choice: Consumer History as Contemporary Thought,” Journal of American History 93 (September 2006), 385-403.
2. T. M. Griffith, “The Methodist Doctrine of Free Will,” Methodist Review 10 (1894), 560.
3. Henry Blanchamp, “Thoughts of a Human Automaton,” The Eclectic Magazine of Foreign Literature 55 (May 1892), 600.
4. Jane Addams, Twenty Years at Hull-House (1910), 121-22.
5. James Kloppenberg, Uncertain Victory: Social Democracy and Progressivism in European and American Thought, 1870-1920 (Oxford, 1986), 79-80.
6. Ibid., 85.
7. Ibid., 80.
8. William James, Principles of Psychology, vol. 2 (New York, 1907), 524-25.
9. T. J. Jackson Lears, “William James,” The Wilson Quarterly 11 (Autumn 1987), 89-90. Addams understood the yearning for public engagement as partly instinctual, a biological inheritance from the primitives: “We all bear traces of the starvation struggle which for so long made up the life of the race. Our very organism holds memories and glimpses of that long life of our ancestors which still goes on among so many of our contemporaries. . . . We have all had longings for a fuller life which should include the use of these faculties. These longings are the physical complement of the ‘Intimations of Immortality’ on which no ode has yet been written.” Addams, Twenty Years at Hull-House, 118.
10. James, Principles of Psychology, II: 494.
11. William James, “The Dilemma of Determinism,” in The Will to Believe and Other Essays in Popular Philosophy (New York, 1923), 153.
12. Ibid., 145.
13. William James, “Reflex Action and Theism,” in The Will to Believe, 126.
14. Ibid., 113, 115, 117-19.
15. William James, “The Will to Believe,” in ibid., 1, 9, 27. Also Patrick K. Dooley, “The Nature of Belief: The Proper Context for James’s ‘The Will to Believe,’” Transactions of the Charles S. Peirce Society 8 (Summer 1972), 141-50.
16. William James, The Varieties of Religious Experience (New York, 1936), 92, 99, 93-94. See also Donald F. Duclow, “William James, Mind-Cure, and the Religion of Healthy-Mindedness,” Journal of Religion and Health 41 (Spring 2002), 45-56; and Jennifer Welchman, “‘The Will to Believe’ and the Ethics of Self-Experimentation,” Transactions of the Charles S. Peirce Society 42 (Spring 2006), 229-241.
17. Leslie Farber, The Ways of the Will: Selected Essays (New York, 2000); Viktor Frankl, The Will to Meaning: Foundations and Applications of Logotherapy (New York, 1988); Rollo May, Love and Will (New York, 1969); and Roberto Assagioli, The Act of Will (New York, 1973).
18. Abraham Maslow, Toward a Psychology of Being 2nd ed. (New York, 1968), 141-42, 11-12; Abraham Maslow, Motivation and Personality, 3rd ed. (New York, 1987), 7, 131-36, 22.
19. Maslow, Motivation and Personality, 70-71; Maslow, Toward a Psychology of Being, 16, 47-48, 45.
20. Charles A. Reich, The Greening of America: How the Youth Revolution is Trying to Make America Livable (New York, 1970), 24-26.
21. Charles A. Reich, The Sorcerer of Bolinas Reef (New York, 1976), 3-4, 9, 63.
22. Ibid., 99, 117, 10, 101.
“Hollo! I must lie here no longer”:
Versions of the Willful Self from the Gilded Age to the Me Decade
by David Steigerwald, The Ohio State University
As one or two of you may be aware, I’ve made a bit of a living over the last few years criticizing the concept of individual agency in postwar America, especially in its application to consumerism. Typically the concept includes claims that consumers exercise some measure of decisive power over the marketplace when they make idiosyncratic choices about either what they purchase or how they interpret the goods they buy. Choice is good, this line of reasoning seems to go, and because it provides so much of it, contemporary consumerism must also be good. Because versions of this line of thought came to pervade a good deal of writing about consumerism from the 1980s on, it seemed to me worth poking a few sticks at. At its most serious and most fruitful, the consumer-as-agent argument was a necessary counter to Frankfurt School cultural determinism, that stifling intellectual blanket lying upon those who began writing in the 1970s and 1980s. [1]
Yet in criticizing the concept of consumer agency, I’m afraid that I left the impression that I was defending Frankfurt, which is only partly true. Adorno and Horkheimer’s most concise summary of their view of the radio listener (a paraphrase of Henry Ford, it seems to me), that the listener wants what they’re going to get anyway, still strikes me as sound. Still, I had this nagging fear that someone would say to me what a defender of free-will Methodism in the 1890s wrote in criticism of predestination: “Why hold on to [this belief] as with a death-grasp ruled by Calvin’s dead hand from his very grave, trying to soften its asperities, and still keep mumbling the decrees as Roman priests do the mass in an unknown tongue, patching new pieces of truth to old garments of error worn in the Dark Ages?” [2]
To skirt just that possibility, I’ve widened the inquiry into the doctrine of choice in a book on alienation and affluence in postwar America, which I’m now finishing. In it, I’m arguing that the doctrine of choice became the antidote to alienation not just in consumer culture, but as a means by which individuals might think of themselves as successfully negotiating through an age of automated labor, bureaucratic regimentation, political powerlessness, and a profound shift in values that issued from the evaporation of the Protestant ethic. “Choice” became the universal default, in part because it was ready to hand and in perfect harmony with consumer capitalism. But it also spoke to the essential social psychology of alienation: that is, the pervasive sense of individual estrangement and isolation. Even if only an invocation, the belief that individual choices are both freely rendered and individually efficacious can be enough to lift the pall of alienation for any given person. To the extent that the late-20th century political economy promised choices across the board—not just in soaps and shampoos, but in churches, communities however ephemeral or “virtual,” and technologies that provided private access to all—it effectively institutionalized such efficacy. And all this goes a long way toward explaining why we’re not alienated any more.
Having arrived at a via media all my own, it makes sense to me to admit, and follow up on, what should be obvious: that a debate over agency, or the efficacy of choice, is nothing new. While of course it is drawn out of the clash between the old Marxism and the New Left, it may be fruitful to see it as part of an even longer historical trajectory. We should see it, accordingly, as another episode in the old division between determinism and free will that runs back, in American letters, to the antinomian critique of Calvinism, and that intensified between 1880 and 1920 in the important debate between Darwinists and the theologically inclined—between, as one Darwin partisan put it in 1892, the “intellectual measles” of religion and “scientific certitude.” [3] A cursory look at the literature tells me that these forty years will take up the bulk of a book on the subject of free will. But a quick glance at the Google Ngram viewer—one of my favorite new research tools—shows that the use of “free will” in modern American writing had two peaks: a double-humped one that began around 1870, dipped at the turn of the century, then accelerated through the Great War; and a second that erupted sometime around 1960 and ran perhaps a decade or so. And the possible connections between these two periods, as well as the different uses and meanings of the ideas deployed, provide an intriguing and natural parameter for such a study.
The first question, then, should ask what structural similarities the Gilded Age shared with the 1960s and 1970s, or what I’ll call the mid-postwar period. What comes immediately to mind is that in both periods, fundamental socio-economic change apparently generated distinct social-psychological maladies: neurasthenia and hysteria in the Gilded Age; alienation in the mid-postwar period.
These two forms of psycho-social maladjustment, moreover, themselves shared similarities. Both were commonly accounted for as reactions against the warp speed of modernity. Train schedules, factory whistles, and the clock were superimposed on nineteenth-century people physiologically attuned to the leisurely pace of the natural world, while automation and mass communications generated “future shock” in the latter case. Both nervous exhaustion and alienation seemed to have weighed most heavily on sensitive young adults.
I have always been impressed, for example, by how clearly the sentiments in the Port Huron Statement echoed Jane Addams’s self-analysis in that wonderful essay, “The Subjective Necessity for Social Settlements.” Addams, of course, knew a thing or two about damaged youth, having been one herself. And in that essay she described the personal debilitations, the spiritual enervation, that plagued educated young adults who found themselves rendered useless by modernity’s material well-being. Whereas the generation before them was absorbed in the “starvation struggle” against the frontier, Addams’s generation had been elevated into comfort and educated presumably in order to take on a purposeful life, only to be deprived of meaningful outlets for their abundant vitalities. This situation was particularly intense for young women like Addams, whose typical fate was to graduate backward into the “family claim.” In the passage that has always caught my eye, Addams described with her usual clarity the quiet agony of her peers:
We have in America a fast-growing number of cultivated young people who have no recognized outlet for their active faculties. They hear constantly of the great social maladjustment, but no way is provided for them to change it, and their uselessness hangs about them heavily. Huxley declares that the sense of uselessness results in atrophy of function. These young people have had advantages of college, of European travel, and of economic study, but they are sustaining this shock of inaction. . . . Many of them dissipate their energies in so-called enjoyment. . . . Many are buried beneath mere mental accumulation with lowered vitality and discontent. . . . This young life, so sincere in its emotion and good phrases and yet so undirected, seems to me as pitiful as the other great mass of destitute lives. . . . Our young people feel nervously the need of putting theory into action, and respond quickly to the Settlement form of activity. [4]
Some sixty years later, Paul Goodman described almost the exact same problem as “growing up absurd,” and it is no stretch at all to think that Addams would have had enormous sympathy for the young people who in 1962 began their Port Huron Statement with the rallying cry: “We are people of this generation, bred in at least modest comfort, housed now in universities, looking uncomfortably to the world we inherit.”
Every bit of Addams’s language in the “Subjective Necessity” points us to another similarity in the distress that reached across time: both nervous disorder and alienation were maladies of the self, and because of that, they recommended at least some measure of subjective assertion. This is precisely Addams’s point. Earnest young people cut off from the starvation struggle of the urban masses could recover their physical vitality and sustain selfhood through settlement house work. Let me suggest that those alienated young people who launched Students for a Democratic Society and scoured the social landscape in search of objects for their political energies were doing the exact same thing.
Rather than making that argument here, I want to muck around in what I’m thinking of as a prior, more distinctly subjective reaction to psycho-social maladjustment. Addams and her peers, and Tom Hayden and his, alleviated their distress by throwing themselves into the public arena with the intention of changing objective conditions, and it is my view that such an effort was undoubtedly the healthiest, most meaningful antidote. But it raises the question of whether it is possible to overcome personal agony through purely subjective assertion, through something we could profitably think of as “the will.” If indeed one can successfully will oneself to mental well-being, if indeed the subjective is sufficient, then perhaps those who put great stock in individual agency are on solid ground. If the subjective will can, in a tangible way, affect the practical conditions of one’s life, then self-assertion is not merely subjective; it has some objective consequence. This, I take it, is what people mean by “agency.”
To get at this question, we should begin not with Jane Addams but with her contemporary, William James. James’s engagement in the debate over human volition was both important and typical of him, one of the building blocks, as our good friend James Kloppenberg taught us, in the via media of American pragmatism. In Uncertain Victory, Kloppenberg described the “nature of the will” as “among the most vexing problems confronting late nineteenth-century thinkers.” Broadly speaking, that problem pitted the partisans of the physical sciences, “who dismissed religious arguments for free will as wishful thinking,” against the theologically inclined, who obliged themselves to believe in the complete independence of the individual human being. The automaton squared off against the autonomous, we might say. Clearly, these were just the broad parameters of a discourse that contained many variations—the intellectual mud-wrestling between Huxley and his scandalized critics; a revival of the debate over Calvinism among some American Christians; the growing importance of Freud; and, not least, Schopenhauer and Nietzsche. These variations indicated that high stakes were on the table, and that, combined with the irresolvable impasse between the two antagonistic positions, made the question of the will irresistible for American pragmatists. [5]
As Kloppenberg argues, the philosophers of the middle way crafted a philosophy of voluntary action that characteristically insisted on measuring any claims against experience. By the time they had arrived at a satisfactory position, their theory of voluntary action held that people were capable of choosing to sustain a thought and were therefore aware of the freedom to select between competing options; that the individual selects both what sensory data to act on and how to do so; that thought and sensation were mediated into action through volition; and that free will, though it clearly exists, is nonetheless constrained by social, and therefore historical, context. [6]
But what particularly attracts me to James was, as Kloppenberg also notes, that he immersed himself in the question of the will out of “the crucible of personal anxiety.” [7] The philosophical stakes aside, the personal capacity to will oneself, in James’s case, to some mental equilibrium that permitted engagement with the world was at the core of his philosophy of volition. Had James meekly accepted Huxley, he would have withered away. Instead, he climbed out of his debilitating mental exhaustion by translating into action one option from among a set of options. In a passage on the “psychology of volition” in Principles of Psychology, he offered a veiled description of his own experience. In trying to prove that ideas could be translated into action by suggesting that many ideas also inhibit action, James offered the example of arising out of bed on a cold morning. One knows that one must get up and face the day. “But still the warm couch feels too delicious, the cold outside too cruel, and resolution faints away and postpones itself again and again.” One mental urge throws itself against another; the imperatives compete. Extrapolating from “my own experience,” James asked: How do people ever raise themselves? “The idea flashes across us, ‘Hollo! I must lie here no longer.’” [8]
Jackson Lears noted some time ago that James’s recovery was hardly so dramatic, that he recovered only gradually and then only after taking up his teaching career at Harvard. That said, what interests me is the connection between physical activity and mental health—a connection, incidentally, that Jane Addams also acknowledged. [9] We know today that physical activity is an important ingredient in the successful treatment of depression, and it begins with the Jamesian act of will—simply getting out of bed and facing the day. More to the point, such an act of will, whatever its value to the philosophy of voluntary action, has real empirical value. It is possible to see in, and therefore to measure through, the improvement of the subject’s mental disposition the extent to which the subjective assertion of the self alters objective conditions. It is an example of the efficacy of the will.
If anything, James was overly insistent on the physiological origins of the will, at least in that long section on the subject in Principles of Psychology. Partly, his medical training explains his preoccupation with the “kinaesthetic idea.” But it is obvious that he was driven to scrutinize determinism more aggressively than its opposite, religious feeling. If I understand him correctly, James’s main intention in this bit of writing was to distance the source of sensation from the physical movement that is sensation’s effect. The simple distinction between involuntary and voluntary movements—the former were primary “functions of our organism”; the latter “secondary” ones—was his point of departure. But the distinction was not absolute, because in certain cases the body was capable of learning through motor memory from the first involuntary reaction to a particular sensation. The body learned other movements as well and at some point selected which action to undertake in response. (One example: The child who starts at the roar of a train as he stands on the platform does so involuntarily at first experience; thereafter, the same sensation might produce a similar response, but maybe not.) While this argument qualified the distinction between involuntary and voluntary movement, James used the point to separate out sensation from the response to sensation. “In reflex action and in its emotional expression,” he wrote, “the movements which are the effects are in no manner contained by anticipation in the stimuli which are their cause. The latter are subjective sensations and objective perceptions, which do not in the slightest degree resemble or prefigure the movements.” [10] Whereas determinism presumed a direct identity between sensation and movement, James insisted on a continuum that linked sensation, reflection, and action.
By opening up room between the spark of motor movement and the actual movement itself, James carved out space for volition, for the capacity of one to choose between impulses, and therefore for indeterminacy, or what he referred to elsewhere as “chance.” And chance, he knew, was the enemy of determinism. [11]
James always recognized the religious implications of this matter and understood perfectly well that the Huxleyan uproar was a resumption of the debate over free will. As he told Harvard Divinity students in an 1884 talk, while “common opinion” might be “that the juice has ages ago been pressed out of the free-will controversy, . . . I know of no subject less worn out, or in which inventive genius has a better chance of breaking open new ground.” [12] He joined the issue most clearly in “Reflex Action and Theism,” an address given to Unitarian ministers in 1881, whom he praised for their efforts to assimilate contemporary science into their world views. (He also praised them, by the way, for their rejection of Calvinism: “A God who gives so little scope to love, a predestination which takes from endeavor all its zest with all its fruit, are irrational conceptions, because they say to our most cherished prayers, There is no object for you.” [13]) James’s intention here, however, was to disabuse his open-minded brethren of any flirtation with the “doctrine of reflex action” then dominant in physiology and psychology, which dogmatically insisted that all “acts” are mere “discharges from the nervous centres” in response to external stimuli. The adherents to this theory, James claimed, were quite sure that it dealt the “coup de grace to the superstition of God.” The fallacy of the determinists, as anyone’s experience easily confirmed, was that the “real order of the world” was so overwhelmingly chaotic that individuals had to sort out what to respond to and what to ignore, and those decisions issued from “our subjective interests.” James maintained that subjective interests were given play in “the conceiving or theorizing faculty—the mind’s middle department,” and that they were independent of external stimuli. That middle department “is a transformer of the world of our impressions into a totally different world, . . .
and the transformation is effected in the interests of our volitional nature.” [14] James reassured his theistic listeners that the middle department’s job was to adjust sensations until it mastered the chaos of the objective world. And that, he figured, was really what theology was about.
For James, then, the will connected the metaphysical and the phenomenological and was the bridge between subjective and objective realities. More than that, though: it was the activating agent in transforming belief into action, or abstract faith into faith revealed and affirmed. So it’s not too much to claim that James’s famous open-mindedness toward religious conviction grew out of his engagement with the free-will debate. If his main intention in “The Will to Believe” was to defend “our right to adopt a believing attitude in religious matters,” as he put it, what justified that right was belief’s place in the continuum of action. True, the believer could not will the existence of God. But our volitional nature could nonetheless translate belief in such a way as to “help create the fact” that belief sought out by making the faithful “better off even now.” In other words, the practical, objective effects of religious conviction concluded the continuum whereby the subjective was translated into an objective effect. [15]
This same formulation explains James’s interest in the “mind-cure” movement that proliferated after 1890. Self-evidently unscientific, mind-cure had to be understood as a religious movement, akin, he wrote, to Lutheranism or Methodism. Like earlier evangelical faiths, mind-cure banished original sin, which was probably enough in itself to appeal to James. Its practitioners harbored no “contrite hearts”; they were already “one with the Divine without any miracle of grace.” Much of what he read from mind-cure authors baffled him. But James took note of how the audacity of the advice had struck a vibrant chord among the public. “The leaders of the movement,” he observed, “have had an intuitive belief in the all-saving power of healthy-minded attitudes as such, in the conquering efficacy of courage, hope, and trust, and a correlative contempt for doubt, fear, and worry.” They cleverly packaged their sunny nostrums as “cures” and thereby trespassed on the turf of medical science. But the only sensible way to measure a “cure” was through its results, which in the case of mind-cure seemed to be promising. “The blind have been made to see, the halt to walk; life-long invalids have had their health restored. The moral fruits have been no less remarkable. The deliberate adoption of a healthy-minded attitude has proved possible to many who never supposed they had it in them; regeneration of character has gone on an extensive scale; and cheerfulness has been restored to countless homes.” The movement’s “practical fruits,” its “palpable experiential results,” were enough to warrant respect. [16] Here again, as with religious conviction, the will created the fact.
At the considerable risk of drawing up a genealogy that links pigmies to giants, it might be an interesting exercise to apply the Jamesian formula to the psychology of the self that blossomed in the late 1960s and 1970s. It is the case that the cult of the self, that great obsession of the Me Decade, evoked a fair amount of writing about the will—hence the second N-Gram hump. There was every bit as much charlatanry and snake-oil in the Seventies Era cult of the self as in the mind-cure days of James. But following James’s example, we are compelled to take seriously results, regardless of the means.
Humanistic psychology, for instance, rested on the subjective will. Leslie Farber, Viktor Frankl, and Rollo May all wrote on the subject. The Esalen Institute sanctioned such books as Roberto Assaglioli’s The Act of Will, which recommended the not unreasonable development of a “skillful will” as a psychological cog-wheel that could keep elemental instincts and behavioral impulses in alignment. [17] Abraham Maslow’s self-actualized person might be understood as a version of James’s once-born: free of illusions, guilt, shame, or anxieties, self-actualized people were driven to be what “they must be,” to realize their idiosyncratic potential and “become everything that one is capable of becoming.” [18]
The question, though, is whether self-actualization was a self-conscious achievement that bespoke the assertion of will, or whether it was a mental state to which one evolved as one moved through the needs hierarchy. Maslow was vague on this count. His motivation theory revolved around the satisfaction of needs, where will was largely a means to an end. Self-actualized people often did things without any motivation whatsoever; “expressive” actions, such as appreciating a great work of art, were intrinsically good and ends in themselves. Moreover, Maslow spoke of the self-actualized at times as though they were finished products, “fully evolved and authentic” people whose “needs” were all satisfied. Yet in his later writing he insisted that personal “growth” was a life-long process, “a never-ending series,” he wrote, “of free choice situations” where one assumes that “free choice” meant “the wisest choice.” Will must have mattered, except that “choice” was substituting for will. [19]
Let me suggest that what Maslow was doing here was updating conceptions to fit a consumer society, where choice, rather than freedom, was the defining virtue. In his psychology, choice was to serve something of the same purpose as James’s volition: it was a means of self-emancipation from the defining social-psychological burden of his day. Efficacious choice was a means for addressing alienation. And in my view, the energy invested in the self during the 1970s was a widespread effort to do just that.
If we can use Jane Addams and William James as examples of people who willed themselves to mental equilibrium and lived to write about it, can we locate any examples of people who “chose” to overcome alienation and lived to write about it? I actually think there are many of them: the bulk of those from the self-help movement, Seventies-era feminists such as Gloria Steinem, and psychoanalysts such as Heinz Kohut spring to mind. One of the more peculiar cases was that of Nathaniel Branden, Ayn Rand’s acolyte and lover, who turned himself into a self-help guru after their falling out.
I had, in fact, planned to conclude my remarks today by examining how Branden’s “new psychology” gave him an intellectual escape route from his personal enslavement to Rand into something that seems like self-assertive respectability. But I figured that even the slightest implication that William James and Ayn Rand were somehow distant relatives would be treated in this group as a form of heresy. So let me turn to Charles Reich.
Yes, I mean that Charles Reich.
Some of you may remember Reich as the Yale law professor who authored the best-selling The Greening of America. I won’t belabor that book; properly much-ridiculed at the time, it’s an easy target for criticism. It will do to note that he depicted “youth” engaged in creating “Consciousness III” as fundamentally alienated from the deadening society of Consciousness II, and that the blessings of the economy of abundance held out the possibility of a “nonartificial and nonalienated” way of life, if only the appropriate values would take hold. [20]
Because Greening was such a flash-in-the-pan, such a quickly dated book, not much attention was paid to Reich’s next book, an autobiographical account of his coming out as a gay man. As he recounted the story, his life was one long string of unrelenting misery. Coming of age in a Cold War society that “dominates self,” he was denied “autonomy”; alienated, he was deprived of “self-knowledge.” He spent his early career in high places. He clerked for Justice Hugo Black and befriended William O. Douglas before landing a job at a prominent DC law firm. But he was agonizingly lonely. Stuck in Washington, “a city of loneliness,” condemned to starched collars and stuffy corporate lunches, Reich was incapable of forging decent friendships, much less enduring relationships. Taking a position at Yale in 1960 was but a tiny step from the button-down world of Washington, given the university’s solid place in the establishment. [21]
In New Haven, he settled into the cloistered world of the eccentric bachelor professor until a young man he had met only once insisted he visit Berkeley. There, during the Summer of Love, Reich discovered the “new consciousness,” and it changed his life. “More than any place I had ever seen,” he wrote, “Berkeley was populated by people who seemed to be doing what they chose to do, rather than what they had to do. . . . Berkeley culture was a proclamation of the freedom to choose.” From that point on, Reich described his life as a steady shedding of the stifled, repressed self that had always been him: first in adopting a new teaching style and closer, more informal relationships to his students; and finally to a full coming out in San Francisco. His was hardly an unusual story for a gay man of his particular age. What draws our attention here is that Reich’s emphasis was not so much on repressed sexuality as on his sense of a broader alienation, which was dispelled not only by finally coming to terms with his sexual orientation but through the assertion of will, cast as choice. “Alienation isolates each of us in a separate prison cell,” he wrote, but he came to see that “the way out” was to find “within us the ability to change.” “Growth and change opened people to dimensions of themselves which alienation had banished from awareness,” he wrote. [22]
What can one say but more power to him? James would have been pleased to see that choice, in this case, was efficacious in liberating a person from anguish.
We can, then, find many parallels between the two periods, not least a confidence in the efficacy of the assertive self. The Calvinist in me remains suspicious, though, and suspects that something is lost when free will becomes transmuted into choice. If nothing else, the latter seems to indicate that no great philosophical stakes remained by 1970. Indeed, what was the alternative to “choice” at that point? It would be prudent to keep in mind that James himself never thought that one could will happiness, any more than the believer could will the actual existence of God. Efficacy, to him, really was nothing more triumphant than forcing oneself to get out of bed on a cold morning and begin life’s struggles. Even if it were just a matter of using resonant language to accord with consumer society, “choice” trivializes its own origins. Free-will philosophy was rooted in the claim that the truly important choices a human being faced were between good and evil, and, further, that the very definition of freedom lay in having to make such choices. It was rooted, in other words, in the presumption that human beings are frail, if not inherently flawed, creatures. This reminds us, perhaps, that choice doesn’t carry much gravity unless Calvin’s hand is there, at least threatening to maintain its “death-grip.”
-----------
Endnotes
1. For an example of my position, as well as a dose of the relevant literature, see David Steigerwald, “All Hail the Republic of Choice: Consumer History as Contemporary Thought,” Journal of American History 93 (September 2006), 385-403.
2. T. M. Griffith, “The Methodist Doctrine of Free Will,” Methodist Review 10 (1894), 560.
3. Henry Blanchamp, “Thoughts of a Human Automaton,” The Eclectic Magazine of Foreign Literature 55 (May 1892), 600.
4. Jane Addams, Twenty Years at Hull-House (1910), 121-22.
5. James Kloppenberg, Uncertain Victory: Social Democracy and Progressivism in European and American Thought, 1870-1920 (Oxford, 1986), 79-80.
6. Ibid., 85.
7. Ibid., 80.
8. William James, Principles of Psychology, vol. 2 (New York, 1907), 524-25.
9. T. J. Jackson Lears, “William James,” The Wilson Quarterly 11 (Autumn 1987), 89-90. Addams understood the yearning for public engagement as partly instinctual, a biological inheritance from the primitives: “We all bear traces of the starvation struggle which for so long made up the life of the race. Our very organism holds memories and glimpses of that long life of our ancestors which still goes on among so many of our contemporaries. . . . We have all had longings for a fuller life which should include the use of these faculties. These longings are the physical complement of the ‘Intimations of Immortality’ on which no ode has yet been written.” Addams, Twenty Years at Hull-House, 118.
10. James, Principles of Psychology, II: 494.
11. William James, “The Dilemma of Determinism,” in The Will to Believe and Other Essays in Popular Philosophy (New York, 1923), 153.
12. Ibid., 145.
13. William James, “Reflex Action and Theism,” in The Will to Believe, 126.
14. Ibid., 113, 115, 117-19.
15. William James, “The Will to Believe,” in ibid., 1, 9, 27. Also Patrick K. Dooley, “The Nature of Belief: The Proper Context for James’s ‘The Will to Believe,’” Transactions of the Charles S. Peirce Society 8 (Summer 1972), 141-50.
16. William James, The Varieties of Religious Experience (New York, 1936), 92, 99, 93-94. See also Donald F. Duclow, “William James, Mind-Cure, and the Religion of Healthy-Mindedness,” Journal of Religion and Health 41 (Spring 2002), 45-56; and Jennifer Welchman, “‘The Will to Believe’ and the Ethics of Self-Experimentation,” Transactions of the Charles S. Peirce Society 42 (Spring 2006), 229-241.
17. Leslie Farber, The Ways of the Will: Selected Essays (New York, 2000); Viktor Frankl, The Will to Meaning: Foundations and Applications of Logotherapy (New York, 1988); Rollo May, Love and Will (New York, 1969); and Roberto Assagioli, The Act of Will (New York, 1973).
18. Abraham Maslow, Toward a Psychology of Being 2nd ed. (New York, 1968), 141-42, 11-12; Abraham Maslow, Motivation and Personality, 3rd ed. (New York, 1987), 7, 131-36, 22.
19. Maslow, Motivation and Personality, 70-71; Maslow, Toward a Psychology of Being, 16, 47-48, 45.
20. Charles A. Reich, The Greening of America: How the Youth Revolution is Trying to Make America Livable (New York, 1970), 24-26.
21. Charles A. Reich, The Sorcerer of Bolinas Reef (New York, 1976), 3-4, 9, 63.
22. Ibid., 99, 117, 10, 101.