Showing posts with label Daniel Patrick Moynihan. Show all posts

Thursday, June 2, 2011

“When the Zulus Produce a Tolstoy We Will Read Him”: Charles Taylor and the Politics of Recognition


In his excellent recent Dædalus article, “Racial Liberalism, the Moynihan Report & the Dædalus Project on ‘The Negro American,’” Daniel Geary (author of Radical Ambition, the superb biography of C. Wright Mills, which I reviewed here) concludes his analysis with a cogent reference to the post-1960s shift in racial liberalism. He writes that “public criticism of the Moynihan Report emerged from an increasing disenchantment with the core assumptions of racial liberalism”—or, at least, racial liberalism as it stood in 1965, the year Moynihan authored his infamous government report on The Negro Family.

Left-leaning critics of that form of racial liberalism, who would go on to pioneer another form, otherwise known as multiculturalism, “came to reject the common sociological view that African American culture was a pathological distortion of white American culture and that blacks should have to conform to white values in order to achieve equality.” In short, the Moynihan Report predated the cultural turn taken by post-60s racial liberalism, anticipated by Ralph Ellison in his critique of liberal sociology circa 1965: “The sociology is loaded… The concepts which are brought to bear are usually based on those of white, middle-class, Protestant values and life style.” Ellison pointed towards a new form of racial liberalism: one that tended to valuate, and even celebrate, minority cultural forms.


For Geary’s purposes (he’s currently writing a book on the Moynihan Report), multiculturalism—or, the post-60s cultural turn taken by racial liberalism—merely serves as a bookend to the sociological liberalism that informed Moynihan and his cohort. But for my purposes—in my efforts to write a book on the culture wars—multiculturalism is that which needs full explanation, historical and epistemological. My research thus leads me to Charles Taylor’s indispensable long essay, “The Politics of Recognition,” the centerpiece of a book edited by Amy Gutmann, entitled, simply, Multiculturalism. (Following the suggestions of several readers, I’m in the midst of a deep engagement with Charles Taylor. Next week I hope to blog on his monumental A Secular Age—as if a blog post can do justice to a 900-page book!)

Taylor locates the source of what he calls “the politics of recognition,” which are more commonly referred to as “identity politics,” in the idea that “nonrecognition or misrecognition can inflict harm, can be a form of oppression, imprisoning someone in a false, distorted, and reduced mode of living.” Ideas about identity, culture, and history thus take on added political consequence, since it is assumed that oppressed people are conditioned to the view of themselves assigned by their oppressors, such that even when legal and other tangible barriers to equality are removed, psychological inhibitors remain. Although Taylor’s essay is mostly in the realm of abstract ideas—he’s a philosopher, not an intellectual historian—my research confirms his notions: concerns about self-loathing in relation to cultural identity drove the liberation politics of Black Power advocates like Stokely Carmichael and Chicano Power activists such as Corky Gonzales (check out my essay on the latter, in relation to the Arizona anti-ethnic studies law, here).

Taylor identifies Frantz Fanon as perhaps the key thinker in this transition toward treating misrecognition as oppression, Fanon being the writer “whose influential The Wretched of the Earth argued that the major weapon of the colonizers was the imposition of their image of the colonized on the subjugated people. These latter, in order to be free, must first of all purge themselves of these depreciating self-images.” Taylor contends that “the notion that there is a struggle for a changed self-image, which takes place both within the subjugated and against the dominator, has been very widely applied. The idea has become crucial to certain strands of feminism, and is also a very important element in the contemporary debate about multiculturalism.”

Fanon is not always considered a practitioner of identity politics or a theorist of multiculturalism. His revolutionary credentials tend to militate against such labels, since, in an age of multinational capitalism, only deluded multicultural theorists and their caustic conservative critics tend to think of multiculturalism as revolutionary. In a recent New Left Review retrospective ("Reading Fanon in the 21st Century") Immanuel Wallerstein argues that Frantz Fanon’s 1952 book Black Skin, White Masks—recirculated in the 1980s and 1990s as part of the multicultural canon (the irony in coupling these two words together is not lost on me)—was “in no way a call to identity politics.” Wallerstein has a point, since Fanon concluded the book on a universal note, distancing himself from any particular conception of blackness: “The Negro is not. Any more than a white man.”

In his 1963 The Wretched of the Earth, perhaps the seminal anticolonial text, Fanon was critical of black nationalist celebrations of ancient African civilization. But he recognized such racialized expressions of history and culture as necessary first steps in severing ties with colonial power, especially since the representatives of Western Civilization “have never ceased to set up white culture to fill the gap left by the absence of other cultures.” In other words, cultural identity mattered to the larger struggle, and was integral to Fanon’s work, which accentuated “the experience of a black man thrown into a white world.” Fanon’s position resembled Richard Wright’s well-known response to the Bandung Conference, where he called for a “reluctant nationalism” that “need not remain valid for decades to come.” Both Fanon and Wright joined the Second Congress of the Negro Writers and Artists in Rome in 1959, dedicated to the “peoples without a culture,” a mission aligned with one of Fanon’s more famous dictums: “The plunge into the chasm of the past is the condition and the course of freedom.” In any case, it is obviously too easy to separate Fanon from identity politics.

Relating this back to Geary’s stated historical trajectory, it becomes clear that the break from the High Racial Liberalism of mid-century social thought, perhaps most famously represented by Gunnar Myrdal’s An American Dilemma: The Negro Problem and Modern Democracy (1944), to the multicultural liberalism of the 1980s and 1990s, was not so clean. Continuity is an important factor in the sense that thinkers of both eras were conscious of “damage,” the trope documented by Daryl Michael Scott in his important book, Contempt and Pity: Social Policy and the Image of the Damaged Black Psyche: 1880-1996 (which should be read alongside Alice O’Connor’s equally important Poverty Knowledge, which I discuss here). The difference was in how to treat damage. Mid-century social scientists such as Moynihan implied that assimilation to white-middle-class values was the ticket to salvation. Even the black psychologist Kenneth Clark assumed as much, evident in his famous “doll study” that informed Brown v. Board, where black children were observed reacting positively to white dolls and negatively to black dolls. Multiculturalists from Fanon onwards believed that damage had to be overcome by acting against the dominant culture. In Fanon’s case, meeting colonial violence with anticolonial violence was the means of atonement. In the case of most American multiculturalists, the key to overcoming damage was in valuating their own culture as apart from the dominant culture. Thus, it became increasingly common in the 1980s and 1990s for predominantly black school districts to implement an Afrocentric curriculum.

Since the boundaries between historical eras of racial liberalism are blurry, it is helpful that Taylor takes a long view on the politics of recognition in relating them to the modern history of democratization. He writes: “What has come about with the modern age is not the need for recognition but the conditions in which the attempt to be recognized can fail. That is why the need is now acknowledged for the first time. In premodern times, people didn’t speak of ‘identity’ and ‘recognition’—not because people didn’t have (what we call) identities, or because these didn’t depend on recognition, but rather because these were then too unproblematic to be thematized as such.” In other words, in feudal times everyone had an identity—lord or peasant, master or slave—but because social mobility didn’t exist, identity (though it wasn’t called such) was static and unproblematic. In modern liberal democracies, where everyone is assumed to have equal status in theory—but, crucially, not in practice—identity matters because it relates to one’s ability or inability to climb social ladders and achieve “equal” status. (This reminds me of Orwell’s famous Animal Farm dictum that “some animals are more equal than others.”) A person’s position is tied up in their identity, especially, multiculturalists argue, in relation to racial and gendered identities. I would argue that identification is one of the most important forms of modern capitalistic classification but that, ironically, as identity politics became the dominant form of racial liberalism, what might be termed postmodern capitalistic classification took hold and, though class polarization grew starker, racial and gendered identities grew less adhesive.

Though Taylor subsumes the politics of recognition under liberalism broadly conceived, he admits that such politics seem illiberal to more traditional liberals who understand politics to be about the defense of universal rights accrued to the individual without relation to group affiliation. Of course, the standard critique of universalism applies here: particular groups, namely, men of European descent, achieve and maintain power in the name of universal rights that supposedly apply to all. Such criticism has been the theoretical backbone of efforts to reform higher education since the 1960s, in terms of actual and symbolic representation, or, affirmative action and multiculturalism. As Taylor writes: “Where the politics of universal dignity fought for forms of nondiscrimination that were quite ‘blind’ to the ways in which citizens differ, the politics of difference often redefines nondiscrimination as requiring that we make these distinctions the basis of preferential treatment.”

Of course, the politics of difference is not grounded in monolithic epistemological positioning. Whereas many multiculturalists challenge the dominant culture by valuing their own minority cultures and histories, some others theorize that there is no such thing as cultural value. These differences are seen in the 1980s and 1990s attempts to revise the traditional canon. Some such attempts, such as the successful revision of the core freshman reading list achieved at Stanford University in 1987-1988, led by the Stanford Black Student Union, were implicitly based on the notion that all cultures were intrinsically valuable and that judgments to the contrary were based on prejudice or ill-will. But other academic activists, invoking what Taylor calls “half-baked neo-Nietzschean theories,” sought to undermine the very premise of a canon. “Deriving frequently from Foucault or Derrida, they claim that all judgments of worth are based on standards that are ultimately imposed by and further entrench structures of power.”

Taylor thus confirms one of the central points made by François Cusset in his provocative French Theory: How Foucault, Derrida, Deleuze, & Co. Transformed the Intellectual Life of the United States (which I recently blogged about here): that some American scholars misappropriated French theory in ways useful to American academic politics. “If Derrida or Foucault deconstructed the concept of objectivity,” Cusset writes, “the Americans would draw on those theories not for a reflection on the figural power of language or on discursive constructions, but for a more concrete political conclusion: objectivity is synonymous with ‘subjectivity of the white male.’” But as Taylor correctly points out, such argumentation backs the neo-Nietzscheans into a conceptual corner. The so-called “canon wars” were animated by a demand for respect—a politics of recognition. But if evaluations of cultural worth are mere expressions of power, then there is no objective ground upon which to respect or even recognize a minority culture.

Right-wing culture warriors made similar recognitions in their analyses of a nihilistic academic left. For example, in Telling the Truth, her 1995 summary of all that she had learned about the academy while directing the National Endowment for the Humanities, Lynne Cheney scorned the academic fashion of reading power and hierarchy into everything, including canonical texts. “The humanities are about more than politics, about more than social power,” she argued. “What gives them their abiding worth are truths that, transcending accidents of class, race, and gender, speak to us all.” Similarly, in his 1991 bestseller Illiberal Education, Dinesh D’Souza wrote that the so-called “victim’s revolutionaries” who had taken over college campuses in the wake of the 60s had joined forces with a new literary relativism, schooled in Foucault and Derrida. “Because the old notion of neutral standards corresponded with a white male faculty regime at American universities,” he intoned, “minority and feminist scholars have grown increasingly attached to the au courant scholarship, which promises to dismantle and subvert these old authoritative structures.”

Taylor rejects the neo-Nietzschean form of identity politics in favor of that which values minority cultures. Though he is skeptical that all cultures are objectively equal—morally or aesthetically—he believes that we should approach learning about other cultures as if this were so. He concludes his essay: “There is perhaps a moral issue here. We only need a sense of our own limited part in the whole human story to accept the presumption [that other cultures have merit and value]. It is only arrogance, or some analogous moral failing, that can deprive us of this. But what the presumption requires of us is not the peremptory and inauthentic judgments of equal value, but a willingness to be open to comparative cultural study of the kind that must displace our horizons in the resulting fusions.” In other words, we need to be open to having our ethnocentric views challenged.

Framing humanistic learning as such, Taylor vehemently opposes the racial arrogance of the statement supposedly uttered by Saul Bellow: “When the Zulus produce a Tolstoy we will read him.” Taylor works against Bellow’s supposed stricture not only on the grounds that the Zulus might have produced a Tolstoy that is yet to be discovered, but also from the standpoint that Zulu culture might evaluate merit differently and that we would benefit from learning their evaluative system. This is what he means by replacing our narrow horizons with a vision formed from cultural fusion, what he calls the dialogic process.

Saturday, January 29, 2011

The Neocon Take on the "New Class"


My most recent post, on Daniel Bell and his way of thinking about the so-called "new class," brought comments, especially from Tim, asking for clarification. Here goes (briefly):

Out of their political repositioning in the late 1960s and 1970s, neoconservatives developed a critical theory (co-opted from anti-Stalinist thinking) about a so-called “new class” of intellectuals, broadly defined to include all professionals tasked with manipulating language—although more narrowly applied to humanists and social scientists. Members of this “new class,” so the theory went, had turned their backs on the society to which they owed their high-ranking status. A private memorandum written by Daniel Patrick Moynihan for his boss President Nixon in 1970 exemplified this withering mode of criticism: “No doubt there is a struggle going on in this country of the kind the Germans used to call a Kulturkampf. The adversary culture which dominates almost all channels of information transfer and opinion formation has never been stronger, and as best I can tell it has come near silencing the representatives of traditional America.”

The central reason the neoconservative “new class” theory was so plausible is because the university credential system had become the principal gateway to the professional world, a sorting mechanism for white-collar hierarchy. The numbers tell the story: in 1960, there were about 3.5 million Americans enrolled in universities; by 1970, this number had more than doubled to around 7.5 million, as the size of faculties grew proportionally. Historian James Livingston nicely relates this demographic explosion on the nation’s college campuses to the culture wars, or to what he generally describes as the “debates about the promise of American life.” “By the 1970s,” Livingston contends, “the principal residence of that promise was widely assumed to be the new ‘meritocracy’ enabled by universal access to higher education.” To this extent, class resentment aimed at intellectuals made sense, in a misplaced sort of way, since intellectuals indeed held the levers to any given individual’s future economic stability.*

----------------------
* See James Livingston, The World Turned Inside Out: American Thought and Culture at the End of the 20th Century (New York: Rowman and Littlefield, 2009). (Or my review of that book, and his response.) Eric Hobsbawm also relates the growing importance of a university education to the redirection of class resentment against “toffs of one kind or another—intellectuals, liberal elites, people who are putting it over on us.” Eric Hobsbawm, “Interview: World Distempers,” New Left Review 61 (Jan/Feb 2010), 135.

The Neocon Take on the "New Class"


My most recent post on Daniel Bell, and on his way of thinking about the so-called "new class," brought comments, especially from Tim, asking for clarification. Here goes (briefly):

Out of their political repositioning in the late 1960s and 1970s, neoconservatives developed a critical theory (co-opted from anti-Stalinist thinking) about a so-called “new class” of intellectuals, broadly defined to include all professionals tasked with manipulating language—although more narrowly applied to humanists and social scientists. Members of this “new class,” so the theory went, had turned their backs on the society to which they owed their high-ranking status. A private memorandum written by Daniel Patrick Moynihan for his boss President Nixon in 1970 exemplified this withering mode of criticism: “No doubt there is a struggle going on in this country of the kind the Germans used to call a Kulturkampf. The adversary culture which dominates almost all channels of information transfer and opinion formation has never been stronger, and as best I can tell it has come near silencing the representatives of traditional America.”

The central reason the neoconservative “new class” theory was so plausible is that the university credential system had become the principal gateway to the professional world, a sorting mechanism for white-collar hierarchy. The numbers tell the story: in 1960, there were about 3.5 million Americans enrolled in universities; by 1970, this number had more than doubled to around 7.5 million, as the size of faculties grew proportionally. Historian James Livingston nicely relates this demographic explosion on the nation’s college campuses to the culture wars, or to what he generally describes as the “debates about the promise of American life.” “By the 1970s,” Livingston contends, “the principal residence of that promise was widely assumed to be the new ‘meritocracy’ enabled by universal access to higher education.” To this extent, class resentment aimed at intellectuals made sense, in a misplaced sort of way, since intellectuals indeed held the levers to any given individual’s future economic stability.*

----------------------
* See James Livingston, The World Turned Inside Out: American Thought and Culture at the End of the 20th Century (New York: Rowman and Littlefield, 2009). (Or my review of that book, and his response.) Eric Hobsbawm also relates the growing importance of a university education to the redirection of class resentment against “toffs of one kind or another—intellectuals, liberal elites, people who are putting it over on us.” Eric Hobsbawm, “Interview: World Distempers,” New Left Review 61 (Jan/Feb 2010), 135.

Friday, 19 November 2010

“New Class” Thinking and Historiography




I am curious what our readers make of “new class” thinking. Is it a legitimate way of theorizing about the place of intellectuals in our postmodern, postindustrial society? Or is it anti-intellectual nonsense, an updated version of Julien Benda’s La Trahison des Clercs?

In my research on the culture wars, I credit the popularization of “new class” thinking to the neoconservatives. Out of their political repositioning, they developed a critical theory about a so-called “new class” of intellectuals, broadly defined to include all professionals tasked with manipulating language—although more narrowly applied to humanists and social scientists. Most members of this “new class,” so the theory went, had turned their backs on the society to which they owed their high-ranking status. A private memorandum written by Daniel Patrick Moynihan for his boss President Nixon in 1970 exemplified this withering mode of criticism: “No doubt there is a struggle going on in this country of the kind the Germans used to call a Kulturkampf. The adversary culture which dominates almost all channels of information transfer and opinion formation has never been stronger, and as best I can tell it has come near silencing the representatives of traditional America.”

In this sense, “new class” thinking seems more ideological than analytical, consistent with the anti-intellectual histrionics of Rush Limbaugh and Glenn Beck, amusingly portrayed by David Bromwich in his recent New York Review of Books piece, “The Rebel Germ.” Bromwich describes how Limbaugh mockingly portrays Democrats as the party of wimpish intellectuals—updating the egghead label applied to early Cold War era Democrats like Adlai Stevenson and his followers—“composed of superannuated aristocrats [and] pretentious arrivistes...” If this is the sole meaning of the “new class,” then there’s nothing much new about it. But plenty of historians and other intellectuals think the concept analytically useful.

Take Christopher Lasch’s biographer Eric Miller, who buys into Alvin Gouldner’s argument about the “new class,” made in his 1979 book, The Future of Intellectuals and the Rise of the New Class. In a passage cited by Miller to explain Lasch’s early theories about society, Gouldner writes that a “new class” intellectual “desacralizes authority-claims and facilitates challenges to definitions of social reality made by traditional authorities...” Miller continues, in his own disapproving words:

The unintended effect of this way of seeing was on the one hand to diminish the individual’s sense of agency while on the other to empower the individual to think of herself as subject to no authority beyond the particular, socially constructed, historically contingent institutions of her own circumstance. In the end, social-science-inspired historiography only buttressed a vision of the world in which humans, while shaped by powerful social structures, were morally on their own and finally responsible to no authority higher than their own. The atomizing, antinomian tendencies latent in this individualistic ideology did not bode well for communitarian political hopes—such as those that were fueling Lasch’s nascent historical work and political vision (72-73).


In other words, paradoxically, the early Lasch, and “new class” thinkers like him, laid the groundwork for postmodernism in their dismissal of traditional structures of intellectual authority. I say paradoxically because the later, communitarian Lasch worked so hard to put the pieces of traditional structures of intellectual authority back together. I also say paradoxically because the later Lasch used “new class” thinking in both its political and analytical senses, especially in The True and Only Heaven (1991) and The Revolt of the Elites (1995).

So which is it? (I ask somewhat rhetorically, knowing one does not preclude the other.)


Tuesday, 16 June 2009

Alice O’Connor’s Poverty Knowledge: Intellectual History in Action


I am persistently interested in examples of intellectual history that relate to political history, or more specifically, that demonstrate explicit influence over policy. This is not to say that intellectual history needs such a rationale: intellectual life helps us explain a given historical context, with or without explicit reference to its political influence. But my interests tend to gravitate towards intellectual history’s relation to politics, or what might be called “intellectual history in action” (with a nod towards Kevin Mattson, author of Intellectuals in Action, about early New Left intellectuals, including C. Wright Mills and William Appleman Williams).

Alice O’Connor’s 2001 book, Poverty Knowledge: Social Science, Social Policy, and the Poor in Twentieth-Century U.S. History, is an excellent model of intellectual history in action. She painstakingly traces how social scientific thinking on poverty—what she terms “poverty knowledge”—was shaped by policy struggles, and how it helped shape those struggles, often in ways not anticipated by poverty scholars.

O’Connor researched and wrote this book in the dark shadow of welfare reform—the Personal Responsibility and Work Opportunity Reconciliation Act, signed into law in 1996 by Bill Clinton, who made good on his promise to “end welfare as we know it.” The role intellectuals played in paving the way for Clinton’s welfare legislation acts as a microcosm of O’Connor’s larger argument: however much social scientists objected to how their knowledge was put into practice, they were complicit in policies that hurt the poor. In other words, their knowledge, intentionally or not, provided a rationale for policies that sought to remold the behavior of the poor, rather than attend to the structural inequalities of the US economy—“blame the victim” policymaking. O’Connor states it best:

“Following a well-established pattern in post-Great Society policy analysis, the Clinton administration’s poverty experts had already embraced and defined the parameters of a sweeping welfare reform featuring proposals that promised to change the behavior of poor people while paying little more than rhetorical attention to the problems of low-wage work, rising income inequality, or structural economic change, and none at all to the steadily mounting political disenfranchisement of the postindustrial working class” (3-4).

Social scientists have long debated whether culture or economy is more important in determining poverty. O’Connor traces this intellectual history, recognizing that these two modes of thinking—behavioral and structural—are not mutually exclusive. In the early twentieth century, poverty thinkers, taking their cues from the Chicago School of Sociology, fretted over growing “social disorganization” in northern cities, which they attributed to the gap between rural patterns of living, brought north by black migrants, and the grim realities of living in the industrial city. But many of these theorists saw economic policies as the solution to the supposedly degenerate culture of the ghetto dweller. In other words, job creation and higher wages would curtail bad behavior, such as alcoholism, prostitution, illegitimacy, and other vices. (Touré Reed, in his book Not Alms But Opportunity, recently reviewed here, demonstrates how such a framework shaped the Urban League.)

More recent thinkers have combined similar cultural description of the ghetto with calls for structural change, including Daniel Patrick Moynihan, in his infamous Moynihan Report (1965), and William Julius Wilson, in his widely read and controversial book, The Truly Disadvantaged: The Inner City, the Underclass, and Public Policy (1987). The two chapters of O’Connor’s book most interesting to me (Chapter 8: “Poverty’s Culture Wars”; and Chapter 10: “Dependency, the ‘Underclass,’ and a New Welfare ‘Consensus’”) deal with the wide-ranging debates that followed the publication of Moynihan and Wilson’s defining works, and how the policy world responded.

It turns out that, put into practice, Moynihan and Wilson’s calls for economic policy changes went unheeded, not surprisingly, while their descriptions of ghetto life were accentuated in the national discussion. Rather than Moynihan’s “case for national action”—the subtitle of his report—people keyed in on his description of a “tangle of pathology,” a phrase he used to describe the culture of poor black urbanites, a culture he rooted in the black, matriarchal family structure. And rather than Wilson’s calls to create jobs, raise wages, and otherwise stem the negative effects of deindustrialization, an increasingly conservative political climate led people to focus on the culture of the “underclass,” the 1980s metaphor for poor black urbanites.

As O’Connor sees it, the biggest problem with the type of poverty theorizing done by the likes of Moynihan and Wilson—with its focus on the bad behavior of poor, often black, people—is that there are no left or liberal policy solutions to bad culture. Thus, the logical policy conclusion to a scholarly emphasis on ghetto behavior is that government cannot solve the problems of poverty, unless by way of authoritarian behavior modification. In fact, this is the argument made by Charles Murray in his celebrated Losing Ground (1984). It is also the logic of Clinton’s (and Gingrich’s) welfare reform. There we have some of the consequences of liberal poverty knowledge.

AH
