History News Network - Front Page, Sat, 21 Sep 2019

Why Stephen Colbert's Late Night Monologue Effectively Recapped the Latest Democratic Debates

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

After greeting the crowd at Texas Southern University, Julián Castro opened the Democratic debate last week with this important insight: “There will be life after Donald Trump. But the truth is that our problems didn’t start just with Donald Trump, and we won’t solve them by embracing old ideas.” All the Democratic candidates agree with Castro’s analysis of the past: the American problems that need solving have been developing for a long time. They agree that we will go on, perhaps to a bright future, after Trump is gone. The fundamental disagreements among the candidates center on Castro’s rejection of “old ideas”: how much progressive change is the right amount in this election?

 

Joe Biden represents the most moderate positions, although his ideas are hardly old. In fact, he has had to repudiate many of his old ideas during this campaign: working with segregationists in Congress was a good thing; Obamacare as it was enacted was good enough; harsh sentencing did less to control crime than to put a generation of mostly African Americans behind bars. Politicians from the 1960s have had to change many fundamental ideas, but are very bad at admitting that positions they took long ago are not right for today.

 

Castro, and many of the other candidates who appeared at the third debate, as well as others who still believe they have a chance, criticize Biden, hoping to peel off the moderate Democratic voters who support him. On health care, which has taken center stage as the crucial issue of 2020, Castro magnified a minor difference with Biden, but took what has become the moderate position, arguing for the retention of private health insurance plans: “If they choose to hold on to strong, solid private health insurance, I believe they should be able to do.” He claimed to be fulfilling the legacy of Barack Obama, a key clue that he stands with the more moderate candidates.

 

At the other end of the field, Elizabeth Warren and Bernie Sanders want to eliminate private health plans entirely in favor of Medicare for All. Sanders and Warren hold up the private insurance industry for ridicule as siphoning off billions of dollars in profit. The differences between them lie less in policy than in approach. Warren has plans for structural reform in favor of the neglected little guy, while Sanders instead calls for a revolution against the oligarchy.

 

Many of the more moderate Democratic candidates have already fallen by the wayside: John Hickenlooper, Steve Bullock, Seth Moulton, Kirsten Gillibrand, Bill de Blasio, John Delaney. The latest poll from yesterday, like all of last week’s polls, shows Biden in the lead, but the very progressive Sanders and Warren combined have significantly more support. Among the rest, only Kamala Harris, Pete Buttigieg, and Beto O’Rourke consistently get more than 2%. The field is thankfully shrinking, and will gradually become more manageable. The election is still nearly 14 months away.

 

As a prelude to the actual debate, ABC chose a sentence from each candidate’s earlier speeches to play in the order that the candidates were ranked. I found it notable that all these excerpts except Biden’s (“I will be a president for every American.”) talked about “we”. Who knows how that came about? Did someone pick these clips to demonstrate the fundamental unity among all Democratic candidates? I don’t know if we’ll ever find out.

 

That message of unity is my “takeaway” from the campaign so far. The cohesion and shared values are hard to see, though. The nature of a campaign is that everyone is competing with everyone, and against everyone. The media compulsion to broadcast conflict shapes the whole process, for candidates and for us all. That was apparent in the moderators’ questions: instead of asking “what do you believe?” or “what would you do?”, they demanded discussion of disagreements.

 

To see how the media shapes our impressions of the campaign, it is instructive to compare two attempts to summarize the debate in a few clips: by ABC News, as fact, and by Stephen Colbert, for laughs.

 

Right after the debate, ABC produced 4 minutes of “Moments That Mattered”. The selection was a serious exercise in media repackaging. Every heated exchange was included: Biden and Sanders arguing about health care; Castro castigating Biden about the small differences in their health care plans and about his memory, and all of the other conflicts involving Castro; Klobuchar versus Sanders about health care. Harris was shown criticizing Trump, Booker only got to talk about his early electoral failures, Buttigieg only to complain about the emphasis on conflict. The more extreme proposals were highlighted: Yang’s philanthropic offer of $1000 a month to some needy families; Beto O’Rourke saying he would take away assault rifles. Elizabeth Warren apparently did not matter to ABC and was not shown at all, because she spent her time explaining rather than attacking.

 

Stephen Colbert’s monologue later that night told a different story, not only because he is much funnier. For 12 minutes, he used excerpts of what America had just seen to get laugh after laugh. Colbert’s principles were clear: portray every candidate truthfully, and then make fun. He made fun of Sanders’ voice, Biden’s age, Harris’s vagueness on what to call the unmasked little Wizard of Oz, and Klobuchar’s movie reference.

 

Colbert began by talking about “fireworks” and gleefully displayed a few moments of real one-on-one conflict. But by the time Colbert wound up, most of the candidates had their say about something important, even when he fantasized something funny in response. Klobuchar expressed the “existential threat” to our environment. Beto told the world he would take away assault rifles. Bernie said that Medicare for All would cost our society much less than we’re spending now. Yang made his remarkable philanthropic offer. Harris showed off her plan on how to deal with Trump – laugh at him. Biden emphasized his link to Obama. Warren got a brief moment of real American family à la Norman Rockwell, which is a staple in her campaign. Buttigieg summarized a universal, but ever-ignored wisdom about our never-ending wars – don’t start them. Castro said everybody would be covered under his health plan. Only Booker was left out.

 

Age is playing a surprising role in this campaign. It certainly matters, but it’s hard to say how it matters. Many of Colbert’s jokes about Biden and Sanders laughed at old men. The clip of Castro and Biden interrupting each other was about age. But Buttigieg, the youngest candidate since the beginning, said nothing disparaging about the older candidates.

 

Warren is 70, but she gets left out of the public laughter about the elderly. Maybe because her age is not apparent in what she does. But it’s notable that everybody finds her hard to criticize. That may be a hidden advantage for her campaign.

 

Maybe a difference in purpose led to these differences in reportage. Although Trump incessantly whines about the mainstream networks as “fake news” trying to defeat him, ABC was much more interested in promoting conflict as significant: who’s ahead, who’s desperate, who is nasty about whom. All the networks and all the print media try hard not to put themselves on one side or the other, even as they pick and choose what to tell us.

 

Colbert was clear about his purpose in his monologue. Toward the beginning, he called Trump a non-violent criminal. At the end, he said: “What did we get? . . . hopefully, one person who can beat Donald Trump.”

 

The news isn’t fake, but it is spun, not false, but often misleading about important things. Colbert tells obviously fake stories, but gives us a better picture of reality. Unfortunately, this election is not a laughing matter.

https://historynewsnetwork.org/blog/154249
Roundup Top 10!  

The State Department is weak and getting weaker. That puts us all at risk.

by Mark Edwards

We need a robust diplomatic engine at the heart of our foreign policy.

 

When Adding New States Helped the Republicans

by Heather Cox Richardson

Putting new stars on the U.S. flag has always been political. But D.C. statehood is a modest partisan ploy compared with the mass admission of underpopulated western territories—which boosts the GOP even 130 years later.

 

 

The historical profession's greatest modern scandal, two decades later

by Bill Black

Historians are criticized for not engaging with the public--and then criticized for how they engage when they do. Looming in the background is the Michael Bellesiles controversy.

 

 

The populist rewriting of Polish history is a warning to us all

by Estera Flieger

Thirty years after communism ended, Poland’s past is again being manipulated for political motives, this time at a museum in Gdańsk.

 

 

Why Democrats can’t speak for the ‘silent majority’

by Seth Blumenthal

President Trump is exactly the kind of champion the voting bloc wants.

 

 

Joe Biden isn’t the only Democrat who has blamed black America for its problems

by Marcia Chatelain

Well-meaning liberals have long failed to recognize their own role in systems of oppression.

 

 

The History of Citizenship Day Is a Reminder That Being an American Has Always Been Complicated

by S. Deborah Kang

“We welcome you,” Truman declared, “not to a narrow nationalism but to a great community based on a set of universal ideals.”

 

 

Ending the Afghan War Won’t End the Killing

by Stephanie Savell

Since 2015, casualties from explosive remnants of war and abandoned IEDs have been rising rapidly.

 

 

When Texas was the national leader in gun control

by Brennan Gardner Rivas

How the land of gunslinger mythology regulated weapons to reduce violence

 

 

There Are No Nostalgic Nazi Memorials

by Susan Neiman

Americans could learn from how drastically German society has moved away from the nadir of its history.

 

 

 

Two re-namings, two defaults. How and how not to use history and public memory at Yale

by Jim Sleeper

“The real work for a place at Yale is not about the name on the building. It’s about a deep and substantive commitment to being honest about power, structural systems of privilege and their perpetuation.”

https://historynewsnetwork.org/article/173094
Gloria Steinem, the Women’s Movement and a Big Question

 

Gloria, A Life, the new play by Emily Mann about women’s rights activist Gloria Steinem, opened last weekend at the McCarter Theater, in Princeton, leaving a big, big question.

 

The play is not a biography of Ms. Steinem. It is not a drama, either. It is, well, an “experience.” Playwright Mann has put together a story in which Ms. Steinem, she of the famous aviator glasses, serves as the narrator for a tale about dramatic improvements in the lives of women since the late 1960s. The story of feminism is powerful. The “experience” is not only wonderful but admirable.

 

The big question, though, is where is Gloria Steinem?

 

You find out a little bit about her tentative relationship with her mentally unstable mother, her poor and struggling father, a bit about the founding of New York Magazine and Ms. Magazine, and her roles in both, and a whole lot about women’s leaders, such as Bella Abzug. You discover very little about Gloria, though. She is one of the most famous women in America, in the history of America, and yet the play does not tell you how she became so well-known and influential.

 

The story, told mostly by Ms. Steinem, played nobly by Mary McDonnell, puts her here and there in plot turns, and in the middle of hundreds of enormous photos projected onto the stage walls, but it never delves into what makes her tick. She is a great writer. She is a good speaker. She is flamboyant. She knows lots of important people. But how did all of that jell together to make her so famous? You do not find out and that is a shame.

 

There is much information missing, too. She worked for a research company connected to the CIA, but that is not in the story. There were a lot of people in the feminist movement who did not like her and accused her of using the movement to enhance her own glamorous image. She was very involved in politics and was a delegate to one Democratic convention, but little is made of that. She had a lot of critics of many of her liberal views, and that is missing, too. There are large gaps of time in her life, such as her post-college days, that are simply unaccounted for.

 

The play begins with Gloria graduating from Smith College in 1956 and going to work in New York City as a freelance writer in journalism, a field, at the time, run nearly completely by men. She has a hard time landing assignments until, by chance, she works for a while as a Playboy bunny in 1964 and writes a story about it. That gives her some notoriety and propels her into the freelance writing field.

 

She winds up covering women’s rights protests and abortion marches. Then she blends into them and becomes a leader and speaker.

 

You find out little about her personal life. She had a sister, but the sister is not in the play. She had relationships with lots of men, she says, but did not get married until she was 66, to David Bale, the father of actor Christian Bale, and then her husband died just three years later. Why no other marriages or deep relationships? Did she have hobbies? Pets? Favorite books? Whom did she admire?

 

While all of that is a bit disappointing, the “experience” is terrific. Throughout the play, Gloria meets and works with numerous feminist leaders, such as Ms. Abzug (remember her and those fabulous hats?). Their story is, of course, a taut drama about American history (a women’s history going back to the 1840s). That story, of the marches and rallies, magazines, women’s colleges, court decisions and enormous national publicity, makes for a triumphant American story and Ms. Mann writes it well. The “experience” makes the play worth seeing.

 

Mann, who re-staged the play (Diane Paulus directed it last year in New York), gets other fine performances (in addition to Ms. McDonnell) from Patrene Murray, Brenda Withers, Gabrielle Beckford, Mierka Girten, Erie Stone and Eunice Wong.

 

Young people, especially, should see the play. Women did not get where they are today by writing letters to the editor, baby – they marched in the streets and shouted from the mountaintops for it.

 

PRODUCTION:  The play is produced by the McCarter Theater in association with the American Repertory Theater at Harvard University and with special arrangement with Daryl Roth. Scenic Design:  Amy C. Rubin, Costumes: Jessica Jahn, Sound: Robert Kaplowitz and Andrea Allmond, Lighting:  Jason Lyons. The play is re-staged by Ms. Mann. It runs through October 6.

     

https://historynewsnetwork.org/article/173071
How the Kikotan Massacre Prepared the Ground for the Arrival of the First Africans in 1619

A painting depicting the construction of a fort at Jamestown, close to Point Comfort, from National Park Service

 

 

Reckoning with the past is never easy. We’ve seen this in the United States and the United Kingdom this summer, as British universities grapple with their connections to the wealth and human suffering resulting from transatlantic enslavement, and Americans debate the historical meaning of the 400th anniversary of the arrival of the first enslaved Africans in English North America. 

 

Commemorating the 400th anniversary of what the English colonizer John Rolfe described as the “20 and odd Negroes” (a number that was actually closer to 30) has dominated social media and the summer’s newscycle. But there’s an aspect of this commemorative activity that hasn’t received much attention. I refer specifically to the violence that occurred at Point Comfort less than a decade before the slave ship White Lion made anchor in August 1619. On that spot, a bloody event worthy of historical introspection took place: the massacre of the Kikotan Indians. That massacre is important because it made it possible for the English to take Native lands and build Fort Henry and Fort Charles. The Kikotan massacre prepared the ground for the arrival of the first Africans in Virginia.

  

The history of English North America and what became the United States is a complex and often-violent story involving the enslavement of African peoples and the territorial dispossession and genocide of Native American communities. This is an uncomfortable history and neither the British nor Americans have fully reconciled it with the contemporary economic, political, and social dimensions of their respective societies.

 

Most Americans don’t like to think about genocide as a foundational part of US history, while the English certainly don’t view their forebears as capable of perpetrating the mass killing of indigenous people. However, historian Jeffrey Ostler makes a compelling case for how genocide is woven into the fabric of North American history in his most recent book, Surviving Genocide. In Virginia, English colonization sparked dramatic population declines among Native American communities. While Virginia Indians numbered about 50,000 in 1607, by the early twentieth century, only a little over 2,000 remained.

 

But did the English initiate a genocide against Virginia’s Indian people? To answer this question it’s important to define genocide. The United Nations defined genocide in 1948 as “acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group.” Genocide can involve killing members of a group, causing “serious bodily or mental harm,” deliberately creating conditions designed to physically destroy a group “in whole or in part,” imposing measures that prevent births, and forcibly transferring children out of one group and to another. 

 

This definition describes not only the “founding” of Virginia but the course of US history and its relationship to Native America. Importantly, the genocide of Virginia Indians didn’t occur within a discrete time period and under well-established bureaucratic conditions; genocide in Virginia unfolded slowly over a period of decades.

 

The opening act in the tragedy of Native land loss, attacks on indigenous culture and language, the separation of children from families, and the physical destruction of entire communities, began in 1607 when English ships passed through the mouth of the Chesapeake Bay. The English aboard those vessels passed lands belonging to the Accomac, Nansemond, Warraskoyaak, and Kikotan (or Kecoughtan) people. These weren’t the first European ships the region’s Native people saw, but the English were different: they were determined to stay. This wasn’t good news for the Kikotan. They’d once numbered as many as 1,000, but by 1608 the English estimated that the Kikotan had as few as 20 fighting men and perhaps a total population of no more than 60. The Kikotan had been reduced to a small community vulnerable to external attacks. Joining the Powhatan Chiefdom, albeit by force, under the leadership of Wahunsenacawh (Chief Powhatan) offered a degree of protection from both European and Native American violence and captive raids.

 

In the spring of 1608, though, the English probably didn’t seem like much of a threat to the Kikotans because the English were starving. Although the Kikotans and other Native communities provided the English with small parcels of food, in the spring of 1608 the English were on the verge of abandoning Jamestown. The colonizers were saved, however, by the arrival of supply vessels from England.

 

The English recognized they couldn’t sustain a colony that relied on supply ships from England. They needed to make changes. One of those changes was establishing trade relationships with Virginia Indians. An Englishman by the name of John Smith helped to initiate trade talks. Smith was an ambitious man determined to make a name for himself in Virginia. Unfortunately for Smith, the Kikotan “scorned” his advances to engage in trade talks, allegedly mocking him for his inability to feed himself. Smith wasn’t amused. He immediately let “fly his muskets,” whereupon the Kecoughtan “fled into the woods.”

 

Such incidents seem small and petty when viewed in isolation. However, these types of encounters grew in regularity and fueled mutual mistrust along Virginia’s Anglo-Indian frontier. 

 

That mistrust grew between 1609 and 1611 when the English made plans to build forts and establish homesteads on indigenous lands at the mouth of the Chesapeake Bay. The Kikotan needed only to look across the bay to see how English homesteads had started to displace Nansemond families. English intentions were clear. Slowly, methodically, a genocide was unfolding.

 

Two factors overlapped to result in the genocide of the Kikotan people. First, English colonizers began establishing homesteads on Kikotan lands. Just as they did among the Nansemond, English land use practices were designed to sever indigenous people from their crops, sacred spaces, and homes. 

 

Second, violence played an important role in eliminating the remaining Kikotan people from their homelands at the mouth of Chesapeake Bay. In 1610, the English moved aggressively against the Kikotans. This sudden English assertiveness was in response to Kikotans aligning with neighboring indigenous tribes in opposition to the construction of English forts – including the fort that witnessed the arrival of the first Africans in Virginia. By early July, 1610, Sir Thomas Gates, the governor of Virginia, was "desyreous for to be Revendged upon the Indyans att Kekowhatan" for their opposition to English colonial expansion.

 

Colonial officials initiated a plan to “drive” the remaining “savages” from the land. The violence directed against the Kikotan people in July 1610 became known as the Kikotan massacre. The exact number of Kikotan deaths is unknown. Those who did survive the massacre fled their homelands and took refuge among neighboring indigenous communities. The Kikotan’s connection to their homeland was lost.  

 

For the Kikotans, the physical and psychological toll of the 1610 massacre was compounded by English actions in the succeeding years. To reinforce the sense of loss that Kikotan people undoubtedly felt, the Virginia General Assembly agreed to “change the savage name of Kicowtan” to Elizabeth City in 1611. The Kecoughtan name remained to demarcate the foreshore, but in 1619 English families pushed to have the Kikotan erased from memory and the Corporation of Elizabeth City established. As the "20 and odd negroes" stepped onto Virginia’s shore, the colonizers were writing their name over a Native landscape.

 

The English were changing the landscape that Virginia’s Indians had nurtured for as long as anyone could remember. When Wahunsenacawh died in 1618, less than a year before the White Lion set anchor at Point Comfort, Opechancanough, Chief Powhatan’s brother, took up the fight against English incursions into Powhatan homelands. 

 

Over the next two decades, violence between English colonizers and Powhatan warriors broke out in fits and starts throughout Virginia. The English, however, weren’t leaving. In 1624 Virginia was declared a royal colony and Native people continued to use violence to prevent the growing number of colonizers from squeezing them off their homelands. 

 

Virginia’s Indians were up against a determined foe. Governor Wyatt’s response to Indian resistance in the 1620s captured both the intent and determination of the English: “to root out [the Indians] from being any longer a people.”

 

Wyatt’s words are chilling. They reveal that prior to a treaty between the Powhatan and English in 1646, guerrilla-style warfare punctuated life in Virginia. So long as this fighting continued, the English would give no quarter to their enemies. Native people, reduced in numbers and confined to reservations by the 1650s, suffered traumas that live on today in the stories Virginia Indians tell about seventeenth-century English colonizers. 

 

In remembering 1619 it’s right to reflect on the lives of the African people who disembarked from the White Lion on the traditional homelands of the Kikotans. We should also remember the loss of indigenous life in Virginia, losses that grew as the decades unfolded. We need not look too far beyond the events of 1610 and 1619 to see how the English treated Native resistance to their expansive plans for a settler colonial society supported by plantations and the exploitation of unfree labor. 

 

At the end of September, Norfolk State University in Virginia will host academics, journalists, and community members at a summit called “1619: The Making of America.” Sponsored by American Evolution, the summit will undoubtedly provide a forum for reflecting on Virginia’s past. I also hope that in trying to understand the “Making of America” we remember that English (and ultimately, United States) colonialism was (and is) built not only with the labor of stolen bodies from Africa, but the stolen lands of Native Americans. 

 

https://historynewsnetwork.org/article/173032
The Stasi's Cold War Espionage Campaign Inside the Church

 

We remember them: the East German church peace prayer meetings that, 30 years ago, grew and grew, spilling out into the streets and setting in motion protests too large for the government to contain. That November, the protests led to the fall of the Berlin Wall. It felt like the victory of peace prayer participants over a ruthless regime. But for four decades, the Stasi had managed to prevent that very outcome.

 

Ever since East Germany’s birth on 7 October 1949, the country’s churches had been the regime’s particular foe. Not only did they represent a worldview at odds with the atheism the regime stood for; they also had countless connections to fellow Christians abroad, including in Western countries. On top of that, one of the country’s denominations -- the Lutheran Church -- was a powerful institution, comprising not just a large part of the population but also pastors prepared to speak out against the government. When the Stasi, the Ministry for State Security, was created in 1950, keeping an eye on the country’s churches and their members became one of its central tasks.

 

How do you keep an eye on Christians to prevent them from voicing demands such as free and fair elections? The Soviet Union took a brutal approach, sending countless Christians to penal camps on trumped-up charges. The Stasi’s church department, by contrast, opted for a more cunning approach (after a relatively brief Soviet-inspired experiment). “Let them pray, let them sing, but they shouldn’t do politics,” Joachim Wiegand told me. 

 

You have probably never heard of Wiegand. That’s because he was a Stasi officer, a man operating in the shadows. During the final decade of the German Democratic Republic’s existence, he led the Stasi’s church department, called Department XX/4. He has never before been interviewed for a book; like most Stasi officers, he – probably correctly – surmises that any interviewer will misconstrue his words. To my great surprise, Wiegand agreed to be interviewed for my new book about Department XX/4’s activities, God’s Spies (published by Eerdmans on September 17). Wiegand and I spent countless hours together, discussing every detail of how Department XX/4 worked to prevent East Germany’s Christians from doing politics. After that initial period of focusing on punishment, Department XX/4 mostly relied on seduction. It got clerics – from bishops to pastors-in-training – and other Christians to become agents.

 

Imagine the setting: a secret police agency, staffed by men with impeccable proletarian credentials but no church connections, trying to convince pastors to join the Stasi as agents. (The Stasi’s word for such undercover agents was Inoffizieller Mitarbeiter, IM.) Like other Department XX/4 officers, Wiegand -- a former farmhand who had been one of the first graduates of the Stasi’s training school -- learnt churches’ terminology and structures. He identified which pastors could be suitable recruitment targets: perhaps they were frustrated with the slow advancement of their careers; perhaps they wanted advantages such as foreign trips. Before even making the initial contact with a pastor, Wiegand and his colleagues had conducted thorough research on the potential recruit, aided by input from existing IMs.

 

When a pastor had signed on the dotted line, he reported on whichever setting he found himself in. Some pastors needed more guidance from their case officers than others as to what sort of material might be useful to the Stasi, but the result was a vast collection of reports. Pastors reported on members of their congregations, on their fellow clerics, on international faith organizations. They told Department XX/4 which decisions church leaders were planning to make; sometimes they even influenced those decisions in a Stasi-friendly direction. And all along, they had to worry about other pastor agents in their midst. Because nobody knew who else was working for the Firm, everyone might be doing so. It was a hall of mirrors. And all along, Department XX/4 collected the agents’ reports. Nothing was too small to be documented, not even the style of beards certain theology students wore: it indicated a willingness to rebel against the regime.

 

But despite their frequently gossipy reports, the pastor agents were instrumental to the survival of the German Democratic Republic. Because churches formed the country’s only semi-independent space, opposition-minded citizens of all stripes took refuge in churches’ seminaries, their environmental groups, their peace prayer meetings. If the Stasi was to prevent discontent from festering in churches around the country, its pastor agents had to keep a very close eye indeed on their fellow Christians. 

 

In God’s Spies, I follow the ecclesiastical-and-intelligence careers of four of those pastor agents. One, an academic who felt overlooked, spied for career purposes, badmouthing his peers while tooting his own horn. Another became a rare pastor agent on permanent foreign assignment. A third masterfully combined a career as a pastor-and-church journalist with undercover duties; all in the name of helping his country survive. As East Germany was collapsing, an American Christian magazine published an article by the pastor, its editors clearly in the dark about his dual affiliation. A fourth pastor deviously infiltrated Western Bible-smuggling groups, preventing the books from reaching their destination and endangering the lives of the intended recipients.

 

Department XX/4 achieved great success: without its infiltration of every corner of East German Christianity, the church-led protests would likely have gained steam much earlier. Would the Berlin Wall have fallen earlier too? It is, of course, impossible to tell. But through their diligent work, the Stasi’s pastor agents – who have never before been the subject of an English-language non-fiction book – played a vital role in helping the German Democratic Republic limp along until its 40th birthday. On that day, Mikhail Gorbachev dutifully attended his fellow Socialists’ proud celebrations. Two days later, on October 9, record numbers of Leipzigers attended the peace prayer meetings in their city, then marched through the city. A month after that, the Berlin Wall fell. No snooping in the world could have saved a country as unwanted by its citizens as East Germany.

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173040
Trump—A Wannabe Dictator in Training

Last week, Trump tweeted a "Trump 2024" sign. 

 

 

Many observers find it easy to believe that what Donald Trump really wants is not to be president of the country, but its dictator. 

 

Indeed, he has suggested how much he would enjoy a third term, perhaps more, even though the Constitution flatly forbids it. 

 

In a Father's Day tweet he fantasized over the possibility, suggesting the public might “demand” that he serve a third term. The good news, he wrote, “is that at the end of six years, after America has been made GREAT again and I leave the beautiful White House (do you think the people would demand that I stay longer? KEEP AMERICA GREAT)….”  

 

After Chinese president Xi Jinping abolished term limits in his own country, Trump said he liked the sound of that. “President for life. No, he’s great. And look, he was able to do that. I think it’s great. Maybe we’ll have to give that a shot some day.” Just joking? It is not all that laughable. [The quotes in these two paragraphs are from an article by Gabrielle Bruney in Esquire, July 16, 2019. They have also appeared in the New York Times and Washington Post.]

 

So what does a wannabe dictator who wants to give that a shot some day have to do day-by-day to reach his miserable goal? And how is Trump measuring up?

 

It is not an easy jump from democracy to dictatorship in our country. There is that dratted Constitution in the way. There are laws to be violated. There are critics, opponents, Democrats, immigrants, and the unfaithful to be purged. There is the critical mainstream media to be done away with. There are many lies to be told. It is hard, nasty work and Trump is fast running out of time. 

 

A dictator must as soon as possible, by any means, shut down the media or control it. No dictatorship can exist in the presence of a free press. So far Trump has only been able to call the media names—"the fake news media,” “the enemy of the people,” “the opposition party.” From the time of announcing his candidacy to the end of his second year in office he had tweeted 5,400 times. Some 1,339 of those tweets were attacks of some kind on the media or individual reporters. This doesn’t count the times he has harangued the media in his speeches. He has turned many of his supporters into media haters like himself. 

 

He is particularly incensed by the Washington Post and the New York Times, both of them highly critical of him. He has said that in the event that the public did demand  that he stay, “both of these horrible papers will quickly go out of business & be forever gone!”

 

A dictator must avidly hate some minority group and want to purge it. Trump hates Muslims. He is not too crazy about blacks and Hispanics either. But there are now many of them, which makes a purge a very tall order. On the other side of this coin, Trump seems in no mind to purge white supremacists, who love him as one of their own.

  

A dictator must tell lies, lots of lies. Trump is far and away the champion liar in presidential history. News outlets that keep track of such things report that he averages about six lies per day.

 

A dictator must be willing to exterminate people, lots of people. Since this is forbidden in a democracy, Trump can only slander people with a tweet or in a speech, or fire them. In a successful dictatorship you just shoot anybody you want any time you want. But it is not a far jump to think of Trump’s firings and insults as symbolic exterminations. Nor is it a far leap to think of the facilities on our southern border, where would-be immigrants are penned up and families separated, as concentration camps.

 

A dictator often has an affinity with other established dictators. Trump admires and is on friendly footing with two of the world’s foremost dictators—Vladimir Putin of Russia and Kim Jong-un of North Korea. He has met both and spoken kindly of them. And there is that admiration of China’s Xi Jinping for making himself president for life.

 

A dictator is narcissistic, in love with himself and glory-seeking, demanding and getting total obedience and acclamation from his followers. In his first cabinet meeting Trump invited each member to praise and celebrate his greatness. He loves his many rallies out in the country, where he basks in the acclamation of his avid followers, who are said to make up a strong third of the country’s population. 

 

A dictator is lawless, often pushing the limits of his power. Breaking a law seems meaningless to Trump. Whether intentionally or ignorantly, he has violated a host of laws, and many of his actions have later been challenged or overturned by the courts. That is why it is so important for Trump and the Republican Party to appoint sympathetic judges to as many courts as possible. As for the Constitution, one can question whether he has ever read it, much less whether he worries about violating it.

 

A dictator questions the legitimacy of his opponents, demonizing them and curtailing or abolishing their rights. In the 2016 campaign for president, Trump suggested he might not accept the legitimacy of his opponent, Hillary Clinton, if she won the election, and suggested several times that she ought to be thrown in prison. He has tweeted hatred of many of his detractors and has encouraged brutality against anti-Trump demonstrators at his rallies. 

 

As a wannabe dictator in training he has been doing reasonably well.

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173039
The Native Americans Who Assisted the Underground Railroad

 

In an interview conducted in 2002, the late Helen Hornbeck Tanner, an influential historian of the Native American experience in the Midwest best known for her magisterial Atlas of Great Lakes Indian History (1987), reflected on the considerable record of “coexistence and cooperation” between African Americans and Indians in the region.  According to Tanner, “[an] important example of African and Indian cooperation was the Indian-operated Underground Railroad.  Nothing about this activity appears in the historical literature.”

 

Tanner’s assertion is largely true.  Native American assistance to freedom seekers crossing through the Midwest, then often called the Old Northwest, or seeking sanctuary in Indian villages in the region, has largely been erased from Underground Railroad studies. Two key examples from the historiography of the Underground Railroad demonstrate the extent of that deficiency.  The first volume, The Underground Railroad from Slavery to Freedom (1898) by pioneering Underground Railroad historian Wilbur H. Siebert, is still a beginning point for many who investigate efforts to assist freedom seekers in the pre-Civil War Midwest.  Siebert collected testimony from hundreds of participants and witnesses in the struggle and converted this documentary record into a broad and influential work that is still in print.  Exactly two sentences in a work of 358 pages discuss the aid given to freedom seekers by Native Americans, in this case the hospitality afforded at Chief Kinjeino’s village on the Maumee River in northwestern Ohio.  

 

Fast forward nearly eleven decades to the second work, perhaps the most extensive and authoritative Underground Railroad interpretation since Siebert.  Bound for Canaan: The Underground Railroad and the War for the Soul of America (2005) by journalist and popular historian Fergus Bordewich does only slightly better.  It includes four sentences out of 439 pages on the assistance given to freedom seekers by Native Americans passing through the region, in this case the aid provided to Jermain Loguen and John Farney in northern Indiana and Josiah Henson and his family in northwestern Ohio.  Readers of these two volumes could be excused for thinking that there was little interaction between freedom seekers and Native Americans in the Midwest.

 

There are at least two primary reasons for the absence of Native Americans in the historiography of the Underground Railroad. 

 

First, both freedom seekers fleeing slavery in the South and the Native Americans who assisted them in the Midwest came from oral cultures.  Scholars of slave literacy estimate that only five to ten per cent of those in bondage could read and write.  Although the percentage might have been slightly higher among those who made their way to freedom, a small minority of freedom seekers had achieved literacy.  Indians across the pre-Civil War Midwest also lived in primarily oral cultures.  Scholars have noted that “oral histories were central to indigenous society,” making use of mnemonic devices and reflected in storytelling.  As a result, both freedom seekers and Native Americans left a limited written record of their interaction.

 

Second, local histories, including the large volume of county histories produced across the Midwest in the late nineteenth and early twentieth centuries, start the clock with white settlement, ignoring Native American contributions generally and particularly those after the War of 1812.  In fact, most of these county histories make it seem as if Native Americans disappeared from the lower Midwest by the end of the War of 1812.  My own investigation of county histories for nearly two dozen counties in northwestern Ohio, an area where Native Americans were the primary population group until the 1830s, shows that Native Americans are largely excluded from this later history.  When the Underground Railroad is mentioned, it consists of white settlers aiding anonymous freedom seekers and is completely a post-settlement phenomenon of the 1840s and 1850s.  This is reflected as well in Siebert’s massive project in the 1890s.  As a result, the interaction of freedom seekers and Native Americans in communities across the Midwest has been obscured.

 

In spite of the absence of Native Americans in the historiography of the Underground Railroad, a scattered documentary record exists to demonstrate that freedom seekers received significant assistance from Indians in the pre-Civil War Midwest. There are at least five major evidences of this interaction.

 

The first of these evidences is simple geography.  Tiya Miles, who has written extensively about African American-Native American interaction, notes that “the routes that escaping slaves took went by these (Native) communities.”  Examples abound, especially in the lower Midwest.  The Michigan Road, a major thoroughfare for freedom seekers making their way through central Indiana, ran through or past dozens of Potawatomi villages north of the Wabash.  Hull’s Trace and the Scioto Trail ran through or past Ottawa and Wyandot reserves, respectively, in northwestern Ohio.  Another major trail ran through Shawnee villages in western Ohio, before reaching Ottawa villages at Lower Tawa Town and Chief Kinjeino’s Village on the Maumee River.  From about 1800 to 1843, a maroon community of sorts existed at Negro Town in the heart of the Wyandot Grand Reserve on the Sandusky River.  It was peopled by runaway slaves from Kentucky or western Virginia who had followed the Scioto Trail northward.

 

A second of these evidences can be found in the slave narratives, autobiographies written by freedom seekers after their escape from bondage.  Several of these tell of assistance received from Native Americans. Two provide particularly instructive content about the Midwest.  Josiah Henson’s narrative traces his and his family’s escape from Kentucky to Upper Canada (contemporary Ontario) in 1830, eventually taking them up Hull’s Trace through the heart of Indian Country in northwestern Ohio.  There they were assisted by Native Americans (probably Wyandot) who fed them “bountifully” and gave them “a comfortable wigwam” to sleep in for the night.  The next day, their Indian companions accompanied them along the route for a considerable distance, before finally pointing them toward the port of Sandusky on Lake Erie, where they could take a vessel across to Upper Canada. Jermain Loguen’s narrative traces his escape with John Farney from Tennessee to Upper Canada in 1835 by way of central Indiana.  North of the Wabash, they were aided at a number of Potawatomi villages, receiving food, shelter, and direction from their Indian hosts.  Upon reaching Michigan Territory, they turned eastward and crossed into Upper Canada.  Both Henson and Loguen later achieved literacy and became well-known black abolitionists.

 

A third of these evidences survives in Native American oral tradition.  One of the best examples comes from Ottawa oral tradition in western Michigan.  A story of helping twenty-one freedom seekers to reach Upper Canada was passed down through three generations of the Micksawbay family, before it was finally recorded in print by Ottawa storyteller Bill Dunlop in the book The Indians of Hungry Hollow (2004).  The oral tradition recounts an episode in the 1830s that involved the group of freedom seekers, who had gathered at Blackskin’s Village on the Grand River.  Ottawa elders, fearful that these runaways would be overtaken and captured by slave catchers, and sensing that sending them to Detroit was unsafe at the time, arranged for ten Ottawa men to accompany them overland to the Straits of Mackinac, where they were handed off to friendly Ojibwa.  The latter took them across by canoe to Michigan’s Upper Peninsula, and then accompanied them overland, crossing into Upper Canada by way of Neebish Island.  Oral history interviews with Native American descendants in the Midwest have also proven useful in establishing elements of this African American-Native American interaction.

 

A fourth of these evidences comes from the memoirs, letters, and journals of white traders, trappers, missionaries, and soldiers who lived in or passed through Indian Country in the Midwest and recorded their experiences in textual form.  My own research in northwestern Ohio has located discussions of Native American assistance to freedom seekers in the memoir of trader Edward Gunn and the letters to Siebert by trader Dresden Howard, both of the Maumee River valley, and the letters and journals of Moravian missionaries and U.S. soldiers in the War of 1812 who recounted life in Negro Town. A particularly instructive example appears in the memoir of Eliza Porter of Wisconsin.  She and her husband Jeremiah, missionaries in Green Bay, cooperated with Native Americans on the Stockbridge reservation east of Lake Winnebago in aiding fugitive slaves making their way through eastern Wisconsin to Great Lakes ports in the 1850s.  On one occasion, detailed in Porter’s memoir, they assisted a family of four runaways from Missouri in avoiding slave catchers and bounty hunters said to be “sneaking around” the reservation.  The Stockbridge helped their guests make their way to Green Bay and gain passage on the steamer Michigan, which carried them to freedom in Canada West (formerly Upper Canada).

 

A final evidence appears in the bodies of freedom seekers and Native Americans and their descendants. This takes us into the realms of genealogy and the DNA record and particularly applies to those freedom seekers who sought permanent sanctuary in Native American villages in the Midwest. Native American genealogist Don Greene has found extensive evidence of African American ancestry among the Shawnee in the region.  A case in point is Caesar, a Virginia fugitive who escaped across the Appalachian Mountains to the Ohio Country in 1774 and was adopted by the Shawnee.  He married a mixed-race Shawnee woman named Sally and fathered children known to history as “Sally’s white son” and “Sally’s black son” due to their difference in hue.  The latter is still listed as “Sally’s black son” on the roll of Shawnee migrants removed from the reservation at Wapakoneta to the Kansas frontier in 1832.  Similarly, researchers have suggested that the origin of the R-M173 Y-chromosome among Native Americans, especially Ojibwa in the Great Lakes region, comes from the large number of runaway slaves settling among them.  These are examples from Indian Country in the Midwest of what historian William Loren Katz labels “Black Indians.”

 

Some subjects of historical research can be substantiated by investigating a single archive or a few collections in related archives.  The role of Native Americans in assisting freedom seekers in the pre-Civil War Midwest is not one of those subjects.  The latter subject requires the historian to assemble an archive from a range of disparate sources.  The evidence exists, however, to suggest that it can be done.  Simple geography, a few slave narratives, Native American oral tradition, dozens of scattered documents by particularly involved and insightful whites in Indian Country, and genealogy and the DNA record substantiate Tanner’s 2002 observation about Native Americans and the Underground Railroad in the Midwest.

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173041
S.F. History Museum Highlights America’s First Immigration Restriction: The Chinese Exclusion Act of 1882

 

It may come as a surprise to many Americans to learn that the first country to have its citizens specifically excluded by the U.S. Congress was not Mexico, or a Middle East nation, but China. In 1882, Congress passed the first of several Chinese Exclusion Acts that prevented all immigration of Chinese laborers. These laws were not reversed until 1943, when China was an important ally in the war with Japan. 

 

The Chinese Historical Society of America Museum (CHSA) is currently displaying an exhibit, Chinese American: Exclusion/Inclusion, which vividly documents the changing views of Americans towards Chinese immigrants in 19th and 20th century America. The museum, in the heart of San Francisco’s Chinatown, is the oldest organization in the country dedicated to the interpretation of the cultural and political history and contributions of the Chinese in America. 

 

Tamiko Wong, the executive director of the CHSA, spoke with the History News Network about the exhibit. Before joining the CHSA, Wong was the Executive Director of the Oakland Asian Cultural Center. A graduate of U.C. Berkeley, she is a former Asian Pacific American Women’s Leadership Institute fellow. She has served on the California Commission on Asian and Pacific Islander American Affairs and is a recent graduate of Coro’s Women in Leadership program. 

 

 

Q. The current exhibit is very timely. How was it organized? 

A. Chinese American: Exclusion/Inclusion was originally curated by the New York Historical Society in 2014. Our museum provided content and paintings for the initial 2014-2015 show in New York. The NYHS originally hoped that the exhibit would travel to different parts of the country, but after one run at the Oregon Historical Society, the New York museum gave it to us in 2016. We were delighted because it is a high quality, interactive exhibition highly relevant to our story. 

 

When it arrived in San Francisco, we added more objects from our own collection and refocused it to include more of a West Coast story.  Although there are more than 5 million Chinese in America, we are still a small minority, only about 1.5% of the total population. Our hope for the exhibition is to enlighten visitors on the struggles of immigrants and contributions of Chinese in the U.S. even when citizenship was not an option. We want to contribute to the discussion of what it means to be an American.  

 

The exhibit is timely because it clearly shows a pattern we have seen throughout history, not just in the US but elsewhere too. Immigrants are brought in as a source of cheap labor, and have often faced discrimination and harsh conditions. Especially during downturns in the economy, immigrants become scapegoats who are blamed for society’s ills. Racist rhetoric becomes normalized in the media, discriminatory feelings are codified into law, and immigrants face violence and unfair treatment. For Chinese Americans in particular, we have been seen as perpetual foreigners even when many of us are citizens, have fought and died for this country, and have contributed to the building of this country, from the railroads to some of the civil liberties we currently enjoy, such as birthright citizenship. 

 

Q. What has been the reaction to the exhibit by the local Chinese community and the broader, regional public? 

A. Overall, Chinese American history is not general knowledge. Here in California, even though there are curriculum standards to teach about the Chinese Exclusion Act, I would say 90% of our new visitors know very little about Chinese American history prior to coming to the museum. So often we hear, “I had no idea about what the Chinese went through.” Those who took Asian American studies courses when they were in college may remember some aspects of our history, but the content we cover in Chinese American: Exclusion/Inclusion is a very powerful history lesson on immigration policy, discrimination, and resilience. Visitors who leave messages in our comment books express how important it is to have this history shared and how timely it is because of how immigrant and racial issues are discussed today. 

 

Q. Most of the exhibit materials were of American origin, including photographs, posters, newspaper clippings. Apparently very few written materials from Chinese immigrants (e.g. diaries, letters, books) have survived. Why is that? 

A. I am not sure if the lack of first-hand accounts from letters or diaries is true of all periods in Chinese American history. The lack of letters and diaries is true in the case of Chinese railroad workers during the late 1800s. As reported by professor Gordon Chang, head of Stanford’s Chinese Railroad Workers in North America Project, despite an extensive search, researchers have been unable to recover first-hand accounts or letters from these workers, although many could read and write Chinese.  There are theories about why this is so; for example, many family and historical documents were destroyed during the Cultural Revolution in China. In the U.S., many Chinese communities were damaged or destroyed by hostile forces in the late 1800s and early 1900s. And the 1906 San Francisco earthquake and fire destroyed most of this city’s Chinatown.   

 

Q. A large section of the exhibition is given over to the role of Chinese American women. One display noted that in 1850 only seven of San Francisco’s 4,000 Chinese residents were women. However, today in San Francisco, the city’s assessor and one of its seven supervisors are Chinese American women. What are the reasons for this dramatic evolution?

A. One current section of our museum, Towards Equality: California’s Chinese American Women, is devoted to that topic. Unfortunately, this particular display will close in October. 

 

To summarize, few Chinese women immigrated to the U.S. in the mid-1800s due to patriarchal norms in China that relegated them to their homes and reinforced their inequality. In the U.S. Chinese women were viewed as immoral, and Congress passed the 1875 Page Act that effectively stopped Chinese women from immigrating. The 1882 Chinese Exclusion Act curtailed the overall number of Chinese immigrants, but ironically created a broader opportunity for Chinese women to come as family members of merchants or American citizens. Over time, the population of Chinese women grew, especially among those born in the U.S. We find that these American-born, English speaking 2nd and 3rd generation women broke with traditional Chinese values and sought independence, mainstream acceptance, and became community activists seeking inclusion by American society. 

 

Q. This year, 2019, is also the 150th anniversary of the transcontinental railroad, in which Chinese workers played an important part. One part of the exhibit notes that a delegation to the 1969 Centennial celebration was snubbed by Nixon’s Transportation Secretary. 

A. I was proud to have been present at the transcontinental railroad’s sesquicentennial celebration, held on May 10, 2019 in Utah (also known as Spike 150). 

 

CHSA Board Emeritus, historian, and railroad worker descendant Connie Young Yu also represented CHSA at the Spike 150 celebration, and gave the opening speech. Connie’s speech paid homage to Chinese railroad workers and called for the reclaiming of the immigrants’ rightful place in history. Her presence at the podium fulfilled a mission begun in 1969 by then CHSA President Phil Choy. He was initially asked to speak at the centennial celebration, but at the last minute he was removed and replaced with actor John Wayne.  To add insult to injury, Transportation Secretary John Volpe said in his keynote address, “Who else but Americans could chisel through miles of solid granite? Who else but Americans could have laid 10 miles of track?” 

 

This was an insult to the memory of the Chinese immigrants who actually performed these feats yet could not become citizens at the time due to racist laws. 

 

Q. What are the next steps for the exhibit and the CHSA? Will this exhibit or others travel to other museums?

A. Chinese American: Exclusion/Inclusion has content that remains relevant and we plan to continue exhibiting it. We hope to add more content that helps to tell the important and timely story of how immigrants have been treated here in the U.S. from different points of view. We also hope to add features that focus on how the lives of well-known Chinese Americans have intersected with history, looking at immigration to the U.S. that has been mediated by experiences in other places such as the Philippines, Latin America, and Taiwan.  

 

The overall exhibition is large and this makes it difficult to travel. However, we have built a number of traveling exhibitions that touch upon the themes covered in it.  The most travelled exhibit to date is Remembering 1882, which focuses on the Chinese Exclusion Act, and this year, because of the 150th anniversary of the completion of the Transcontinental Railroad, our exhibit The Chinese and the Iron Road: Building the Transcontinental has been very popular. Additionally, starting in October, our exhibit Towards Equality: California’s Chinese American Women will be available to travel and we welcome inquiries from other institutions who may wish to show it. In addition, we have available another display, Detained at Liberty’s Door, which traces the formation of the Angel Island Immigration Station in San Francisco Bay and highlights the inspiring story of Mrs. Lee Yoke Suey, the wife of a native-born citizen who was detained for more than 15 months. 

 

Note: to see excerpts from the exhibit and learn more about the Chinese Historical Society of America, go to www.chsa.org. You may contact the museum staff at info@chsa.org.    

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173031
Expansion and Motivation: Frontiers and Borders in the Past and Present of the United States and Russia

 

Three new books push us to consider and compare the role of the frontier, or sometimes borders, in the past and present of Russia and the United States.  Greg Grandin, The End of the Myth: From the Frontier to the Border Wall in the Mind of America (Henry Holt, 2019); and David McCullough, The Pioneers: The Heroic Story of the Settlers Who Brought the American Ideal West (Simon and Schuster, 2019) take virtually diametrically opposed stances on what the frontier and its settlement have meant to America.  Grandin argues that expansion provided a “safety valve,” although it did not work well, for release of tension produced by internal difficulties.  In his book, expansion across North America and in foreign wars, at a great cost in blood and abandonment of ideals, is at the heart of the American experience.  McCullough offers fulsome praise for expansion in the case of Ohio, where he finds that true American ideals were put into practice. Angela Stent, Putin’s World: Russia Against the West and with the Rest (Hachette, 2019) discusses Russia’s past expansion and its relationship with “near abroad” neighbors; she finds, no surprise, that the issue bears gravely on the question of what Russia wants today.

 

The history of both Russia and America can be written around the issue of expansion and its ramifications.  Both states started small and had several factors in common in their growth, for instance the quest for furs, not mentioned by Stent and underemphasized by Grandin.  McCullough describes Ohio as a land of great resources, but his main interest is in the high purpose of white expansion into the territory.  In short, the proffered motives and results of expansion in these three books lead in profoundly different directions.

 

Grandin’s chief villain is Andrew Jackson (Donald Trump’s favorite president), who massacred Creek Indians in 1814, oversaw Indian removals from the East in 1830, and owned slaves.  For Jackson and many others, the West–wherever it happened to be–provided relief for the whole country.  Expanding westward focused people’s attention on the frontier and provided distraction from social and economic problems back east.  Americans could at least dream about going west to start a new life.  The West was always touted as a site of freedom.  But along the way, U.S. troops committed many crimes, among them rape, murder, and destruction of churches in the Mexican War of the 1840s.  Such acts occurred again in the following decades, especially in our war in the Philippines, 1899-1902, but also in Nicaragua in the 1920s and in other countries.

 

Although many of these stories are well known, Grandin weaves them together in moving and depressing fashion. He also ties the wars on the frontier and abroad to wars at home, above all the Civil War but also “race war” and violent repression of socialists and labor unions.  Together, these fights, which used up money and energy, help explain the absence here of “the social-democratization of European politics . . . including the rights to welfare, education, health care, and pensions” (95).  Attention to people’s needs at home lost out to the settlement, but even more to the idea, of the frontier.  By the 1840s, the U.S. was “becoming inured to its [own] brutality and accustomed to a unique prerogative: its ability to organize politics around the promise of constant, endless expansion” (94).  

 

Grandin portrays white movement west as a pattern of broken treaties and ethnic murder or cleansing.  Andrew Jackson is little more than a crude butcher.  Close behind in villainy is Thomas Jefferson, who also gushed about “the West” but never traveled farther toward it than the Shenandoah Valley. Jefferson justified American westward expansion as the spread of goodness and light.  However, he wrote in 1813 that “all our labors for the salvation of these unfortunate [Indian] people have failed” because of England’s support for them.  It would be “preferable” to “cultivate their love,” he said of the indigenous folk, but “fear” would also work.  “We have only to shut our hand to crush them” (43).  Thus rapacious frontiersmen strode forth to make a beautiful new world for themselves in the wilderness, relying all the way on mass violence.  Grandin’s view of American expansion is relentlessly grim.

 

Happier are the believers in McCullough’s “heroes” in the settlement of Ohio.  The dust jacket calls them “dauntless pioneers who overcame incredible hardships to build a community based on ideals that would come to define our country.” McCullough lauds the selfless careers of Manasseh Cutler, a minister and master of all sciences, and his son Ephraim. They played key roles in leading white settlers into Ohio in the late eighteenth and early nineteenth centuries and in the state’s early law-making.  They believed in and practiced democracy and insisted successfully that slavery would not be introduced beyond the Ohio River.  

 

Manasseh Cutler helped draft key provisions of the Northwest Ordinance (1787). Filled with stirring words, it insisted on freedom of religion and “morality and knowledge” spread by “schools and the means of education,” with all overseen by “good government.”  The Cutlers and friends also promoted the “New England system,” in which “the establishment of settlements [would be] done by legal process, and lands of the natives to remain theirs until purchased from them” (7). Suppose they didn’t want to sell? That possibility was not explored in the Northwest Ordinance.  Indian rights proved not to be a problem because beyond the Ohio River lay “howling wilderness” (7; McCullough is quoting a contemporary source).

 

The Treaty of Greenville (1795) opened the way “to clear and cultivate lands that had never known the axe and the plow” (118, quoting George W. Knepper).  Here is the old “empty land” (terra nullius) idea, although Indians had cultivated the earth in various parts of Ohio.  “Empty” meant that civilized people had the right to take it.  In McCullough’s narrative, Indians were obstacles to the spread of American greatness.  He rhapsodizes that, “West was opportunity.  West was the future.”  Achievements in Ohio, McCullough writes, “would one day be known as the American way of life” (13).

 

McCullough abandons the pretense of a dispassionate history in his subtitle. His book is a paean to American goodness; it ends with the idea that the Cutlers et al. overcame the “adversities they faced, propelled as they were by high, worthy purpose.  They accomplished what they had set out to do not for money” or fortune or fame, but to “advance the quality and opportunities of life–to propel as best they could the American ideals” (258). 

 

Angela Stent tries to achieve some detachment in outlining Russia’s past concern with borders and the country’s goals and fears at present.  She occasionally grants that Russians might have a point of view worth mentioning about the near abroad.  She notes that George W. Bush’s “Freedom Agenda” advocated regime change in Georgia and Ukraine.  Mistakes and insults to Russia from the American side are introduced; for example, U.S. officials misled Yeltsin about the EU’s extension to Eastern European countries. Russian Prime Minister Yevgeny Primakov learned about NATO’s bombing campaign against Serbia in 1999 only when he was in mid-air en route to Washington to discuss a solution to genocide in the former Yugoslavia.

 

NATO “made a major mistake” in 2008 when it “mishandled” enlargement to ultimately take in Poland and the Baltic States (129). But for Stent, the major problem is that “NATO” did not think through the implications of a military pact with those countries. Are the Baltic states “defensible,” she asks (127)? (A far better question: what would Russia gain by conquering those countries, even if no shots were fired? More sand and gravel?) Supposedly “the Russians” are deeply chagrined by the loss of their empire, and they want it back. Looking at events in Georgia in 2008 and in Ukraine in 2013-14, Stent asks, “What is it that propels this Russian drive for expansion?” (17).

 

The “drive” is connected to the Russian people. Stent mentions that many foreigners who went to Russia for the World Cup matches in 2018 brought with them “stereotypes about unfriendly Russians living in a backward country.”  However, at least some visitors discovered “normal” people there, ones who could smile and party.  Then we learn that they can no longer celebrate in the streets (2).  

 

Stent announces that, “To understand Putin’s world, one has to start with the history and geography—and, yes, culture, that shaped it.”  American and British readers apparently must be told that Russia possesses a culture and exhorted to think of the country’s people as “normal.” 

 

Of course Stent discerns an “iron hand” ruling the country under both tsars and Soviets (22), although in 2005 “the government was forced to back down” after protests by pensioners about “reforms” in their payments (41).  Whatever the “hand” might be, Stent insists that the U.S. can engage Russia where that country has a national interest.

 

Alone among the three authors, McCullough suggests that expansion was based on high ideals. But Americans apparently have the right to bring civilization not only to those Indians or Filipinos we have failed to kill, but also to the Vietnamese, Iraqis, Afghans, and so on. If National Security Adviser John Bolton seems like a rabid dog in his eagerness for more war, it is well to remember that he is part of a long tradition that sees American greatness as a justification for imposing our will (there should be no talk of ideals) on other peoples.

 

Grandin’s book, more solidly argued, will sadden some people–though surely far fewer will read it than will pick up McCullough’s. However, Grandin conjures up a social democratic paradise in Europe that does not exist. Or, if it does, it is limited to a few northern European countries that today evince a certain distrust of democracy and ugly ethnic prejudices. In his eagerness to criticize domestic life in the U.S., Grandin sometimes goes too far. He mentions lynching repeatedly, for example as part of the “relentless race terror African Americans faced since the end of Reconstruction” (130). Yet much recent work on lynching shows that it was erratic and fell steadily after 1892, with some short upward movements. See studies by, for example, Michael Pfeifer, Fitzhugh Brundage, Stuart Tolnay and E. M. Beck, and myself. Lynching was always horrible, but in my view it cannot be described as “relentless” or as a system. Moreover, Grandin is not interested in the rise of land ownership or a middle class among African Americans, despite the great odds against them. (Yes, they lost great amounts of that land later.) Nevertheless, his dark vision explains much about American policy at home and abroad.

 

In Stent’s Russia, expansion and the “iron hand” go together. But what does that phrase mean? Did people not live and love, at least a little, on their own terms? I’ve been to a lot of Russian parties, starting in 1978, where talk and vodka flowed together, and I would say yes. The concept of “lived socialism” (e.g. Mary Fulbrook, Wendy Goldman) should be considered by the Washington circle. And, if there has been an “iron hand,” how can anyone explain the Soviet victory in World War II (no, the NKVD did not drive troops into battle; see Roger Reese), mass mourning at Stalin’s death, his popularity in many recent polls, and Putin’s own popularity?

 

Russians are always subjects, never actors, in accounts like Stent’s. Denigrating Russians is an old tactic. Stent cites George Kennan in Russia and the West under Lenin and Stalin (1961), where he mentioned the then-accepted estimate of twenty million Soviet dead in World War II. He added, “But what are people, in the philosophy of those who do not recognize the existence of the soul?” (Russia and the West, 275). Kennan liked the work of the Marquis de Custine published in 1841, also cited by Stent—but without a key passage. Custine wrote that “real barbarism” characterized Russia; the inhabitants were “only bodies without souls.” In an “empire of fear,” foreigners are “astonished by the love of these people for slavery.” The trail then leads back to another travel account that Kennan, and surely Stent, knew, Sigismund von Herberstein’s bestseller on Muscovy. His book was first published in Latin in 1549, then translated into multiple European languages. Herberstein found that, “It is debatable whether such a people must have such oppressive rulers or whether the oppressive rulers have made the people so stupid.”

 

During the Cold War, high American officials loved Custine. Zbigniew Brzezinski, for instance, wrote in 1981 that, “No Sovietologist has yet improved on de Custine's insights into the Russian character and the Byzantine nature of the Russian political system.” None of Custine’s leading American devotees disavowed his or Herberstein's final judgments on Russians.

There is one nearly useless map in Stent’s book, a cluttered view of Eurasia crammed onto a single page.  She does not provide a map showing the expansion of NATO over time up to Russia’s borders.  Would that expansion not have made normal people in Russia quite nervous?

 

Stent writes that, “There is no precedent in Russian history for accepting the loss of territory, only for the expansion of it” (17).  Then at the bottom of the same page, she insists that since the fifteenth century, Russia “has constantly alternated between territorial expansion and retreat.” She might have considered that from the 1760s on, Russia/USSR has “retreated” from France, Manchuria, Austria, Hungary (twice), part of Finland, Poland, Czechoslovakia, Romania, Bulgaria, northern Iran, the Baltic states, and Germany (twice or more, depending how you count matters).  

 

Yet even a cursory look at Russian expansion shows that much of it was defensive: the Tatars (Mongols) attacked deep into Russian settlements in the south and east every summer in the fifteenth and sixteenth centuries. These raids were not the response of scattered tribes who often despised each other; they were military expeditions organized by the heirs of the original Mongol Horde that had conquered Russia in the thirteenth century. In response to the continuing attacks, which reached Moscow as late as 1571, the Muscovite government extended a string of forts (the zaseka) further and further south. The culmination of this drive was the conquest of Crimea from a Tatar remnant by a Russian imperial army in 1783.

 

Catherine II then reportedly said, “That which stops growing begins to rot.  I have to expand my borders in order to keep my country secure” (17, no source given). Shades of Grandin!  But to suggest that internal security was the motivation for Russian expansion is to miss essential parts of the country’s history.

 

In the nineteenth century, Russian expansion was typical of the lust for more territory among all major European powers, the U.S., and the Japanese. In 1944-45, the Red Army marched into Eastern Europe to push the Germans out, with full Western approval.

 

Nothing in America’s past resembles the recurring invasions of Russia by foreign powers, from the Mongols in the thirteenth century to the Germans in 1941.  Sometimes, as in the early seventeenth century, assaults came simultaneously from several directions.  If Russians are sketchy on the details of these campaigns or exalt their own victories, they still base their outlook today on the knowledge that these incursions happened.  Stent is not much interested in this past.

 

 More than five years out from the Ukrainian crisis of spring 2014, little indicates that Russia covets more territory anywhere.  Annexation of the Crimea involved a region that was never Ukrainian in any profound sense.  The war with Georgia in 2008 also resulted in Russia’s absorbing new land, but areas not populated by Georgians.  Seeing some “drive” for endless conquests in these affairs, however much they broke international law, is gross speculation and is not based on a serious examination of Russian history.

 

Stent has a lot of valuable detail and some useful insights into Russian concerns. But what rational interests would a nation of dead souls have?  Her book becomes at once more suspect and more valuable when read together with Grandin’s examination of American frontiers.  In turn, Grandin will infuriate or dishearten fans of David McCullough’s glorious American past, which in its argument could serve as a foil to Russia’s supposedly limitless, ugly ambitions.  Could we at some point adduce Britain in the nineteenth century or Germany in the twentieth?  Could we just watch Game of Thrones?

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173042
The Apolitical Antidote to Unjust Politics

 

The 2020 election cycle has already produced numerous solutions for the woes plaguing America and the world. Candidates in the recent Democratic debate explained how problems related to education, healthcare, racism, income inequality, and immigration can be solved. Most of their plans required more laws, bigger bureaucracies, and fully integrating more people into the American political system.

 

On display recently in conservative circles is a different kind of obsession with politics, one that focuses on prioritizing national interests and cultivating an intense patriotism for the American nation-state. Sometimes conservatives advocate a restricted role for government or a restricted citizenry, but, like their political opponents, they usually still see Americans more engaged with a more vigorous political system as the right way forward.

 

The Politics of Utopianism

 

American policy in the last seventy years has been dominated by this kind of thinking. Americans started a “War on Poverty” in the 1960s, which was followed by the “War on Drugs” in the 70s. George W. Bush’s “No Child Left Behind” was a battle to give “every single child” a “first class education.” RAND recently released a study about how the right kinds of laws and bureaucracies can totally eliminate fatalities on American roads. Should it be called a “War on Traffic”?

 

Foreign policy has shared the same broad, vague aspirations with regard to real wars. The Truman Doctrine committed America to support all “the free peoples of the world.” President Kennedy clarified this in 1961, saying that America would effectively respond to “any potential aggressor contemplating an attack on any part of the free world with any kind of weapons.” Johnson applied this logic in Vietnam because, in his words, “A threat to any nation in that region is a threat to all, and a threat to us.” The same logic was applied when the Cold War transitioned to a “War on Terror” after the 11 September attacks. Similar to his predecessors, Bush pledged to hunt down any and all terrorists and eliminate their safe havens. America would end world terror. 

 

The idea behind all of these is that if we just have more politics, or more of the right politics, we’ll fix everything. We’ll find perfection. Rarely does anyone actually say that, but the logic behind the objectives implies the possibility of perfection. Francis Fukuyama probably summed it up best in his 1989 article crowing over the victory of democratic capitalism in the Cold War. The liberal order, led by its chief missionary America, had saved the world and if only it could permeate to the ends of the earth, mankind would see an “end of history.”

 

But history didn’t end. Terror, poverty, and drugs remain with us or have increased. So what is wrong with politics? And why is it getting nastier despite such lofty aspirations? 

 

Like many average citizens after 11 September, I wanted to do my part for our republic. As an aspiring military historian, joining the army was a suitable detour after my PhD. All of my life I had followed politics, I had degrees in politics, and I intended to teach the history of war and politics. In Afghanistan, however, I reached the limit of politics when I saw how imperfectly the liberal democratic nation-state was being applied to a people with a vastly different culture and history than our own. Bitter and confused about my own republic and its aspirations, I returned to Plato and Augustine with fresh eyes. They argued that unjust politics needs redemption through citizens’ apolitics. 

 

A Brief History of Apolitics

In ancient Greece, the face-to-face societies of city-states were bound together by intense loyalties. A city-state saw itself as the perfection of the political ideal. Tribes, gods, rituals, and public ceremonies were all centered on the cohesion of this unit, called a polis. 

 

The obvious question is what happens when fundamentally different conceptions of the polis clash. This occurred when democratic Athens and militaristic Sparta collided during the Peloponnesian War. The two poleis dragged most of the Greek world into a devastating, three-decade-long war that ruined Greece. Despite their defeat, bitter Athenian citizens were still blindly loyal to the idea of the polis. When a brilliant teacher, Socrates, challenged this order and pointed out the flaws in Athens’ democracy, the Athenians tried and executed him.

 

The war and then the execution of Socrates revealed the injustices of the polis, but the unexpected occurred in the aftermath. Plato, a disciple of Socrates, transferred the concept of the polis to the soul. How do you live in a world where the political order has collapsed and injustice reigns? Plato argued that you seek wisdom and courageously apply goodness and justice in your own life, regardless of what the politicians of your day are doing. This apolitical citizen pursues a well-ordered soul based on the perfection found in the realm of ideas, and the citizen also knows perfection cannot be found on earth.

 

Plato’s apolitism created a tension in political thought and action. Henceforth the virtuous citizen should pursue goodness and order his soul, and only then should he or she try to apply it in this temporal world. Perfection could be imagined but never realized. Throughout history apolitics guided the greatest minds living in the most unjust times. 

 

Plato was followed by the Cynics, the most famous among them being Diogenes of Sinope. After the polis collapsed, the Macedonians conquered Greece and created a new world order. Greece was no longer free and the average citizen was powerless. Diogenes was the first to apply Plato to a broader political reality. He rejected the old polis, insisting that he was a citizen of the cosmopolis. He also eschewed the tyrannical politics of great conquerors like Alexander the Great. One day Alexander met Diogenes. Standing over him, Alexander offered Diogenes whatever he desired. The old philosopher rejected Alexander’s power by mockingly requesting “that you step out of my sunlight.”

 

By the 1st century BC, Rome picked up Alexander’s imperial idea and gained dominion over the Mediterranean. Roman power seemed ultimate and eternal, but Jesus of Nazareth challenged the emperor obliquely by implying that he instead was the anointed son of God and the true king; however, his kingdom was “not of this world.” Jesus’ followers, like Diogenes, would claim that citizenship was broader and deeper than temporal politics. Christians were “foreigners” and “exiles” in the world. Pontius Pilate, the representative of Roman power, could not understand this, so he approved Jesus’ execution as the Athenians had done with Socrates.

 

Jesus’ apolitical legacy remained, finding a particular expression in monastic communities, where men and women would withdraw in isolation or small communities in order to contemplate God and pursue holiness. The clearest articulation of Christian apolitism came from Augustine of Hippo, who described two realms, the City of God and the City of Man. The earthly city was fleeting, beset by failures and injustices. Christians should not abandon this city, but can only better it by keeping their eyes on the perfect, heavenly city. Augustine’s understanding of the tension between political and apolitical has remained the clearest synthesis of classical and Christian apolitics.

 

Apolitism Today

 

The classical and Christian orders have long since passed, but the dangers of utopianism remain. America needs a new apolitics. As this brief history illustrates, apolitics is not the hedonistic withdrawal into self-interested behavior. Neither is it apathy or laziness. Historically it has been the conscious rejection of perfection in this world. 

 

The moment a political society believes it can achieve perfection, it hardwires itself for the worst sorts of injustices. The most perfectly conceived states that pledged to purify politics were the fascist and communist experiments of the 20th century. Philosopher Eric Voegelin, who fled the Gestapo and escaped to the United States, noted the elimination of apolitics in these societies. In his later years he warned that places like the United States were taking a different path toward the same utopian dead end. The rhetoric in contemporary politics confirms his concerns.

So should we care about American politics? Yes, but not to the extent that it distracts us from loving our neighbors and treating people—all people—decently. It should also never be severed from a contemplation of the moral cosmos. Only the citizen with the well-ordered soul can begin to understand what a well-ordered society or state should look like and what its limitations should be.

 

There is no simple, five-point action plan for apolitics. The good citizen is left with a tension between the perfect order of ideas and imperfect temporal politics. This tension balances what we seek from human political order and keeps us from descending into tyranny. Without the tension—without the apolitical—no realm of perfect ideas or conception of love can exist. All that remains is the power to be as unjust, imperfect, and unloving as we can imagine.

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173033
Eastern European Historian Emanuela Grama on Romania’s Heritage

 

Emanuela Grama is an Associate Professor in the Department of History at Carnegie Mellon University. She received her PhD in Anthropology and History from the University of Michigan, Ann Arbor in 2010. Her first book, Socialist Heritage: The Politics of Past and Place in Romania, is currently in production with Indiana University Press. Visit her website or follow her on Twitter @emanuela_grama.

 

What books are you reading now?

 

I’m currently re-reading the Transylvanian Trilogy by Miklos Banffy. This is a 1,400-page novel about the world of the Transylvanian Hungarian aristocracy at the end of the 19th century and the beginning of the 20th. Banffy wrote this novel after the end of the Austro-Hungarian empire, musing both elegiacally and ironically about his Hungarian compatriots who could not see “the writing on the wall”—in this particular case, the disintegration of the empire and of the social and political order it represented.

 

I am also reading Holly Case’s The Age of Questions, a brilliant intellectual history of the many “questions” that emerged in the 19th century and the ways in which political actors at that time tried to make sense of the radical changes brought about by the industrial revolution, the rise of capitalism, and the modern age.

 

In general, I am the type of person who reads multiple books at the same time, according to how I feel on a particular evening. Right now, I’m moving between several volumes, including Toni Morrison’s Song of Solomon, Valeria Luiselli’s Lost Children Archive, an edited volume entitled How We Read (@punctumbooks), and Jill Lepore’s These Truths.

 

What is your favorite history book?

 

Obviously, my answer to this question would probably change from one month to the next, depending on what I am reading at the time. One of the books I’ve read recently and loved—as in, I-could-not-put-it-down love—is East West Street by Philippe Sands. It crosses genres, being at the same time a family memoir, a love story, and a historical analysis of the intellectual trajectories and biographies of the legal scholars who coined the concepts of “genocide” and “crimes against humanity,” each echoing a particular understanding of the relationship between individual, state, and society.

 

Why did you choose history as your career?

 

Actually, I could say that history chose me. I am an anthro-historian working in a history department and I teach courses in European history as well as in cultural anthropology. I was privileged to be a graduate student of and receive my PhD from the Doctoral Program in Anthropology and History of the University of Michigan. As part of this program, I took a wide range of courses, from socio-cultural and linguistic anthropology to historical methods and theory and the historiography of modern European and Eastern European history. During grad school, I learned how to think as an anthropologist when doing archival research, and as a historian while in the field. Specifically, I constantly tried to consider the historical and political conditions under which an archival fonds was constituted, organized, and made available to researchers—and even sometimes, as I’ve learned during my recent research in the National Archives in Bucharest, Romania, re-classified for political reasons. (In Romanian, there is even a special word for this process, re-secretizare, meant to signal that particular files and archival fonds are being re-classified, often at the request of specific political actors in the government.)

 

In my work, I continue to draw on insights from both anthropology and history. For instance, I recently published an article about some of the art historians and collectors who worked with the communist government in post-1945 Romania to reorganize the nationalized art collections and to form a socialist network of art museums. I drew on a wide range of primary sources, such as memos of the meetings, donation deeds, inventories of the collections, communist party meetings, etc., and I relied on anthropological theories about property and value to look at these sources in a new light. Specifically, I used Weiner’s brilliant concept of “keeping-while-giving” to argue that these collectors and art experts became particular “arbiters of value”, straddling two distinct political and social orders: the interwar and the early communist periods. 

 

I employ a similar strategy in my forthcoming book, Socialist Heritage: The Politics of Past and Place in Romania (Indiana UP, 2019). The book is a social and political history of one place: the historic district of the Old Town in Bucharest, Romania’s capital. I approach the Old Town as a window onto understanding broader political and cultural changes during the communist and postcommunist periods. This district had historically been a place of transactions and transgressions, a place that defied easy categorization. When the communist officials wanted to transform Bucharest into a modern socialist capital, they initially wanted to demolish the old houses in the district. The architects hated its aesthetic heterogeneity, its narrow streets, its old houses. They wanted it gone. But in the 1950s, some archaeologists found the ruins of a medieval palace, and used them to fight back against the demolition plans. In the end, paradoxically, the Old Town shifted from being an urban eyesore to being portrayed as a key heritage site of the socialist state and of the Romanian nation. After the end of communism, however, this heritage turned into a burden, a symbol of a time that everyone wanted to forget.

 

I also explore how the district became once again a political resource for the postcommunist elites, who used it to naturalize a more exclusionary concept of citizenship, one that depended on property ownership. They did so by promoting the Old Town as a symbol of Bucharest’s European history and by attempting to alter its social and architectural fabric—evicting the homeless, changing the utilities infrastructure, adding new pavement to the narrow streets. In parallel, however, they refused to assume responsibility for the state-owned old buildings, many of them in decrepit condition, and implicitly for the state tenants’ precarious situation. The case of the Old Town reveals how these new elites managed to deny their own role in the increasing economic and political volatility of postcommunist Romania, and instead place this responsibility exclusively with the poor. 

 

What qualities do you need to be a historian?

 

This summer, I spent two months conducting research in two archives in Bucharest, Romania. Over coffee with an old friend, I was very excited to tell her about some of the documents that I found. But my mood suddenly changed when she looked at me unfazed and asked: do you really like doing this? Her question took me by surprise. Of course, I should have realized that some people might view the act of reading dusty old documents as a waste of time. My answer was yes, I do like it, but I did not sound too enthusiastic. I thought that a specific example would be more persuasive, so I told her how once I found a draft of a love letter hidden among some boring bureaucratic forms. 

 

The letter was written on the back of some typed documents, a long inventory of art objects in a museum collection. Who knows how that letter ended up in the archives? Maybe the writer did not have access to blank paper, and he wrote that letter on some pages that he did not think were of much value—but then he forgot to take out those pages from the file. I will probably not be able to use that love letter as a source for an article or book chapter, but I am thrilled that something like this could happen in an archive; to stumble upon a trace of an anonymous bureaucrat’s intimate anxieties about a seemingly unrequited love, and to feel empathy for someone living in a different time and place (in this case, it was Romania in the aftermath of the second world war, as the letter was dated January 1, 1945). My friend did not seem too persuaded, but she nodded diplomatically when I told her the story.

 

What I love most about doing archival work is the potential of creating a story out of disparate pieces. But the road from posing a research question and finding primary sources to the end product, often a book, is a long and twisted one. To succeed as a historian, to walk to the end of that road, you need to be patient, to be hard-working, to embrace failure, chance, and serendipity, to be open-minded, to be willing to revise your thinking and arguments along the way, and especially to be persistent. 

 

It is a long road, but it need not be a solitary one. We need to find ways to make our tentative arguments heard by sharing them with friends and colleagues; to go back to reading good fiction when we feel that our writing has become stale; and especially to find a community of kindred spirits and mutual support—whether in our own department, in academic circles, or on #AcademicTwitter. We need to actively search for empathetic peers and generous friends who want to engage in conversation and allow us to talk about our work.

 

Obviously, to find those peers and especially to keep them in your intellectual life means that we also need to be equally generous with our time and ideas. And here I would draw on my experience conducting interviews. Listening, truly listening to someone, takes tremendous effort and energy. (During my fieldwork, after an hour and a half of conversation with someone, when I was trying to follow every word and think about each possible significance of every utterance, I would become so tired that I would often need to take a nap.) I have tried to apply this active listening during conversations with my peers and my students. I don’t always succeed, but when that happens I feel I can truly engage with someone’s ideas in a fresh and generous way.

 

Ultimately, in my view, the best historian is a kind of magician: one who can transform a puzzle of disparate dusty documents into a persuasive analysis and an electric narrative. And whenever a friend asks whether they like what they are doing, this historian will answer: “Yes, I love it! Let me tell you about this time when…” And then a true, genuine, and generous conversation will follow.

 

Who was your favorite history teacher?

 

I was privileged to learn a lot from Katherine Verdery and Gillian Feeley-Harnik, my PhD co-advisers, both brilliant anthropologists who seriously engaged with historical analysis in their work and conducted extensive archival research in addition to fieldwork. Their scholarship represents a model of astute analysis and intellectual rigor. 

 

What is your most memorable or rewarding teaching experience?

 

I once had a student who was completely silent in class and even looked a bit aloof. It was the beginning of the spring semester, and this was a seminar of around 15 students. I wanted to understand why: did she have a schedule so busy that she could not prepare for this class, or did she find the material plainly boring? I invited her to schedule an appointment during office hours. During that meeting, she admitted that she could not follow the class conversation and always felt behind. I asked her to describe how she studied for the class. She took out one of the books assigned for the course, and I took a look. Every page was highlighted because, as she put it, she felt that everything was important. I realized how much work she had been putting into this course, and also how lost she must have felt dealing with all that information. I told her that she needed to learn how to skim, a skill that is so important in college. 

 

Afterwards, we met almost every week that semester and discussed different ways she could go through a book without paying attention to every word. I taught her tricks that I myself learned in graduate school: read the introduction and conclusion first, then the introduction and conclusion of each chapter, and skim through the pages until you get to a part that catches your attention, then read only that section closely. We alternated between different forms of reading, ranging from quickly skimming some pages to extract a single idea to closely reading a particularly beautiful paragraph or a persuasive analysis, and stopping there.

 

The student became more and more confident, and she began speaking in class. She was soft-spoken, but her comments were pointed and persuasive, revealing her originality of thought and attention to detail. Her writing improved significantly, and she passed the course with a good grade. At the beginning of the next academic year, I ran into her on campus. She was with her mother and her grandmother. I then learned that she was a first-generation college student, something she had not told me during our meetings. She was elated to be done with all of the required courses and soon to be the first in her family to graduate from college. 

 

From that story I learned that I should not assume anything about students’ silences; many other issues may hide behind their unwillingness to speak in class. Since then, I have encouraged every student in my courses to come talk to me at the beginning of the semester. Such individual meetings have helped me learn more about each student and forge a more nuanced interpersonal connection, one that would otherwise be difficult to develop in the classroom.

 

What are your hopes for history as a discipline?

 

That more archival funds will be made available to researchers and that more positions will become available for junior scholars currently on the job market. And that more and more students will choose to enroll in history courses and pursue history majors because they will realize how much those courses could contribute to their becoming informed citizens, confident in their beliefs and less prone to be influenced by political manipulation.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I’m not a collector but I love to visit second-hand bookshops and museums.

 

How has the study of history changed in the course of your career?

 

The field of Eastern European history has changed both thematically and quantitatively in the last twenty years. This is a byproduct of two major changes: 1) the archives of the former communist governments have mostly been opened to research, and 2) a new generation of scholars in the region has begun a systematic study of this treasure trove of newly available primary sources. There is a much more intense conversation and collaboration between scholars living and working in the region and historians living abroad, as I can see at the conferences I attend regularly (especially ASEEES, the Association for Slavic, East European, and Eurasian Studies). 

 

Also, up to the mid-1990s, the field of post-1945 Eastern European history remained heavily influenced by topics and assumptions that were themselves byproducts of the Cold War: a penchant for political history and the assumption of a clear division between East and West, one that paid little attention to systematic exchanges among European countries within and outside the communist bloc. Things have changed dramatically. Historians have recently shown that such transfers between West and East were part and parcel of the politics of the Cold War, and not just simple "deviations" from the ideological norm. The politics of urban planning and the relationship between place-making and state-making under communism is another subfield that is rapidly expanding.

 

What is your favorite history-related saying? Have you come up with your own?

 

I don’t have a history-related saying, but one of my all-time favorites, one that I keep repeating to myself when I get stuck, is “feather by feather.” It is inspired by “Bird by Bird,” the famous book by Anne Lamott about the trials and tribulations of writing. She starts with the story of her father telling her brother how to begin and stick with a school project: by drawing one bird at a time and not panicking at the magnitude of the whole project. When I was writing my book, by the end I was so tired that I thought I could not accomplish even one “bird” at a time—the equivalent of a few pages. So I deconstructed that “bird” into a multitude of “feathers,” that is, individual words. It felt easier to think in terms of word counts instead of pages. But, if I think about it, “feather by feather” is also fundamentally historical. It speaks of gradual change, and thus implicitly of history as a process. Almost anything, from political institutions to ideas, concepts, and attitudes, needs time to emerge, to mature, and to flourish.  

 

What are you doing next?

 

I am currently writing an article based on my recent archival research, focusing on the confiscation of the property of the German ethnic minority in post-1945 Romania and on the subsequent negotiations that the Lutheran Church initiated with the communist state to regain some of these assets.

 

I have also started working on my next book, which draws on archival and ethnographic research to explore how political regimes (communist and postcommunist) in Romania used property confiscation or restitution to negotiate their relationship with Transylvania’s ethnic Germans and Hungarians.  

 

I am also preparing for the fall semester, when I will be teaching a graduate seminar, Methods and Theory in Historical Research, and an undergraduate course about immigration. 

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173038 https://historynewsnetwork.org/article/173038 0
Celebrating the 200th Birthday of Prince Albert

 

Two hundred years ago, a German petty princeling was born. Little Albrecht came into the world in what is now the Bavarian hinterland, in a summer residence that had recently been revamped in the Gothic Revival style. A second son, he missed out on being heir to a diminutive duchy about the size of the Isle of Wight. 

 

In 1819, “Germany” was a confederation of sovereign states, most of them run by royal families. Thus the birth of yet another royal can hardly have seemed all that historically significant. But this year historians, tourists, and a fair few locals will be celebrating at Schloss Rosenau. In Newcastle, meanwhile, visitors to the Laing Art Gallery can view watercolours of this beautiful, and slightly embellished, “castle” — on loan from the Royal Collection. 

 

Why the celebration? Because aristocratic Albrecht grew up to become Albert, husband of Queen Victoria of England.

 

In the age of Victoria and Albert, the Prince’s German-ness was a PR problem. Today, it is welcome soft power in politically hard times. In May, on what had been planned as a post-Brexit visit, Prince Charles was sure to mention Albert’s legacy. Over a century before, Queen Victoria had tried to tone down Albert’s Teutonic traits: she worried about British prejudice. Germans were seen as serious, pedantic, and in search of power abroad. 

 

A biography of the Prince commissioned by the Queen portrays him as an English wit to avoid such stereotypes. In his student days he was apparently good at comic impressions, which “a University, especially a German University, with the oddly accentuated ways of its professors, can never fail to supply.” Behind the managed public image, though, in private Victoria and Albert wrote to each other in earnest — and in German. 

 

Despite the Germanophobia and Germanophilia that have surrounded Albert, it’s debatable how much he embodied the Germany of two centuries ago. Born into a culture famous for its poets and philosophers, Albert became Chancellor of Cambridge once in Britain. He’s credited with bringing the university into competition with the continent’s most reputable establishments — in Germany. But Albert wasn’t a bastion of German Bildung, or education, himself. In Bonn he followed a syllabus standard for his noble sort, rather than embracing any republic of letters. He did, however, attend lectures by a luminary of German Romanticism: August Schlegel. Unlike pompous princely peers, he didn’t insist on a reserved seat or on being addressed formally at the start of the hour. Though perhaps acting as one of the people allowed Albert to skive off without much notice. 

 

If Albert wasn’t a token German intellectual, neither Dichter nor Denker, in domestic life he was more typically Germanic. His and Victoria’s palaces were decorated with Christmas trees — although these had first been introduced by earlier German British royals, those Georgian Hanoverians — and guarded by dachshunds called Waldmann and Waldina. And Albert loved German Lieder, or art songs: he would play on a Buckingham Palace piano and sing as Victoria and even Mendelssohn joined in. 

 

Princes like Albert were the period’s major German export. Germans had long set up royal houses abroad, and the provincial German aristocracy already married internationally; Victoria and Albert were cousins, after all. But in the early nineteenth century the diminutive duchy of Saxe-Coburg-Saalfeld produced two foreign monarchies that still exist today: in Belgium, Albert’s uncle took up the throne; in Britain, Albert gave his name to Victoria’s line of succession. It was in this way that the tiny German territories expanded, in what was otherwise an era of global empires. 

 

Albert’s father, Ernest, had been given an additional petite principality at the Congress of Vienna in 1815, yet he found “Lichtenberg” to be a bit of a burden. So he sold it to the Prussians without consulting either the people of the place or his own nominal parliament in Coburg — a move Donald Trump would surely admire. While Ernest focused on ruling his lands closer to home (in reactionary fashion), his brother, son, and nephews were shipped away in the name of the family, just like their becrowned German colleagues. The strategy that worked in Britain and Belgium was rolled out less successfully in other countries, however. Prince Otto, second in line to the Bavarian throne, was made King of Greece — until he was deposed. Maximilian I of Mexico arrived as the younger brother of the Habsburg Emperor, and was soon executed.

 

A motley collection of mini-monarchies is hardly our stock image of Germany. But it defined most of German history, and Britain is part of this legacy. What’s more, the German royals of those times — such as Albert and his relatives — have survived longer abroad than in Germany, where they abdicated en masse after the First World War. Gloria von Thurn und Taxis, of a House made princely because it ran an efficient postal system, has reigned only over the parties of 1980s high society — and has now switched her allegiance to high priests, and reportedly Steve Bannon.

 

There’s an exception to this German royal rule of thumb. Last month, Liechtenstein celebrated its national day as a German-speaking monarchy, a micro-state even smaller than the Isle of Wight. Exceptional in the present, in the past Liechtenstein was interchangeable with the likes of Saxe-Coburg-Saalfeld, or for that matter Lichtenberg. The story goes that Prince Philip once painted watercolours on Liechtenstein’s mountainside. True or not, imagine the scene for a moment: a small, pretty, provincial Schloss. It would symbolise both the British royal and the German cultural history of Albert’s age, which in 2019 is — somehow — still alive. 

 

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173043 https://historynewsnetwork.org/article/173043 0
The History of Impeachment and Why Democrats Need to Act Now Ronald L. Feinman is the author of “Assassinations, Threats, and the American Presidency: From Andrew Jackson to Barack Obama” (Rowman &amp; Littlefield Publishers, 2015).  A paperback edition is now available.

 

 

Two American presidents have been impeached: Andrew Johnson in 1868 and Bill Clinton in 1998-1999. Richard Nixon resigned in order to avoid formal impeachment. All three instances produced extreme political division and controversy. All three occurred under divided government: the President belonged to a different party than the Congressional majority.

 

Andrew Johnson became president after Abraham Lincoln’s assassination in 1865. Lincoln, a Republican, asked the Democratic Johnson to be his running mate in 1864 due to concerns that Lincoln might face a tough reelection campaign against former General George McClellan. Lincoln hoped Johnson would help him gain the support of loyal Democrats who appreciated Johnson’s strong support of the Union.

 

However, Johnson did not agree with much of Lincoln’s agenda, and Republicans in Congress turned strongly against him. His inability to work with the party that had elected him Vice President was made worse by his horrible temper, refusal to compromise, and tendency to use foul language. In retrospect, no one would defend his prickly personality and racist tendencies.

 

Johnson was impeached and brought to trial for breaking the Tenure of Office Act of 1867, which was designed to prevent the President from dismissing cabinet officers without the approval of the Senate. Johnson fired Secretary of War Edwin Stanton, a major critic who collaborated with the Radical Republicans seeking Johnson’s removal. The law was eventually declared unconstitutional by the Supreme Court in Myers v. United States in 1926, 59 years after its enactment. 

 

Ultimately, Johnson avoided removal from office by just one vote. Ten Republicans joined with nine Democrats and voted to keep Johnson in office. The final vote was 35-19, one vote short of the two-thirds majority needed to approve removal. Johnson had not abused power or obstructed justice, and the impeachment case was flimsy. While his personality and viewpoints were obnoxious to many, there was no real justification for his impeachment.

 

Richard Nixon was facing impeachment in 1974 from the opposition Democratic Party in Congress due to strong evidence of abuse of power, obstruction of justice,  contempt of Congress, refusal to cooperate with the impeachment investigation relating to the Watergate Scandal, and other illegal acts discovered in the process of the investigation by the House Judiciary Committee.  

 

Ultimately, the Nixon impeachment had bipartisan support on that committee, with seven Republicans joining the Democrats in backing three articles of impeachment against Richard Nixon. The Supreme Court also stepped in via United States v. Nixon, ordering Nixon to hand over the Watergate tapes demanded by the special prosecutor.  

 

Additionally, bipartisan support for Richard Nixon’s removal from office was made clear when Republican leaders of Congress visited the White House: Senate Minority Leader Hugh Scott of Pennsylvania, Senator Barry Goldwater of Arizona, and House Minority Leader John Rhodes of Arizona. They informed Nixon that he had lost the support of Republicans in the US Senate and would be unlikely to muster more than fifteen of the 34 votes needed to survive a removal vote.  

 

With the strong case against him, and the bipartisan movement against his staying in office growing rapidly, Nixon realized it was time to leave the Oval Office and avoid a further constitutional crisis.

 

Bill Clinton faced impeachment in 1998-1999 from the opposition Republican Party in Congress. Republicans were determined to remove him for perjury regarding his extramarital sexual relationships, testimony that followed the Supreme Court’s 1997 ruling in Clinton v. Jones that a sitting president must face civil litigation and could be required to testify.

 

Clinton was impeached on the last day of the 105th Congress in December 1998, and the trial was held by the new 106th Congress in January and February 1999, a break, critics argued, with the norm that an impeachment and trial be conducted by the same Congress. The trial was part of the effort by Newt Gingrich and other Republicans to do what they could to undermine the Clinton Presidency and position themselves for the presidential and congressional elections of 2000.  

 

Ultimately, the Senate voted on whether to remove Clinton from office on two counts. On the first, lying under oath, the vote was 45 for conviction and 55 against, well short of the two-thirds majority necessary to remove the president. On the second, obstruction of justice, the Senate split 50-50, again short of the two-thirds required. Ten Republicans joined the Democrats in voting to acquit on the first charge, and five on the second. Although some Republicans attempted to hold a vote on another impeachment article, it failed and was never considered by the Senate.  

 

The case against Bill Clinton resembled the Republican Party’s political vendetta against Andrew Johnson 130 years earlier more than it did Richard Nixon’s offenses. No one then or since would defend Clinton’s private behavior in the Oval Office or his lying under oath, but impeaching Clinton was clearly an unpopular move by Republicans, and the President remained popular in public opinion polls at the time.

 

So, what do these past examples tell us about a potential impeachment of Donald Trump? Because the Senate is Republican-controlled, it is extremely unlikely that Trump would be removed from office. It is still essential, however, that Democrats push impeachment to make a political point. Just as the Republicans in 1999 pursued impeachment without regard for how it might play in public opinion, the Democrats should not worry about public opinion or political ramifications, because Trump’s actions require accountability. If Democrats don’t act, history will record that the Democratic Party refused to see the long-term danger of Trump, and it will set a bad precedent for the future. 

 

As I’ve written before, the case against Donald Trump is overwhelming. Donald Trump obstructed justice to prevent a thorough investigation into Russian involvement in the 2016 Presidential campaign. His son and others in the Trump campaign engaged in collusion with a foreign nation determined to undermine the candidacy of the opposition nominee, Hillary Clinton. Trump has also violated the Emoluments Clause of the Constitution, by making profits daily on his various hotel properties and other business ventures, as recent reporting has made even more clear.   

He has abused the pardon power by promising or hinting at pardons for those who break the law to carry out his illegal and unethical actions. He has engaged in conduct that grossly endangers the peace and security of the United States in foreign policy. He has advocated violence and undermined equal protection under the law. He has undermined freedom of the press, a threat to American democracy, and has pressured the Department of Justice to investigate and prosecute political adversaries. 

 

Finally, Trump has shown contempt of Congress by refusing to cooperate with their investigation of his administration, a charge that was one of the three brought against Richard Nixon before he decided to resign ahead of a certain impeachment by the House of Representatives and conviction by the US Senate. 

 

Democrats need to act before the upcoming presidential election consumes even more political energy. It is time for the Democrats to move ahead on what needs to be done:  the impeachment on high crimes and misdemeanors of the 45th President.

 

For more on impeachment, click on the links below: 

What To Know About the History of Impeachment

George Orwell and Why the Time to Stop Trump is Now

What Should Historians Do About the Mueller Report?

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/blog/154248 https://historynewsnetwork.org/blog/154248 0
What Historians Are Saying: 2020 Election Democratic Primary Debates Click inside the image below and scroll down to see articles and Tweets. 

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/172385 https://historynewsnetwork.org/article/172385 0
The Divine Right Presidency Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

Trump’s latest use of our government to cover up his mistakes, this time about weather forecasting, reveals the nature of his Presidency.

 

No government weather maps showed Hurricane Dorian threatening Alabama. On Thursday, August 29, Trump was briefed in the Oval Office on the Hurricane by the head of FEMA, which released a photo of him looking at a map of where Dorian had been and where it was headed. A white curved line showed the areas that Dorian might possibly hit. Not Alabama.

 

Early Saturday morning, August 31, the National Hurricane Center realized that Dorian was not going to hit Florida directly, and threat projections were shifted further east. The next morning, Sunday, at 7:51 AM Trump tweeted the following: “In addition to Florida - South Carolina, North Carolina, Georgia, and Alabama, will most likely be hit (much) harder than anticipated.”

 

The National Weather Service’s Birmingham office reacted in 20 minutes, tweeting at 8:11: “Alabama will NOT see any impacts from #Dorian. We repeat, no impacts from Hurricane #Dorian will be felt across Alabama. The system will remain too far east.”

 

For Alabamans, whew. For Trump, though, emergency – he had made a mistake. Nobody died, and his tweet perhaps only scared some people, but he had been wrong, and for him that was impossible. At noon on Sunday at FEMA headquarters, he repeated that Alabama remained in the path of the storm, based on “new information.”

 

As the Hurricane moved north, doing tremendous damage but having nothing to do with Alabama, the storm in Washington over Alabama intensified. On Monday Trump repeated his claim that Alabama was in danger. By then, it was clear to everyone that Alabama would remain untouched, and the controversy shifted to whether Trump was correct that Alabama had been part of earlier forecasts. On Wednesday, Trump brought out the map from his briefing six days earlier. Somewhere in the White House, a new black Sharpie line had been added, extending Dorian’s “threat” another 100 miles west into a corner of Alabama.

 

On Thursday, Rear Admiral Peter J. Brown, Trump’s homeland security and counterterrorism adviser, released a statement that Alabama had been in the path of the storm. Wilbur Ross, the Secretary of Commerce who oversees NOAA and the National Weather Service, threatened to fire any employee who contradicted Trump.

On Friday afternoon, NOAA disavowed the Birmingham NWS office’s statement that Alabama would not be hit.

 

We all might soon forget this saga of Dorian and Alabama when the next outrage emerges, but its details display the character of our current government. Right-wing populist politicians and parties in democratic systems across the globe are being examined for their similarities to 20th-century fascists. Trump, however, is no strongman; he commands no armed militia of followers who brutalize opponents. He acts more like the unelected monarchs who ruled for hundreds of years by divine right. Trump is the state: “L’état, c’est moi,” as Louis XIV is supposed to have said.

 

Trump’s equation of himself with the state emerges in many of his statements. When the prime minister of Denmark curtly rejected Trump’s notion of buying Greenland, he said, “She’s not talking to me, she’s talking to the United States of America. You don’t talk to the United States that way.”

 

Let’s add up some individual instances where Trump has identified the USA with himself, made the government into his personal servants, and claimed unprecedented powers to do whatever he wants. As soon as he was inaugurated, he enlisted the National Park Service to crop photos of the inauguration to pretend that his crowd was larger than Obama’s. He ordered by tweet all US companies to stop doing business with China. He claimed he had the right to end the Constitutional provision of birthright citizenship by executive order. He threatened to close our southern border with military force to stop migrants. He deployed the National Guard and active-duty troops to the southern border to deal with the “emergency” that he had created.

 

In response to Robert Mueller’s investigation, Trump’s lawyers created an argument that the President cannot commit obstruction because he can do anything he wants: “the President has exclusive authority over the ultimate conduct and disposition of all criminal investigations and over those executive branch officials responsible for conducting those investigations. Thus as a matter of law and common sense, the President cannot obstruct himself or subordinates acting on his behalf by simply exercising these inherent Constitutional powers.” This led Trump to claim that he has the “absolute right to PARDON myself.”

 

King George III said during the American Revolution that “A traitor is everyone who does not agree with me.” Trump has often characterized his critics as traitors: Democrats who did not applaud his State of the Union speech in 2018; any Jews who vote for Democrats; congressional Democrats who opposed his anti-immigration policies. The website Axios counted 24 instances by this past June in which Trump had accused other Americans of treason.

 

Things didn’t turn out so well for George III, when the American colonists decided that he did not represent them. To prevent Trump from crowning himself King Don I, Americans will again have to reject divine right pretensions.

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/blog/154247 https://historynewsnetwork.org/blog/154247 0
Roundup Top 10!  

On 9/11, Luck Meant Everything

by Garrett M. Graff

When the terrorist attacks happened, trivial decisions spared people’s lives—or sealed their fate.

 

Busing Ended 20 Years Ago. Today Our Schools Are Segregated Once Again

by Gloria J. Browne-Marshall

Any desegregation plans must be a shared burden. But are we willing to take it on?

 

 

On or off, peace talks with the Taliban spell disaster for Afghanistan

by Ali A. Olomi

If history is any indication, the consequences of the Trump administration’s reckless attempt at an agreement and even hastier reversal will be borne out by Afghans themselves.

 

 

The Electoral College was Terrible from the Start

by Garrett Epps

Epps doubts that Alexander Hamilton could foresee the consequences of an electoral college.

 

 

The Lost Promise of Reconstruction

by Eric Foner

Can we reanimate the dream of freedom that Congress tried to enact in the wake of the Civil War?

 

 

Should We Give Up on Democracy?

by Rick Shenkman

Some social scientists say we might not have a choice.

 

 

How Africa is transforming the Catholic Church

by Elizabeth A. Foster

Pope Francis’s visit to Africa highlights the growing trend toward decolonizing Catholicism

 

 

Why Southern white women vote against feminism

by Angie Maxwell

The often overlooked question that explains why discussion of a gender gap leads us astray.

 

 

The Necessary Radicalism of Bernie Sanders

by Jamelle Bouie

Conflict was the engine of labor reform in the 1930s. And mass strikes and picketing, in particular, pushed the federal government to act.

 

 

 

The Power of Serena Williams

by Tera W. Hunter

"What she and Venus have accomplished is far more important than future titles and broken records."

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/173035 https://historynewsnetwork.org/article/173035 0
Trump’s Wall and the Aggrandizement of Despots

 

During the last week of August, The Washington Post reported that President Trump told aides to “fast-track billions of dollars’ worth of construction contracts, aggressively seize private land and disregard environmental rules.” He reportedly added that he would pardon any “potential wrongdoing.” Although an administration official insisted the president was only joking about pardons, the report reveals the extent of the president’s desperation to secure a victory before the 2020 presidential election. A week after the Post story, the U.S. Department of Defense authorized diverting $3.6 billion to fund 11 wall projects along the Mexican border.

 

Egotistical rulers like Trump often have grandiose architectural plans. Hitler had his “Germania,” his name for a redesigned Berlin that would dazzle the world. Mao Zedong had his “Ten Great Buildings” erected in Beijing in 1959. Stalin had his never-built Palace of Soviets, which was to be taller than the Empire State Building, and later Moscow’s seven skyscrapers known as the “seven sisters.” As a candidate and now as president, Donald Trump has been consumed by his dream of building “a great, great wall” on the United States–Mexico border. After announcing this when declaring his presidential candidacy in mid-2015, he added that “nobody builds walls better than me,” and that he would “have Mexico pay for that wall.”

 

For comparison with Trump’s wall obsession, however, let’s concentrate on one despot’s architectural plans: Stalin’s. Although the idea of a Palace of Soviets had been floating around for a while, it was not until August 1932 that Stalin began to personally supervise its design, indicating specific alterations he wished to see in one architect’s plan. In 1933 he gave further instructions. They specified various details of shape, the need for the building to reflect the international solidarity of the proletariat, and, most importantly, that it be a monument to Lenin. Thus Stalin wanted it topped by a gigantic statue of Lenin, far higher than the Statue of Liberty. To make room for the new structure, which was to be the tallest building in the world, the massive Cathedral of Christ the Savior was destroyed (it was rebuilt in post-Soviet Russia). 

 

But the German invasion of the USSR in 1941 prevented Stalin’s architectural dream from being realized. Building materials for it simply could not be diverted from the war effort. Stalin’s penchant for gigantic buildings, however, resurfaced after Germany’s surrender in 1945. From 1947 to 1953 (the year of Stalin’s death), he had seven skyscrapers built. One of them, a new structure for Moscow State University (MSU), became the tallest building in Europe.

 

According to Khrushchev’s memoirs, Stalin said, “We’ve won the war. . . . We must be ready for an influx of foreign visitors. What will happen if they walk around Moscow and find no skyscrapers? They will make unfavorable comparisons with capitalist cities.” Although Stalin wanted “to impress people” with the grandeur of buildings such as the new MSU one, Khrushchev thought “the whole thing was pretty stupid.”

 

Not only did Stalin impose the “seven sisters” on Moscow, but he also dictated a similar architectural style for some buildings in other cities dominated by Soviet power. Warsaw’s tallest building, originally the “Joseph Stalin Palace of Culture and Science” but later renamed just the “Palace of Culture and Science,” is one example.

 

Yet neither Stalin, nor Hitler, nor Mao ever obsessed about building any structure as much as Trump has about “the wall.” From the beginning of his presidential campaign to the present, no other topic has concerned him more—even his worries about the Mueller investigation did not begin until 2017. A May Washington Post article stated that he “has demanded Department of Homeland Security officials come to the White House on short notice to discuss wall construction and on several occasions woke former secretary Kirstjen Nielsen to discuss the project in the early morning.” He also repeatedly urged the U.S. Army Corps of Engineers and Department of Homeland Security to award the border-wall contract to a construction firm whose head frequently appears on Fox News and is a GOP donor.

 

Two primary reasons seem to propel Trump’s wall fixation. First, “the wall” is a clear symbol of his immigration policy, which demonizes immigrants crossing our southern border—“When Mexico sends its people, they're not sending their best. . . . They're sending people that have lots of problems, and they're bringing those problems with us. They're bringing drugs. They're bringing crime. They're rapists” (June 2015). More recently he warned of “invaders” coming across in “caravans.” The “wall” is the major symbol of his politics of fear and division.

 

Secondly, Trump not only considers himself a master builder, but a “master” at most things (“My I.Q. is one of the highest.” “I’m a smart person. I know how to run things. I know how to make America great again.” “I have a very good brain and . . . . I know what I’m doing.”). Thus, as The Washington Post reported, Trump is defying Congress and diverting military and other funds to build his “wall,” and he “is micromanaging the project down to the smallest design details. But Trump’s frequently shifting instructions and suggestions have left engineers and aides confused, according to current and former administration officials.”

 

In the face of expert opinions, he has agreed to build steel bollard fencing as opposed to a concrete wall, but he insists the bollards (or slats, as he likes to call them) should be painted black to make them hot and less climbable. He has also expressed a desire to arm the slats with sharp spikes that would cut the hands of any attempted climbers. And, like Stalin, “the higher the better.” As one official said in regard to Trump’s wall preferences, “He always wanted to go higher.”

 

In an earlier article, I mentioned “6 disturbing parallels between Stalin and Trump,” as well as some differences. Two of the former, their egoism and “politics of fear,” have already been suggested, and both men attempted to foster “a cult of personality.” 

 

But Trump’s use of the catchwords “Make America Great Again” and “Build the Wall” indicates still another commonality between Stalin and Trump: a willingness and ability to successfully employ simplistic slogans. Like Stalin, for example, Trump has labeled political opponents “enemies of the people.” He has also encouraged “build the wall” rallies and apparel—one of his supporters (singer Joy Villa) wore a “Build the Wall” dress with a “Make America Great Again” purse to the 2019 Grammy awards. And in January 2019 Trump tweeted, “BUILD A WALL & CRIME WILL FALL.” 

 

Such slogans cater to basic emotions like fear. Liberals and progressives sometimes find it difficult to understand the appeal of a Stalin or Trump—in 2019, six and a half decades after his death, Stalin remains tremendously popular in Russia. This failure, as Lionel Trilling indicated in 1950 and Rick Shenkman more recently, stems partly from underestimating the importance of emotions, myths, and non-rational political behavior.

 

Many of Trump’s slogans also reflect a populist and anti-intellectual mindset, an “us versus them” dichotomy that both Stalin and Trump often employed. Stalin frequently attacked “bourgeois specialists,” bureaucrats, and intellectuals as enemies of the people. As a January 2017 Washington Post columnist noted: “Trump's campaign was pitched entirely at the idea that egg-headed wonks and liberal elitists—including the entire literary and entertainment culture centered on the two coasts—were not only deeply out of touch with the concerns of average Americans but also dismissive of them.” The column went on to state that Trump “views himself as channeling the will of the people, a group that has been ignored or laughed at by coastal elites over the past decade.”

 

Although the future of the “wall” remains uncertain, it would not be surprising if, left unblocked, Trump followed the example of the man who changed city names to the likes of Stalingrad, Stalinabad, Stalino, and Stalinogorsk. Plastering his name on everything from hotels and casinos to planes and golf courses is already characteristic of Trump. And on a trip to George Washington’s Mount Vernon estate he commented that if our first president “was smart, he would've put his name on it. You've got to put your name on stuff or no one remembers you.”

 

Trump has already invaded our brains so that we will never again be able to hear “wall” without thinking of him. Maybe that will be enough for the “great wall-builder.” Or maybe he has heard of the “Great Wall of China,” and dreams that someday a “Great Wall of Trump” will help memorialize him. 

]]>
Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/172975 https://historynewsnetwork.org/article/172975 0
The American Left Needs a Contemporary Thad Stevens

 

Donald Trump’s presidency has accelerated what was already the biggest upsurge for the American Left in several generations. The past decade’s crises, beginning with the Great Recession of 2008 and then the Republican Party’s lurch to the right under President Obama, have radicalized many Democrats and young people, with thirteen million people voting for an avowed socialist in 2016. This realignment leftwards has increased since Trump’s election: hundreds of thousands who had never participated in grassroots politics have joined local groups like Indivisible;  socialists are running for and winning office in many parts of the country; mainstream Democratic presidential candidates are vying to propose the most comprehensive programs for economic and social transformation.

 

The present momentum is a great opportunity for practical radicals, but they need to get serious about politics if they expect to seize this day.  Protest, “resistance,” and speaking truth to power are no longer enough. Leftists need to think about how to wield power in our complex political system.

 

For many, the sudden proliferation of radical movements and ideas evokes “the Sixties” or even “the Thirties,” when powerful movements drove massive social change. But today’s party and electoral politics differ profoundly from those two eras.  For most of the last century, the key ideological divisions in U.S. politics were not partisan, but regional and cultural. As Joe Biden’s recent gaffes have reminded us, only a generation ago the Democratic Party’s congressional leadership still included Southern white supremacists controlling key committees. When Jimmy Carter took office in 1977, the most powerful Democratic Senators were two Mississippians, John Stennis (Chairman of the Armed Services Committee) and James Eastland (Chairman of the Judiciary Committee). These men entered politics in the late 1920s and the Senate in the 1940s, and both remained obdurate foes of racial equality and any use of federal authority to guarantee black civil and voting rights. Conversely, in 1977 and after, some of the strongest defenders of black rights were northern Republicans like New York’s Jacob Javits, New Jersey’s Clifford Case, and Rhode Island’s John Chafee. The twentieth century’s only black Senator until 1993 was Massachusetts Republican Edward Brooke. Nor were civil rights a residual exception to an otherwise clear distinction between the parties. On other key issues, whether environmental, peace, or social welfare, Southern conservatives (mostly still Democrats) voted with conservative Republicans, and Northern liberals voted as a bloc across party lines.

 

Since then, we have lived through a fundamental realignment.  Democrats like James Eastland and Republicans like Jacob Javits are long gone.  Today the most powerful Southern Democrat is Representative James Clyburn of South Carolina. Like Clyburn, the majority of the Democratic party in South Carolina is African American (in the state where Radical Reconstruction crested). The Republican progressives are extinct and, while caucuses of center-right Democrats remain, the Solid South’s “yellow dog Democrats” committed to racial domination have disappeared—or turned Republican, with Strom Thurmond and Jesse Helms leading the way in the 1970s.  

 

Now that our party system has finally attained ideological clarity, there is a historically unprecedented opportunity to make the Democratic Party what this country has always lacked: a party of working people and all those historically excluded by race, gender, sexuality, religion, or nativity—the party of human rights, if you will. Since Thomas Jefferson, Democrats have claimed to be the “party of the people,” but that boast always was qualified by white skin and manifold exclusions.  Myths aside, the party always included plenty of rich planters like Jefferson, Andrew Jackson, and later James Eastland, plus the oilmen and agribusiness interests whom Lyndon Johnson and others faithfully represented for decades, and more recently, the financial and tech sectors avidly pursued by Clintonian neoliberals.

 

Today’s left-wing Democrats need to examine which legacies from U.S. political history they should draw upon in remaking their party. Mainstream pundits are waking the ghost of Eugene V. Debs, five-time Socialist presidential candidate in 1900-1920, as a forerunner to Bernie Sanders. Debs was a remarkably effective agitator who repeatedly went to jail for his principles and put socialism on the map in American politics, but he is not a model for the intra-party struggle American radicals need to wage now. Debs never held elective office, and his party never managed even a small caucus in Congress or any state legislature outside of Oklahoma (a “red” outlier during the ‘Teens).  Their greatest accomplishment was periodic control of city hall in industrial towns like Reading, Pennsylvania, Schenectady, New York, and Bridgeport, Connecticut, and one major city, Milwaukee, a far cry from national power. 

 

If contemporary leftists want to learn from the past, a better example would be the most revolutionary parliamentary leader in our history, Congressman Thaddeus Stevens of Lancaster, Pennsylvania. Stevens was an extremely effective legislative infighter in Pennsylvania and then in Washington, renowned for his deadly acuity in debates, admired and feared by both allies and enemies. He was Lincoln’s bane during the Civil War, relentlessly pushing the President to do what needed to be done--free the slaves and crush the slaveowners. 

 

Stevens understood that at key moments politics really is a zero-sum game, in which you either win or lose.  Moral victories are bittersweet consolations; prevailing over one’s opponent is what matters. In 1865-1868, he helped unite the Republican Party in pushing through the House all the key measures of Radical Reconstruction, including the Thirteenth Amendment (uncompensated freedom for all slaves) and the Fourteenth (making the “freedmen” into citizens with equal rights which no state could abrogate).  He passed the crucial Reconstruction Act of March 1867, which imposed military governments over the former Confederate states to block their legislatures’ efforts to recreate slavery via Black Codes controlling the freedpeople. That legislation authorized the Army to hold elections for state constitutional conventions in which all men, regardless of race, could vote. 

 

Stevens and the other Radicals grasped the essence of “movement politics,” to push from the outside and mold public opinion via ceaseless agitation while carefully maneuvering on the inside to get the votes needed for decisive policy changes.  These are the lessons we need to learn now, post-haste. It is also worth noting that Stevens was fearless in the face of significant disabilities. He was born with a club foot and ceaselessly mocked as a “cripple,” and in his youth suffered a disease which left him hairless, requiring ill-fitting wigs for the rest of his life.  For decades, he lived openly with his black housekeeper, Lydia Hamilton Smith, ignoring salacious rumors. In 1851, while a Congressman, he defended 33 black men in the largest treason trial in U.S. history after some of those men killed a Maryland slave-owner who crossed into Lancaster County to claim his escaped chattels.  

 

Thaddeus Stevens gave no quarter to the enemies of liberty.  He focused relentlessly on how to defeat them, by any and all means necessary, to bring about a true republic. When he died in August 1868, he lay in state in the Capitol with an honor guard of black soldiers. He asked to be buried in Lancaster’s one integrated cemetery with the following epitaph: “I have chosen this that I might illustrate in my death the principles which I advocated through a long life, equality of man before the Creator.” We need women and men like him now, in Congress and in the statehouses, and in power.

]]>
Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/172967 https://historynewsnetwork.org/article/172967 0
The Democratic Presidential Candidates’ Ivy League Problem – and the Party Divide It Signals

 

The latest round of Democratic presidential debates invited the candidates to weigh in on the question of whether the oldest candidates in the race have the “vision” to appeal to a new generation.  But since all three of the frontrunners are septuagenarians – while no candidate under age 60 has reached even 15 percent in the polls – a more relevant question is whether the “vision” of the younger candidates will catch fire among Democratic voters of any age.  Most of the younger Democratic presidential contenders have an educational and ideological background that is at odds with the views and experiences of many in the party. Nowhere is that more evident than in the near-monolithic dominance of elite colleges in the younger Democratic candidates’ educational histories.

 

Eleven out of the fourteen candidates born after 1960 were educated at Ivy League universities, while only two attended state institutions.  (As a comparison, among the roughly 20 million American undergraduates enrolled in college each year, only 0.4 percent go to the Ivy League, while nearly 75 percent go to public colleges and universities). By contrast, among the Democratic presidential candidates born before 1960, not a single one attended an Ivy, and several began their educational career at local public colleges.  Bernie Sanders went to Brooklyn College.  Joe Biden attended the University of Delaware.  Even Elizabeth Warren, who eventually became a Harvard Law School professor, earned her bachelor’s degree in speech pathology as a transfer student at the University of Houston after initially dropping out of college to get married. Like an earlier generation of Democratic politicians who often attended state colleges (as Lyndon Johnson did) – or even, like Harry Truman, skipped college altogether – the oldest Democrats in the race for the 2020 presidential nomination did not enter adulthood as part of a meritocratic educational elite who had the credentials or resources to attend the nation’s most selective schools.

 

Of course, Ivy League-educated presidential candidates are nothing new; the United States has had them since the eighteenth century.  But for much of the twentieth century (until at least the late 1960s), the Ivy League functioned more often as a finishing school for the privileged than as the creator of a new meritocratic class defined by intelligence.  Money and family connections often mattered more than SAT scores in securing admission.  As a result, hardly any Democratic politicians from working-class backgrounds were Ivy League alumni; the only Democrats who did attend were, like their Republican counterparts, children of wealthy families or political dynasties. The rest – including Hubert Humphrey, Eugene McCarthy, George McGovern, Walter Mondale, and a host of others – went to local state schools or even religious colleges, and their political priorities reflected the education that they received there.  McCarthy, a Catholic graduate of Saint John’s University, was a philosopher-of-sorts on the campaign trail, while McGovern, a graduate of Dakota Wesleyan, could quote the Sermon on the Mount with the fervor of an evangelical advocate of the Social Gospel.

 

All of this began to change in the last few decades of the twentieth century, when the Ivy League became a gateway to national politics for many first-generation members of a rising meritocratic class.  Though scions of wealthier families are still vastly overrepresented in the Ivy League, a focus on academic merit and a concerted effort to make these schools more racially and economically diverse has enabled many brilliant, hardworking people from lower-income homes to make it into the Ivy League.  As a result, the Ivy League has become more important than ever as an imprimatur of academic merit – and an increasingly important gatekeeper for entry into the upper echelons of any profession, including politics.  Since 1988, when Harvard-educated Michael Dukakis ran for president, the Democratic Party has never nominated a graduate of a state university or non-elite college; every Democratic presidential nominee for the past thirty years has had a degree from either Harvard or Yale.  And every current member of the Supreme Court has likewise attended law school at one of these two universities.  

 

Even as elite college students have become more racially and economically diverse, they have become ideologically more monolithic.  Only a generation ago, income rather than education was a better predictor of people’s political leanings, but now people with a graduate degree are far more likely to be liberal Democrats than conservative Republicans, regardless of their race or income.  To be sure, there is a conservative contingent at all of the nation’s colleges, including those that are most elite, but conservatives from the Ivy League are usually conscious that they are defying the intellectual currents at their school and that they are rebelling against the prevailing academic ethos of rights-conscious liberalism.  Liberal students, on the other hand, commonly confuse the secular, rights-conscious liberalism of their academic milieu with the views of many lower-income Democrats of color – even though there are significant differences between the two, especially on issues of religion, sex, and gender.

 

Fifty-four percent of Democrats with graduate degrees identify as “consistently liberal” on all issues (social and economic), but the same is true of only 24 percent of Democrats with “some college” and only 11 percent of Democrats with a high school education or less. Surveys show that less-educated Democrats are overwhelmingly liberal on economic questions, such as jobs and healthcare; it is only on the cultural issues, such as abortion or LGBT+ rights, that significant differences by education and race show up.  And on these issues, the differences are stark.  About half of all Hispanics, for instance, would like to make abortion illegal. Fifty-five percent of black Democrats believe that a person’s gender is determined by their birth sex – a view that only 24 percent of white Democrats take.

 

These differences on abortion, gender, and sexuality reflect a larger divide in the party between secular and religious voters.  White Democrats are heavily secular: only 22 percent attend religious services once a week, while 44 percent attend “seldom or never.”  Nineteen percent of white Democrats are atheists.  But among blacks and Hispanics – especially those who have less education – the picture is very different.  Forty-seven percent of black Democrats attend religious services at least once a week and an additional 36 percent attend at least once a month.  Seventy-six percent of black Democrats – but only 35 percent of white Democrats – say that religion is “very important” in their lives. Whether they draw from the progressive strands of black Protestant theology, socially conscious Catholicism, or another religious tradition that teaches concern for one’s neighbor, religion shapes their economic views in a way it does not for most white Democrats.

 

The secular, rights-conscious, cultural liberalism of white Democrats is largely a mirror of the prevailing ideology at equally secular elite private colleges. A century ago, most of these colleges were bulwarks of liberal Protestantism, but their pluralistic, democratic values have now been thoroughly secularized and divorced from the religious traditions that initially shaped them.  The percent of white Democrats who seldom or never attend religious services (44 percent) just happens to be exactly the same as the percent of Yale class of 2019 undergraduates who entered college identifying as “atheist, agnostic, or nonreligious.” And while this might be merely a coincidence, it seems to point to a larger reality: the views of white Democrats on cultural issues such as abortion, gender, and sexuality are almost identical to the views of the majority of elite college students, while the views of non-whites (who, on average, are much less likely to have attended a top-tier college or, in many cases, any college at all) often diverge radically. Highly educated cultural liberals often imagine that their progressive views resonate with lower-income, racial minorities who want affordable healthcare, lower housing costs, and sustainable wages, but in many cases, they do not.

 

To be sure, earning a degree from an Ivy League college or another highly selective institution by no means necessarily suggests that one is out of touch with the socially conservative values found among some in the working class.  There is, of course, a sizable minority of cultural conservatives even at the most liberal colleges.  Nor is it impossible for a Democratic candidate with an elite college education and progressive views on cultural issues to appeal to working-class voters, since Bill Clinton (a graduate of Georgetown and Yale Law School) and Barack Obama (Columbia and Harvard Law School) both did this very effectively.  But both Clinton and Obama were masters of expressing their liberal views in a religiously inspired language of cultural consensus that demonstrated respect for the cultural values of socially conservative voters.  Whether the current crop of young Democratic presidential contenders can do this effectively remains to be seen.  The preference of African American voters and other people of color for an older white man with working-class roots over any of the younger African American or Hispanic candidates in the race suggests that so far, they have not.     

 

If the younger generation of Democratic politicians would like to be the face of the party’s future, they may need to take a page from the party’s past and exchange their rights-based, cultural liberalism for a jobs-focused message that is sensitive to the social issue concerns of the millions of party members who have never set foot on an Ivy League campus.  The younger Democratic presidential candidates might be Ivy League graduates, but to win support from the rest of their party they will need to translate their ideas into a cultural vernacular that they probably did not learn in the classroom.  

]]>
Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/172964 https://historynewsnetwork.org/article/172964 0
A Nation Headed to Civil War: The Compromise of 1850

 

In 1850, the Union was proclaimed to have been saved again in a great compromise that removed slavery as a controversy from national politics. President Millard Fillmore declared it nothing less than “the final settlement.” The issue tearing the country apart, whether the vast territory conquered in the Mexican War would be slave or free, was no longer to be a matter of debate. “We have been carried in safety through a perilous crisis,” Franklin Pierce announced at his inauguration on March 4, 1853.

 

The Compromise of 1850 admitted California as a free state, settled the boundary of the slave state of Texas, and avoided determining the status of New Mexico until far into the future. Only a few agitators trying to shield fugitive slaves from being returned to their masters under the new federal law continued to be nuisances. Slavery as a question that would divide the country was now safely consigned to the past, as it had been once before.

 

Most importantly, this new compromise left sacrosanct the Compromise of 1820, the Missouri Compromise, the original “final settlement.” The Missouri crisis had aroused all the issues and arguments revived in the crisis in the aftermath of the Mexican War. The admission of Missouri as a state would increase the proslavery bloc in the Senate to a four-seat majority. Its admittance would also establish a precedent for admitting further Western states as slave states. The Northern objection was mirrored in Southern fears that the entire West would be denied to slavery and the balance of power inevitably shifted. Secretary of State John Quincy Adams wrote in his diary that the Missouri problem was “a flaming sword . . . a mere preamble—a title page to a great tragic volume.” He believed it was based in the Constitution’s “dishonorable compromise with slavery,” a “bargain between freedom and slavery” that was “morally vicious, inconsistent with the principles upon which alone our revolution can be justified.” He prophesied that “the seeds of the Declaration are yet maturing” and that its promise of equality would become “the precipice into which the slave-holding planters of his country sooner or later must fall.” In the Senate, the Southerners’ anxiety that slavery might be prohibited in the territories assumed a hostility congealed into ideology against the egalitarian premise of the Declaration of Independence. 
Senator Nathaniel Macon of North Carolina, the former Speaker of the House, posed the question, “A clause in the Declaration of Independence has been read declaring that ‘all men are created equal’; follow that sentiment and does it not lead to universal emancipation?” The Declaration, Macon stated, “is not part of the Constitution or of any other book” and there was “no place for the free blacks in the United States.” Senator Henry Clay of Kentucky managed to hammer together a narrow majority for a compromise that brought in Maine as a free state to balance the slave state of Missouri and established a line restricting slavery north of 36°30’ latitude excepting Missouri. The debate inspired a sense of panic in Thomas Jefferson, retired at Monticello. “This momentous question, like a fire bell in the night, awakened and filled me with terror. I considered it at once as the knell of the Union.”

 

Jefferson’s nightmare hung over the Senate debate of the Compromise of 1850, filled with frightful images of death, premonitions of catastrophe, and curses of doom if slavery were allowed to persist as a vital issue. The Great Triumvirate of Henry Clay, Daniel Webster, and John C. Calhoun, the representative political men of their age, hurled lightning bolts from their Olympian heights. Henry Clay, young Abraham Lincoln’s “beau ideal of a statesman,” who invented the power of the Speaker of the House, who as a senator crafted the Compromise of 1820, who served as secretary of state, and who was nearly elected president, warned that the nation stood “at the edge of the precipice before the fearful and disastrous leap is taken into the yawning abyss below, which will inevitably lead to certain and irretrievable destruction.” Daniel Webster of Massachusetts, the Godlike Daniel, the voice of “liberty and Union, one and inseparable, now and forever,” whose framed picture hung in Lincoln’s law office, cautioned, “Secession! Peaceable secession! Sir, your eyes and mine are never destined to see that miracle. The dismemberment of this vast country without convulsion! . . . Sir, he who sees these States, now revolving in harmony around a common center, can expect to see them quit their places and fly off without convulsion, may look the next hour to see the heavenly bodies rush from their spheres and jostle against each other in the realms of space without producing a crash of the universe.” John C. Calhoun of South Carolina, whose stunning career included every office—congressman, senator, secretary of war, vice president, secretary of state—but the one he coveted most—president of the United States—sat wrapped wraithlike in a black cape on the Senate floor. 
The great nullifier, who insisted the states had preeminent authority over the federal government, objected to any compromise that would thwart the extension of slavery anywhere in the country, an “injustice” which he called the “oppression” of the South. “No, sir,” he prophesied, “the Union can be broken.” Calhoun’s acolyte, Jefferson Davis of Mississippi, in opposing the admission of California as a free state, threatened, “If sir, this spirit of sectional aggrandizement, or if gentlemen prefer, this love they bear for the African race, shall cause the disruption of these states, the last chapter of our history will be a sad commentary upon the justice and the wisdom of our people.” Calhoun died less than a month after his final appearance in the Senate. Clay and Webster were dead within two years. The old order passed. By then Secretary of War Jefferson Davis was the power behind the president.

 

 

Excerpt from ALL THE POWERS OF EARTH by Sidney Blumenthal

Copyright © 2019 by Sidney Blumenthal. Reprinted by permission of Simon & Schuster, Inc, NY.

 

To hear Sidney Blumenthal discuss his work on his five-part biography of Abraham Lincoln and more, watch his interview with HNN editor Kyla Sommers. 

 

 

]]>
Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/172969 https://historynewsnetwork.org/article/172969 0
What We Can Learn About Surviving Frauds Like Trump from Titus Oates

 

Before there was Donald Trump there was Titus Oates. Known as Titus the Liar after he was finally revealed and reviled, Mr. Oates succeeded in roiling England for three painful years, 1678-1681. Almost single-handedly, he fabricated the now-infamous “Popish Plot” that resulted in the execution of at least 15 innocent men (mostly peers of the realm along with priests and even archbishops), the death of another 7 in prison, a genuine constitutional crisis, widespread riots, panic, dislocation, heightened distrust among neighbors, and religious hatred. In short, although history doesn’t literally repeat itself, sometimes it rhymes. 

 

Born in 1649, by his mid-twenties Titus Oates had accumulated a long history of failure, fabrication, and expulsions, along with a narrow escape from the gallows. As a youth, he was expelled from several schools - mostly for financial misbehavior - before entering Cambridge University, from which he was also expelled after he reneged on paying a tailor whom he had engaged to make him a coat. He then faked a ministerial degree, masqueraded as an Anglican priest, and was ejected from that position for drunkenness, lewd behavior, and misusing congregation funds. He went back to his father’s residence, where he manufactured false charges against a local schoolmaster, hoping to accede to his position; when the perjury was discovered, he was jailed, but he escaped to London and eventually shipped out as an Anglican chaplain aboard a naval vessel. Within a few months, he was caught at “buggery,” then a capital offense, but avoided execution because of his supposed religious vocation, although he was soon drummed out of the Royal Navy. 

 

Having returned to London, Oates was re-arrested on his earlier perjury charge but managed to escape once more, and briefly served as an Anglican chaplain once again, this time to an aristocratic family, though he was soon sacked for “unsatisfactory behaviour.” Oates’s religious beliefs, if he had any, are unclear. He converted to Catholicism briefly, later claiming that he did so in order to go undercover and reveal the plot that he was soon to cook up out of thin air. As a putative Catholic, he wheedled his way into several schools in Europe, only to be kicked out of at least two, after which he pretended to have obtained a doctorate in Catholic theology – a claim soon revealed to be bogus because he did not know any Latin. 

 

He returned to England, having devised details of a sensational plot – allegedly hatched in Rome and to be carried out by English Jesuits – to murder the Protestant English king, Charles II. In conjunction with one Israel Tonge, a fanatical anti-Catholic crusader, Oates managed to impress many officials with precise details of the assassination plans. One attempt was said to have been foiled when a musket jammed, after which a crack team of Jesuit assassins, armed with foot-long daggers, had allegedly been dispatched to murder the king during his daily walk in St. James Park; a group of Irish “ruffians” was also said to be waiting to accost the king, and the queen’s doctor was supposedly planning to poison him if all else failed. Charles himself was dubious, in part because Oates claimed to have met Don John of Austria, describing him as tall and fair, whereas Charles had actually met the Austrian nobleman and knew him to be short and dark. Nonetheless, Titus Oates proved remarkably persuasive to many in the king’s court and to the public at large. Things came to a head when he testified about this “plot” before an Anglican magistrate, Edmund Berry Godfrey, who was found murdered a month later. Oates immediately announced that the Catholics were responsible, generating a frenzy of anti-Catholic panic in which Godfrey virtually became a Protestant martyr. (The actual murderers were never identified.)

 

Mobs rampaged, burning effigies of the Pope and breaking into Catholic-owned stores. Oates was given command of a contingent of the King’s Militia, which entered Catholic homes, terrorizing the occupants and arresting suspects. Before the tumult was over, he had fingered hundreds of peers and prelates; Parliament had mandated that Catholics be forcibly relocated at least ten miles (16 kilometers) from London; and a constitutional crisis arose because King Charles had no legitimate heirs and his brother, the Duke of York, being a Catholic, was considered an unacceptable successor. 

 

Within the first year of his colossal hoax, Oates had become the most popular man in the country, basking in the adulation of large crowds and proclaiming himself "The Saviour of the Nation." He also assumed the title of "Doctor," professing that he had earned the degree at Salamanca, undeterred by the fact that he had never been there. He was lodged at public expense at Whitehall, given a handsome stipend, dressed in fine episcopal attire, and accorded an official bodyguard. 

 

Eventually, the fraud crumbled. Accumulated evidence of Oates’s lies, plus revulsion at the execution of many highly regarded persons, led to his unmasking. He was convicted of multiple perjuries, whipped through the streets of London, and imprisoned for the duration of the reign of Charles’s successor, James II.

 

How did this gratuitous grifter, this frequent failure, this persistent perjurer and master of mendacity succeed in hoodwinking so many, and in turning England upside down? There were three main contributors: Titus Oates’s personal appeal, an inchoate fear of England’s Catholic minority, and the acquiescence of public officials, many of whom knew better but failed nonetheless to hold him to account. As for the first, Oates was a gifted and charismatic orator, demagogically adroit at playing to the emotions of his followers. He had no source of income other than his personal brand, which he burnished at every opportunity. 

 

At the time, Catholics constituted only about one percent of the English population; overwhelmingly, they just wanted to practice their pre-Reformation religion, often in secret because of longstanding prejudice against them. Nonetheless, there was widespread fear of Catholicism, even as people were often friends and neighbors of individual Catholics. By the latter half of the 17th century, history was casting a long shadow over England, notably the searing memory of the nearly successful Gunpowder Plot of 1605, which had in fact been orchestrated by a small terrorist coterie of Catholics, and which, had it not been uncovered, would have blown up the Protestant English king James I and much of Parliament as part of a conspiracy to forcibly return England to Catholicism. There was also the terrifying Irish rising of 1641, in which thousands of Irish Protestants were slaughtered; a slogan promoted by Oates and his followers was “41 is come again.” Moreover, the Great Plague of London (1665) and the Great Fire of London (1666) had lent themselves to an earlier spate of anti-Catholic rumor-mongering.

 

On top of this loomed recollection of the Spanish Armada, as well as the fact that Protestantism, although successful, was geographically limited to northern Europe, while the great powers – France and Spain – were Catholic, as were Charles’s mistress, his wife, and his brother. Furthermore, Charles had attempted to ameliorate some of the more severe anti-Catholic laws of the time, while seeking accommodation with the rulers of France and Spain. Although he was definitely an Anglican, the Jesuits very much disliked him, reputedly calling him, among other things, the “Black Bastard.” 

 

And finally, there were members of Parliament, the clergy, the judiciary, and the nobility who were reluctant to criticize Oates for fear of an angry public reaction, and others, notably many in the newly formed Whig Party, who embraced his lies because they fed into their own agenda of suppressing Catholicism. It was not until the Roman Catholic Relief Act of 1829 that most of the discriminatory legislation passed because of Titus Oates’s malign influence was finally repealed. Oates himself didn’t go quietly: in 1699 he loudly disrupted the funeral of a woman who had forbidden him to preach at it, and in 1702 he was arrested for assaulting a woman with a cane. He died in 1705, largely forgotten and certainly not mourned. 

 

Before there was Donald Trump, there was Titus Oates – but England survived, thrived, and even became great. There have been frauds, con men (con women, too), and truly dangerous, deranged characters who have sown chaos, pain, and despair. However, the true story of Titus Oates, although horrifying and downright infuriating, should give us hope that the US, too, can recover from You Know Who, just as England did from Titus Oates. 

 

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/172963
The U.S. a Christian Nation? Not According to the Founders!

 

George Washington may have said it best, if not first: “Religious controversies are always more productive of acrimony and irreconcilable hatreds than those which spring from any other cause.” To prevent such controversies, Washington ordered Continental Army commanders “to protect and support the free exercise…and undisturbed enjoyment of…religious matters.”

 

But former attorney general Jefferson [“Jeff”] Beauregard Sessions, III, of Alabama, contends that Washington’s views were “directly contrary to the founding of our country.” And Vice-President Michael Richard Pence, a fervent church-goer who publicly proclaims his Christian beliefs whenever he can, insists the United States was “founded as a Christian nation.” 

 

Pence and Sessions are but two prominent Americans in and out of politics today who continue refueling a centuries-old controversy over the role of religion in American life.

 

Washington’s friend, the widely heralded polemicist Thomas Paine, tried to end the controversy. “I do not believe in…any church,” he declared. In a call to arms against what he called church-state tyranny in early America, he insisted that “every national church or religion accuses the others of unbelief; for my own part, I disbelieve them all.”

 

Both Benjamin Franklin and Thomas Jefferson agreed. President Jefferson denied that Jesus was “a member of the Godhead,” and Benjamin Franklin, a co-author of the Declaration of Independence with Jefferson, decried Christian church services for promoting church memberships instead of “trying to make us good citizens.” An outspoken Deist, Franklin criticized all religions for making “orthodoxy more regarded than virtue.” He insisted that man be judged “not for what we thought but what we did…that we did good to our fellow creatures.”

 

Most of America’s Founding Fathers echoed Franklin’s beliefs. America’s fourth President, James Madison, was raised an Anglican and was a cousin of Virginia’s Episcopal bishop. But he was a fierce proponent of church-state separation and fathered the Bill of Rights, whose opening words outlawed government “establishment of religion” and any prohibition of “the free exercise thereof.” Both Congress and all the states agreed. 

 

“It was the universal opinion of the [18th] century,” Madison wrote in 1819, “that civil government could not stand without the prop of a religious establishment and that the Christian religion itself would perish if not supported by a legal provision for its clergy.” But as President, Madison found that, “the devotion of the people have been manifestly increased by the total separation of church from the state.”

 

Even the devout, church-going Congregationalist John Adams, who had signed the Declaration of Independence, inked his presidential signature on the 1796 Treaty of Tripoli, affirming to Americans and the world that “the Government of the United States of America is not, in any sense, founded on the Christian religion.” The 23 members present in the U.S. Senate (out of 32) ratified the document unanimously. 

 

That should have settled matters, but in the centuries since the founding, some Americans have persisted in claiming that the United States was founded as a Christian nation, ignoring-- even scoffing at--the words of the Founders, the Constitution, and the Bill of Rights. 

 

The sole grain of truth to claims of governmental ties to Christianity in early America lies in the different religions established in each of the independent British-North American provinces before the birth of the United States. Although individual states retained state-supported religions well into the 19th century (four did so until after the Civil War), the ratification of the Constitution created an absolutely secular nation.

 

Indeed, each of the nation’s three founding documents—the Declaration of Independence, the Articles of Confederation, and the United States Constitution—carefully avoided all mention of Christianity or Christ. Article VI of the Constitution states as dramatically as possible that “no religious test shall ever be required as a qualification to any office or public trust under the United States”—hardly the hallmark of a “Christian” nation. 

 

To reaffirm America’s not becoming a Christian nation, Congress and all the states added the First Amendment to the Constitution in 1791, reiterating the nation’s areligious character by barring government establishment of any and all religion.

 

Only the Declaration of Independence even mentions God--in a single ambiguous reference in the opening paragraph to what Deists rather than practicing Christians called “Laws of Nature and Nature’s God.” 

 

Like the founding documents, the collected letters, speeches, and papers of George Washington never invoked the name of Christ or Christianity and mentioned God only once, as he concluded his oath of office as first President of the United States and added, “So help me God.” Prior to that, he carefully omitted all references to God and Christ, appealing instead to “providence,” “destiny,” “heaven,” or “the author of our being” as sources of possible supernatural favor for himself and the nation.

 

“Providence has directed my steps and shielded me,” young Colonel Washington affirmed after escaping death in a fierce encounter in the French and Indian War. And as President, he wrote carefully worded letters affirming the nation’s areligious status and its promise of religious freedom to leaders of twenty-two religious groups—and atheists!

 

In a reaffirmation of his deep opposition—and that of all the Founding Fathers—to state-sponsored religion, Washington wrote a personal letter to members of the Jewish synagogue in Newport, Rhode Island, in 1790, restating the commitment of a United States government that “gives to bigotry no sanction, to persecution no assistance.” 

 

Again, the nation’s first President avoided all mention of God or Christ. 

 

Thomas Paine reinforced the thinking of Washington and America’s other Founders in his famed pamphlet Common Sense—the most widely read publication in the western world in the late 18th century after the Bible. Washington called Common Sense critical in convincing Americans of “the propriety of a separation [from Britain].”

 

A fervent proponent of Deism, Paine called the “connection of church and state adulterous.” He said such a connection in Britain and British America had been designed to enrich both institutions and keep mankind in perpetual thrall by infecting men’s minds with the myth of the divine right of kings and hereditary rule. “Why,” Paine demanded, “should someone rule over us simply because he is someone else’s child?” Calling the notion absurd, he added, “Mingling religion with politics [should be] disavowed and reprobated by every inhabitant of America.” The Founding Fathers agreed.

 

John Adams disliked Paine intensely, but nonetheless declared, “I know not whether any man in the world has had more influence on its inhabitants or its affairs for the last thirty years than Tom Paine. Call it then the Age of Paine.” He might have said, “The Age of Deism.”

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/172973
1933 May Be Closer than We Think

 

On January 30, 1933, Adolf Hitler was appointed Chancellor of Germany, effectively ending the Weimar Republic, the nation’s second attempt at democracy. On January 20, 2017, Donald Trump was inaugurated President of the United States, effectively ending… well, what exactly?

 

Immediately after Trump’s ascension to office, many political commentators sought to fill in this blank with comparisons to the ill-fated Weimar Republic. Historians and other academics rejected the analogy as too facile. They pointed out that, unlike the United States, Germany had had little experience with democracy. It had lost a major war and suffered a draconian peace settlement. Its economy had been buffeted by rampant inflation, high unemployment, and finally the Great Depression. Moreover, a large share of its population believed in conspiracy theories, including the infamous “stab in the back” legend that blamed the nation’s defeat in World War I on internal enemies such as Socialists, Communists, and Jews.

 

While the contingent events leading to the rise of Adolf Hitler and the election of Donald Trump might seem to differ beyond the point of comparison, two and a half years of the latter’s presidency now force us to look deeper beneath the surface. Growing cultural, social and political continuities between Weimar and America should give us serious concern in assessing the fate of these two democracies as part of an analogous historical phenomenon.

 

The sociologists Rainer Baum and Frank J. Lechner characterized pre-Hitler Germany as a “nation of moral strangers.” It was a country whose people could agree neither about the nature of a good society nor about the social relations and community that such a social order entailed. Germans generally divided into three closely bounded and often incompatible social and cultural milieus: liberal, social democratic and authoritarian corporatist. 

 

From an American perspective, the most distinctive of these milieus was the authoritarian corporatist, or what the historian Mack Walker characterized as the German mentality of hometowns. Hometowns, according to him, were communities of webs and walls that could be both physical and cognitive in character. The webs consisted of integrated and hierarchical social status groups or corporations, such as craftsmen, merchants, financiers and local government officials in cities and towns, and peasants and small farmers in the countryside. These groups earned their legitimate place in society through extensive training and socialization. They shared solidaristic, often inbred and exclusionary values in opposition to liberal individualism and socialist collectivism, and were considered “rooted” like no others in the nation’s social fabric. The walls, in contrast, protected against those elements of society who were “rootless” and “disturbers” of the hometown community. They consisted primarily of the working class and the Jews, but also included immigrants, criminals and social deviants.

 

Hometown mentalities in the United States historically flourished in the ante-bellum South, with its belief in the principles of social honor and white superiority, its exclusion of millions of non-white slaves and its staunch opposition to Northern economic and political liberalism. The Civil War and Reconstruction were supposed to have brought an end to such particularistic and racist visions of the good society. But notions of the glorious lost cause of southern independence, underlying today’s overt and covert white nationalism and nativism, have proven that American webs and walls continue to flourish in our collective psyche. They exist literally in terms of building a physical barrier along our border with Mexico designed to keep out “rootless” and therefore dangerous immigrants. They also continue to exist mentally in the recent words of a President who can, without apparent penalty among his supporters, blithely tell women of color elected to the House of Representatives to “go back and help fix the totally broken and crime-infested places from which they came.”

 

The Weimar Republic tried to reconcile the values of liberalism, socialism and hometown corporatism in a single constitution. It proved to be a spectacular failure. In the words of Otto Kirchheimer, a contemporary jurist and political scientist, the effort resulted in a “constitution without decision,” one that did not contain “any values in whose name the German people can be in agreement.”  By its very nature, it did not encourage true democratic compromise and reconciliation among interested parties, but only winning and losing based on the political strength of competing social milieu, each seeking to impose its own worldview and material interests on its opponents.

 

In the United States our own revered Constitution is showing similar signs of cultural and ideological strain and conflict. Although it did not include socialist values, it did try to reconcile liberal and hometown visions of a good society in a great compromise over the existence of slavery. Its very federal foundations were designed to protect the hometown aspirations of a white nationalist South, by giving each state in the Union two senators regardless of population, creating an electoral college to elect the President and preserving the right of individual states to oppose federal authority through the so-called reserved powers clause of the Tenth Amendment. The result has been a thwarting of a democratic interpretation of the popular will of the people, most recently through the election of two Republican presidents receiving fewer votes than their opponents and the prospect of it happening again in 2020.

 

In fact, the supporters of hometown values in the United States—be they Donald Trump, the Republican Party or right-wing media commentators—have come to the same conclusion that their predecessors in the Weimar Republic reached. Under a liberal constitution they can neither win nor maintain political power. Even in the Reichstag parliamentary elections of March 1933, with National Socialists in power and the full force of the state’s coercive powers behind their campaign, Hitler could only garner 44% of the national vote.

 

Two factors in particular enabled the victory of hometown values and the destruction of liberal and social democratic ones in the Weimar Republic, and may yet do so in the United States. The first was the power and prejudices of the courts. Despite the socialist-democratic revolutions of 1918/19, very few judges from the German Empire were replaced. Educated in a hometown milieu and usually staunch opponents of parliamentary democracy, they exploited the process of legal and constitutional review to undermine democratic practices and procedures at both the national and state levels of government. They defined endemic domestic terrorism as the stepchild of the left and excused radical right-wing terrorism against the Republic as the legitimate outrage of national patriots. Even when Adolf Hitler staged a violent uprising in Munich in November 1923 against the Republic and was convicted of treason, he spent a mere 264 days of a five-year sentence in the relative comfort of Landsberg prison, where he composed Mein Kampf.

 

No one understands more fully the lesson of the judiciary in the Weimar Republic in preserving hometown political power than the Senate Majority Leader, Mitch McConnell. He has made it his primary mission to eradicate the “liberal bias” of the federal court system. He spectacularly violated accepted Senatorial practices by refusing to even meet with, let alone hold a hearing on, Judge Merrick Garland, President Barack Obama’s nominee for the Supreme Court. Since then he has been assiduously pursuing the appointment of extremely conservative, mostly white and male judges to the federal bench. According to a recent review in The Nation, Mitch McConnell has been able to confirm to date 123 federal judges, including 41 to the federal court of appeals, compared to only 19 circuit-court judges during a similar period under President Obama. These appointments were 78 percent male and 81 percent white, with an “unsettling number of them” having “earned their stripes as partisan think-tank writers, op-ed columnists, or even bloggers.” The vetting for most of these nominations has been through the ultra-conservative Federalist Society, while in March 2017 the more liberal American Bar Association was denied by White House Counsel Donald F. McGahn II its previous special access to background information on judicial candidates prior to their nomination. Right-wing critics of the ABA have always chastised it for its “liberal” biases.

 

The second factor contributing to the victory of hometown values in the Weimar Republic, which eventually morphed into the “blood and soil” and Volksgemeinschaft of the Third Reich, was the expansion and use of the office of the President. Nothing enables an untrammeled misuse of executive power more than a compliant court system and an impotent legislature. Article 48 of the Weimar Constitution granted the President the right to take emergency measures in times of crisis and national emergency. While the Reichstag could rescind an emergency decree, it never did so. By the time of the economic Depression of the early 1930s its impotence as a legislative body had become a stark reflection of a German nation of “moral strangers.” It proved virtually incapable of agreeing on anything and ultimately consisted of a majority of elected parties staunchly opposed to the continued existence of democracy. As a political force it became totally irrelevant in the face of the expanding executive rule by the President and the Chancellor he appointed. In 1932 the Reichstag met for only 13 days in total, passing only five laws in the entire year.

 

Donald Trump, in his more than two years in office, has been busy crafting an American version of Article 48. He has discovered the possibility of governing without legislative approval. His tools have been the executive order, the declaration of a national emergency and the extension of executive privilege. He has steadfastly ignored subpoenas for members of his staff and government to testify before the House of Representatives. Legislative efforts to rein in his executive proclivities have proven futile in a badly fractured Congress, with the Republican-led Senate determined to deflect all efforts to hold the President and his staff publicly accountable. While Democrats have been able to seek succor in the courts to some degree, that opportunity is withering and dying as Mitch McConnell perfects his reshaping of the Federal judiciary in a hometown image.

 

On February 27, 1933, the German Reichstag, the physical symbol of the country’s democracy and the rule of the people, went up in flames. Hitler immediately blamed Communist agitators and used the national crisis as a springboard to dismantle the Republic. In short order he assumed virtual dictatorial powers by means of legislative Enabling Decrees, interned Communist leaders and members in concentration camps, excluded Jews from public service, outlawed trade unions and banned all remaining political parties except for National Socialism. By the summer of 1933 the Third Reich could no longer be deterred.

 

What might prove the tipping point for American democracy almost ninety years later? It could be a severe economic crisis, a war with Iran, another massive terrorist attack or simply the fact that in 2020 President Trump refuses to leave office after adverse election results, claiming that the outcome was rigged by unspecified “outsiders” seeking to destroy hometown America. Would the ideologically refashioned Federal courts, especially the Supreme Court, stand in his way? The Supreme Court has already intervened in the outcome of one presidential election in its Bush v. Gore decision halting the recount of ballots in Florida. Would the present Court, with its growing penchant to ignore standing legal precedent, be willing to go even further this time around? And would a badly fractured Congress be able to act effectively, or would our democracy simply dissolve in a stalemate as it did at the close of the Weimar Republic?

 

To some, these questions might seem at best hypothetical, and at worst illusory. But the mere fact that they can now be seriously entertained in terms of the historical precedent of Germany’s Weimar Republic should give us pause. In today’s United States of America, 1933 may be closer than we think.

Sat, 21 Sep 2019 10:51:25 +0000 https://historynewsnetwork.org/article/172965
The Cuban Missile Crisis and the Trollope Ploy Myth

 

Response to Matthew Hayes: “Robert Kennedy and the Cuban Missile Crisis: A Reassertion of Robert Kennedy’s Role as the President’s ‘Indispensable Partner’ in the Successful Resolution of the Crisis,” History, The Historical Association and John Wiley and Sons Ltd (May 7, 2019), 473-503 and “RFK’s Secret Role in the Cuban Missile Crisis,” Scientific American (August 6, 2019).

 

I was naturally intrigued when I learned about a purportedly new take on Robert Kennedy’s role in the Cuban Missile Crisis. The unique personal/official relationship between President John F. Kennedy and his younger brother Robert has been thoroughly explored in dozens of studies over the last half century. RFK’s “portfolio,” widely understood at the time, was that of JFK’s most trusted adviser and confidant—and, as Hayes suggests, “the president’s de facto chief of staff.” A different attorney general would likely not even have been invited to take part in secret discussions during a dangerous foreign policy crisis. The loyalty and trust between the Kennedy brothers will surely remain a one-off in the history of the American presidency. 

 

Matthew Hayes’ work confirms the already well-documented story of RFK’s unique role, especially his JFK-approved back-channel contacts with Soviet diplomats before, during and after the missile crisis; he emphasizes, however, the importance of the more than 3,500 recently declassified documents which confirm that the attorney general was overseeing interdepartmental planning for possible contingencies in Cuba—including “the installation of missile sites” and “warning his brother of the possibility over a year before the crisis.” [Scientific American 2 (5-page printout); hereafter SA] Hayes cites Cuba-related documents which undeniably confirm that RFK was not your conventional attorney general. These examples augment the historical record but fail to provide anything genuinely new about the bond between President Kennedy and the brother eight years his junior. [History 32-35, 38, 42; hereafter HY]  

 

Referring directly to the ExComm tapes, Hayes contends that “[i]n the first days of the crisis” RFK “insisted that an invasion remain on the table and even pushed for a reduction in lead time required to initiate one. Until recently (italics added) this approach was held up as evidence for a belligerent, hawkish adviser, promoting the sort of military action that would have led to dangerous escalation.” (SA3) In fact, from 1962 to the declassification of the White House tape recordings in the late 1990s, historians took for granted that RFK was the top dove at the meetings—mainly because of his posthumous 1969 memoir, Thirteen Days (which has never been out of print). Hayes declares that: 

 

He saw his role as pressing for all alternatives, regardless of where they might lead. … he was instrumental in convincing other advisers of its [the naval blockade’s] merits and, ultimately, the president. In both cases he was able to do so because he was seen as balancing resolve with restraint, bridging the more forceful approach advocated by the military and Joint Chiefs with the optimistic diplomacy pushed by dovish advisers such as U.N. Ambassador Adlai Stevenson. [SA3]

 

The quote above is a historical rope of sand. RFK only briefly and reluctantly backed the blockade and continued to grumble about it well after the president had endorsed it; he certainly did not convince the JCS to support it: they never did. There is no escaping or rationalizing the facts—the tapes have irrefutably identified RFK as one of the most contentious ExComm hawks—from day one to day thirteen. Hayes is, in effect, turning the historiography of the missile crisis upside down, as if these new documents [“Until recently”] can somehow explain away the substance and tone of what Robert Kennedy actually and repeatedly said in the recorded meetings—but carefully concealed in Thirteen Days. RFK’s role as chair of the Special Group Augmented, even more thoroughly documented since 2012, (https://www.jfklibrary.org/asset-viewer/archives/RFKAG) is entirely consistent with his hawkish views in the ExComm meetings—in which he certainly did not reveal an “innate understanding of the missile crisis as more a political struggle than a military one, with its own limitations.” [SA2;HY480] Hayes’ nebulous claim that these “declassified private notes and a closer understanding of the brothers’ intimate relationship, now support a more holistic view of RFK,” fails to even dent the indisputable historical record on the White House tapes. 

 

RFK’s key responsibilities included chairing the Special Group Augmented, which coordinated Operation Mongoose in Cuba, overseeing industrial and agricultural sabotage—which some historians have called ‘state-sponsored terrorism’—as well as attempts to assassinate Fidel Castro. Richard Helms, CIA deputy director for operations, recalled: “If anybody wants to see the whiplashes across my back inflicted by Bobby Kennedy, I will take my shirt off in public.” A senior Mongoose planner agreed: “That’s how he [RFK] felt about this stuff. It was unbelievable. I have never been in anything like that before or since and I don’t ever want to go through it again.” [Stern, Averting the Final Failure, 14; hereafter AV] Hayes never even mentions the Special Group Augmented.

 

Hayes’ discussion of the “Trollope Ploy” (hereafter TP: a reference to a plot device in a 19th century Anthony Trollope novel) is even more problematic. He explains the TP as “a bold strategy for navigating two different proposals from Khrushchev…within the space of a few hours.” The first (late on 10/26) promised to remove the missiles if the US pledged not to invade Cuba; the second (early on 10/27), asserted publicly on Moscow Radio, offered to remove the missiles if the US withdrew the Jupiter missiles from Turkey. “RFK took hold of the situation,” Hayes concludes, “assuming the leadership mantle.” He and the president’s chief speechwriter, Ted Sorensen, went into a separate room and came up with what Arthur Schlesinger, Jr. called an idea of “breathtaking simplicity”: “we ignore the latest [10/27] Khrushchev letter [Hayes incorrectly substitutes “while barely acknowledging receipt of the second”] and respond to his earlier [10/26] letter’s proposal.” [SA4; Thirteen Days, 1971 edition, 77] “JFK approved the ploy,” and sent RFK to make what Hayes calls a “highly secret assurance to [Soviet Ambassador] Dobrynin that the missiles would be removed ‘at a later date.’”

 

This account, however, is not what happened! The tapes reveal conclusively that JFK remained very skeptical and only grudgingly and unenthusiastically agreed “to try this thing [the TP];” but also demanded new contacts with Turkey and NATO to convince them to give up the Jupiters because Khrushchev “had moved on” and could not go back to his earlier demand for a non-invasion pledge after his public statement about a trade. The entire ExComm—very much including RFK—continued to vigorously oppose the trade. The real breakthrough did not occur until the late evening rump meeting (about 20 minutes) of seven ExComm members, chosen and invited by the president himself. (JFK failed to activate the tape recorder and we will never know if he acted deliberately or simply forgot.) Secretary of State Dean Rusk, finally acknowledging the president’s determination about giving up the missiles in Turkey, suggested requiring that the Soviets keep the swap secret; the president accepted this recommendation and everyone finally acquiesced—however reluctantly. The president, in short, never let go of “the leadership mantle.” As Barton Bernstein observed, “they were the president’s men and he was the president.” [AV369]

 

It was JFK himself who first utilized the TP myth. Just hours after Khrushchev had agreed on 10/28 to the removal of the missiles in Cuba, the president phoned his three White House predecessors (Eisenhower, Truman, and Hoover) and skillfully lied to them, claiming that Khrushchev had retreated from the 10/27 missile trade proposal and had agreed, in the end, to remove the Cuba missiles in exchange for a non-invasion pledge. Eisenhower, who had dealt with Khrushchev, was skeptical and asked if the Soviet leader had demanded additional concessions; JFK coolly repeated the contrived administration cover story. The same version was fed to a gullible press corps and quickly became the conventional wisdom, later enshrined in Thirteen Days. [AV388]

 

Hayes criticizes my work for “dismissing the accounts of early [missile crisis] historians such as Schlesinger as ‘profoundly misleading if not out-and-out deceptive.’” [HY476] This accusation is irresponsible as well as false. First, the quoted passage actually refers to one document from the first day of the ExComm meetings found in RFK’s papers by Schlesinger (granted special access by the family in the 1970s). Second, I explicitly warned readers that “Schlesinger could not have known the full context of the RFK quote” at the time because the tapes were still classified. My judgment has nothing whatsoever to do with the ‘early [missile crisis] historians.’ If there is deception here, the deception was neither Schlesinger’s nor mine. [AM34-5]

 

“Historians such as Sheldon Stern,” Hayes maintains, “have argued that President Kennedy ‘bore a substantial share of the responsibility’” for precipitating the crisis. Hayes, however, chooses to call the missile crisis one of the Kennedy administration’s “principal moments of glory” and “a heroic and ingenious defense against Soviet aggression.” [HY476]

 

This “moment of glory,” “heroic and ingenious” language is unprofessional advocacy, bordering on hagiography, and is particularly baffling because there is a huge amount of evidence (including in Soviet archives) which confirms Khrushchev’s claim that the missiles were sent to Cuba to defend Castro against a second US-backed invasion. Hayes, nonetheless, dances around RFK’s dominant role in the Special Group Augmented and Operation Mongoose, which in reality aimed “to undermine the Cuban regime and economy by blowing up port and oil storage facilities, burning crops (especially sugarcane) and even disabling or assassinating Castro himself. … It became the largest clandestine operation in CIA history up to that time, ‘involving some 400 agents, an annual budget of over $50 million.’” [AV15] Hayes acknowledges that RFK was the president’s “eyes and ears in Mongoose,” [HY495] but otherwise ignores RFK’s fervent leadership role in that effort.

 

“Stern,” Hayes complains, “continues to quote a second-hand exchange between RFK and Kenneth O’Donnell, JFK’s special assistant and confidant during the crisis, to undermine the veracity of RFK’s memoir Thirteen Days.” After reading the manuscript, “O’Donnell is said to have exclaimed, ‘I thought your brother was president during the missile crisis!’, while RFK replied, ‘He’s not running [for office], and I am.’” Hayes insists that this account “by someone who didn’t participate in most of the ExComm meetings should surely not be given so much prominence.” [HY478] This is an apples and oranges argument: the remark is not about the meetings or the crisis, but instead about O’Donnell’s shrewd insight into RFK’s personal, political motives in writing his memoir. (Of the four people present, the surviving two I consulted vividly recalled and confirmed each other’s account.)

 

That ambition is precisely what O’Donnell, known for his candor and directness, immediately perceived and RFK promptly admitted. RFK initially intended this crisis memoir for publication during JFK’s 1964 reelection campaign, but changed his purpose after Dallas. Bobby’s ambition, in fact, had even surfaced during the crisis itself. On October 29, Ambassador Dobrynin gave the attorney general a letter from Khrushchev to the president which specifically mentioned the missile trade. RFK consulted with JFK and returned the letter, reminding Dobrynin that the swap was to remain secret—and explaining that he personally could not “risk getting involved in the transmission of this sort of letter, since who knows where and when such letters can surface or be somehow published—not now, but in the future…. The appearance of such a document could cause irreparable harm to my political career in the future.” [AV403] The O’Donnell/RFK exchange is an entirely legitimate nugget of historical evidence and Hayes’ objection is disingenuous special pleading. 

 

“Critics such as Stern,” Hayes continues, [HY483-4]

 

far from viewing RFK as a leader of the doves (through his support for the blockade route), point to the primary source material and advocate his role as a dangerous hawk advocating invasion from the outset. 

 

In evidence for this assertion, Stern directly quotes RFK: ‘We should just get into it, and get it over with and take our losses if [Khrushchev] wants to get into a war over this.’… Stern argues that RFK’s memoir of the crisis ‘was an effort to manipulate this history of the missile crisis and invent the past.’ A ‘consistently hawkish’ figure emerges from Stern’s analysis of RFK, ‘one in sharp contrast to his brother.’

 

I don’t “view” RFK as “the leader of the doves” because he was not; he accepted the blockade only after JFK publicly announced it. I plead guilty as charged to pointing “to the primary source material,” the tapes, to prove conclusively (not to “advocate”) that RFK was a hawk on the first day and was still pressing to “take Cuba back” militarily on the thirteenth day. The “consistently hawkish figure” that rankles Hayes was not invented by “Stern’s analysis”—but derived from RFK’s own words captured on the ExComm tapes, words which he spun very differently in his memoir. The assertion that ‘I was there’ is most often a red flag for historical manipulation, not a superior form of validation. History based on individual memory rarely rises above the personal motives for writing it. Thirteen Days and the tape recordings cannot both be right, and there is absolutely no question which account is reliable.

  

Hayes, however, cites a specific case to allege that “this analysis is skewed, for Stern quotes RFK out of context, paring back RFK’s words selectively to support his argument.” The indented quote below, he claims, “actually begins with a series of qualifications, as RFK tentatively hedges his comments.”

 

Now [think] whether it wouldn’t be the argument, if you’re going to get into it at all, whether we should just get into it, and get it over with, and take our losses. And if [Khrushchev] wants to get into a war over this . . . Hell, if it’s war that’s gonna come on this thing, he sticks those kinds of missiles in after the warning, then he’s gonna get into a war over six months from now, or a year from now…. [HY483]

 

Accusing a scholar of “selectively” using evidence “to support an argument” is a serious personal and professional accusation—especially when untrue. This passage is not, as Hayes is determined to “prove” in spite of the ExComm tapes, some one-off, devil’s advocate musing by Bobby before he settled on a dovish line; rather, it is typical of his approach through the entire crisis. I just relistened to this tape and there is no question that before the “get into it” comment RFK is overtly scoffing at all suggestions of more limited action (such as air strikes) rather than invasion. Indeed, adding the “Now [think]….” sentence makes no change whatsoever in the meaning of his remarks. He is not “tentatively” hedging anything. In fact, Bobby makes his position abundantly clear minutes later, suggesting that the administration could stage an incident that would justify military intervention: “You know, sink the Maine again or something.” I included the ‘sink the Maine’ statement later in my narrative—yet Hayes leaves it out entirely. A reader might reasonably ask just whose version is skewed and selective.

 

Equally important, the indented quote above first appeared in the 1997 May-Zelikow transcripts, which I was the first to publicly expose as seriously flawed and unreliable. [AV, Appendix, 427-439] Nothing in the Hayes articles suggests that he is even aware of the ensuing controversy. No historian genuinely familiar with the crisis literature would trust the 1997 version, which the editors themselves finally acknowledged has been superseded by the much-improved 2001 Miller Center transcripts.

 

Hayes also accepts RFK’s claim in Thirteen Days that “many meetings” of the ExComm took place “without the President.” [HY491] I listened to every recorded meeting numerous times over two years (including the crucial “post-crisis” meetings that continued into late November)—as well as checking passages in the original White House master recordings against the copies used for research and studying the minutes of the unrecorded meetings. JFK definitely attended every ExComm meeting, except during brief campaign trips to New England (10/17) and the Midwest (10/20).  

 

The November post-crisis lasted longer (32 days) and required more recorded meetings (24 vs. 19) than the iconic Thirteen Days. [AV403-12] The naval blockade remained in place and tensions remained high after 10/28. Negotiations at the UN broke down over Soviet resistance to removing the IL-28 nuclear bombers from Cuba and the deadlock was not resolved until 11/20. JFK then ordered the lifting of the blockade, but not before RFK persuaded him to drop the non-invasion pledge: “I don’t think,” RFK insisted, “that we owe anything as far as Khrushchev is concerned.” The president worried that it would “look too much like we’re welching” on our promise and added that retaining the pledge might “make it politically less difficult for Khrushchev to withdraw his conventional forces from Cuba.” In the end, however, JFK agreed to his brother’s tougher stance. Bobby was Bobby, hawkish to the last. Hayes never even mentions the November post-crisis—in effect leaving out everything after the 9th inning in the account of an extra-inning game—a fitting metaphor for these essays. [AV410]. (1)

 

(1) When I began listening to the tapes I did not expect that they would fatally undermine the veracity of Thirteen Days. I had worked in RFK’s presidential campaign, convinced that he was a very different man than he had been in 1962. However, as a historian, I had to confront the evidence on the tapes. I admired Bobby in 1968, and still do.

 

Beyond the Vote, The Suffragists Helped Launch Modern Business Franchising

 

As we begin to celebrate the 100th anniversary of the 19th Amendment, which gave women the right to vote, we should also acknowledge the role of the suffrage movement in supporting the launch of modern franchising, begun by Martha Matilda Harper in 1891.

 

Harper was born near Oakville, Ontario, Canada, and at age seven she was bound out into servitude. For the next twenty-five years she remained a servant, but was determined to find a path out of her poverty-stricken world. Fortunately, her last Canadian employer was a holistic doctor who taught her about healthy hair care, including demonstrating the power of his proprietary hair tonic. As a result, Harper had Rapunzel-like floor-length hair. On his deathbed, the doctor bequeathed Harper the hair tonic formula. With it, she left Canada in 1882 and emigrated to Rochester, NY, a hotbed of entrepreneurial innovation and social advocacy.

 

Harper remained a servant for six more years until she opened Rochester’s first beauty salon for women with her lifetime savings of $360 in 1888.  That was the same year George Eastman launched the KODAK camera with one million dollars of venture capital.  Harper located her salon in the most fashionable office building in Rochester, where people banked, visited art galleries, took music lessons, and conducted various business transactions. It happened that Susan B. Anthony’s trusted lawyer, former Congressman John Van Voorhis, was located in that building.

 

Harper surely knew of that building from serving Rochester’s society folks and wisely chose to locate there. However, Daniel Powers, the building owner, was reluctant to allow Harper to start such a questionable business, fearing it would attract prostitutes and the wrong kind of women to his fancy building. Cleverly, Harper enlisted Van Voorhis to advocate for her and he succeeded. Van Voorhis additionally provided two key contacts to support Harper: his wife, a society lady, and his former client Susan B. Anthony.  Both ended up playing major roles in Harper’s success.

 

A keen observer and well taught in the art of pleasing, Harper invented the first reclining shampoo chair and cut out a neck rest along the sink’s rim to make hair shampooing a more comfortable process.  She also invited mothers bringing their children to music lessons next door to rest their weary feet in her salon. Astutely, she even posted a picture of her flowing floor-length hair on the exterior of her office door to attract customers.  With this inventiveness, she wowed Rochester women, recruiting Anthony and many others.

 

It is said that Anthony and Harper had long discussions about women’s rights and the cause of full equality during treatments. During those discussions, and with Anthony’s advocacy, suffragists began to patronize her shop and became a growing niche of Harper’s customer base. As Rochesterian Mrs. Josephine Sargent Force indicated, “I was brought up a suffragist and was interested in Miss Harper and her work and was interested in anything women were doing.”

 

Suffragists understood the bigger issues facing women. In the 1848 Declaration of Sentiments, Elizabeth Cady Stanton and others spoke about the restrictions on women’s education and job prospects. These limitations assured that women were destined to earn meager wages as servants or factory workers. In addition, the Declaration took on coverture, a legal doctrine that limited married women’s control over their wages, children, and property. Stanton and the suffragists were envisioning a world where women would be independent of such legal restrictions. Susan B. Anthony understood all of this. She understood the vote was simply a first step to emancipating women. Thus she declared, “Every woman needs a pocketbook,” suggesting that the control of money was essential for women’s equality.

 

Harper was likely influenced by this philosophy as she expanded economic opportunities for herself and other women. Enter Bertha Palmer, the Chicago socialite, who was lured to Rochester to experience the Harper Shop. She loved the experience and told Harper that she wanted such a shop in Chicago in time for the 1893 Chicago World Exposition, where she would be President of the Women’s Division. Boldly, Harper countered, informing Palmer that she would need to deliver a written commitment from 25 of her best friends assuring future patronage at a Chicago Harper shop. Palmer delivered, and then Harper had to figure out how to expand.

 

Cleverly, Harper used the Christian Science Church structure as something she could replicate, with herself as the strong female leader, her Rochester headquarters like the Mother Church in Boston, and satellite shops around the world that followed Harper’s instructions and used her products. For the first 100 Harper shops, Harper put only poor women into ownership positions, making her a pioneer of social entrepreneurship as well. Ultimately, there were 500 such franchises around the world. British royalty, the German Kaiser, U.S. Presidents and their First Ladies, suffragists, and luminaries were all loyal Harper customers.

 

Harper’s success was recognized around the world. When she died in 1950, even the NY Times wrote a two-column obituary about Harper, citing her relationship with Anthony. Unfortunately, Harper’s achievements and the role of the suffragists in creating this new business model have been forgotten. It is time to credit Harper and the suffragists with the fastest-growing segment of retailing—franchising. As Susan B. Anthony understood, though, that was only part of the goal. The other was to assure that all women had their own purse for financial independence and choice. Perhaps by August 26, 2020, real pay equity for women and widespread recognition for the forgotten achievements of women like Harper will have been established. Then the ultimate goal of the suffragists will have been met and we can all cheer!

 

©Jane R. Plitt 2019

Sir Ian Kershaw on His Latest Book, Brexit, and the Future of Europe Sir Ian Kershaw, FBA (born 29 April 1943) is a British historian and author whose work has chiefly focused on the social history of 20th century Germany. He is regarded by many as one of the world's leading experts on Adolf Hitler and Nazi Germany, and is particularly noted for his monumental biographies of Hitler.

 

David O’Connor:  Your new book was a massive undertaking, covering so many issues in Europe’s post-war history.  How did you go about setting up a framework for your analysis?  

Sir Ian Kershaw: The first step was to acquire an overview of the period, its most important developments, changes and so on. Secondly, I then worked out the chapter divisions, and the subdivisions. Thirdly, I explored the most important literature on the relevant themes. Finally (and obviously the difficult part), I attempted the actual writing.

 

DO: You use the term “matrix of rebirth” to explain how Europe was able to recover so well from the devastation of World War II.  What are some of the key features of this “matrix,” and which do you consider the most important?  

IK:  The ‘matrix of rebirth’, as I called it, arose from the condition of Europe at the end of the Second World War. It comprised, as its fundamental premise, the elimination of German great-power ambitions. A second component was the territorial and geopolitical reordering of central and eastern Europe under the aegis of Soviet power. Thirdly, national interests were now subordinated to the interests of the two new superpowers – in western Europe the USA, in eastern Europe the Soviet Union. A fourth element was the extraordinary economic growth that, in western Europe, was a major contribution to the consolidation of pluralistic democracy. Finally, and perhaps the most important factor of all, the availability to both superpowers of a growing arsenal of devastating nuclear weapons acted as a vital deterrent to another war in Europe. 

 

DO: Many of the tensions that exist in the European Union today—especially on the issue of national sovereignty—were present from the beginning of European economic integration with the formation of the European Coal and Steel Community soon after World War II. Please explain some of the main motivations for integration and how objections to it were initially overcome.  

IK: It was widely felt in the post-war years that a new basis for friendship, cooperation and supranational organization was needed to overcome the extreme nationalism that had produced such catastrophic conflict, and to rule out any prospect of a return to war in Europe. Once the Soviet Union had replaced Germany as the major international threat in the eyes of western leaders, the path opened up for the first steps towards European integration to bolster security and promote prosperity. Alongside the idealism, the different but complementary national interests of France and West Germany were served by the creation of a common market in a trading bloc that also included the Netherlands, Belgium, Luxembourg and Italy. A tension between national and supranational interests was present from the start. But the economic success of the trading bloc in the early years overcame many of the objections, even if advances towards further integration were slow and often difficult.

 

DO: The division between Eastern and Western Europe is a prominent issue throughout your book.  The two sides had very different cultures, social structures, and economic and political systems, yet the Cold War was a remarkably stable period in European history.  What are some of the most important factors that contributed to this stability and absence of outright warfare?  

IK: Crucial, as already mentioned, was what came to be labelled ‘mutually assured destruction’ of the superpowers, both of which presided over immense nuclear capability. The nuclear deterrent was represented organizationally by the existence of NATO in western and the Warsaw Pact in eastern Europe. Stability in the Cold War was always under the shadow of a potential nuclear conflict. But once the Berlin Wall was built in 1961 and the Cuba Crisis the following year ended without catastrophe, the likelihood of nuclear confrontation in Europe greatly diminished. Meanwhile, the power of Soviet repression was sufficient to contain, if sometimes with difficulty, instability in the eastern bloc, and the West generally accepted that Soviet domination of eastern Europe could not be ended. This in itself contributed to stability in Europe.

 

DO: You cover a lot of what you call “impersonal dynamics” (demographics, economic growth, etc.) but in the book’s “Foreword” you also note the importance of individual leaders who made important decisions that shaped their eras. One of the most prominent figures you examine is Konrad Adenauer. Please explain how he helped the Federal Republic of Germany become such a powerful force in Europe’s post-war economic recovery and an anchor in NATO.

IK: Adenauer is certainly among the individuals who helped to shape Europe’s postwar history. He was crucial in ensuring that West Germany turned to the West and wedded its future to its membership of NATO, to west European integration and to friendship with the traditional enemy, France. What seems today to be an obvious step was at the time highly controversial, since the turn to the West ruled out re-unification as a realistic goal – something that was greatly unpopular with the oppositional Social Democrats and much of the population which preferred a re-unified and militarily neutral Germany to commitment to the American-dominated capitalist and militarized West.

 

DO: One of the key themes you explore is how Europeans were greatly affected by events outside of Europe in the post-war era.  Perhaps the most jolting and consequential example was the oil embargo imposed by Middle Eastern countries in the 1970s.  How did this shake the European economy and the confidence of Europeans in general?  

IK: The extraordinary economic boom that had lasted for more than two decades was already fading when Europe was hit by the first oil-shock in 1973 in the wake of the Yom-Kippur War in the Middle East, followed by a second after the Iranian Revolution of 1979. The double oil-crisis in countries by now so heavily dependent on oil led to high rates of inflation accompanied by a significant rise in unemployment. States struggled to adjust to an abruptly altered economic climate and increased political volatility. All at once, it seemed, the optimism that had characterized the first post-war decades had evaporated. The oil crises inaugurated a new era in Europe, east and west.

 

DO: The expansion of the European Community and later the European Union has often been controversial, whether it was the integration of Spain, Portugal, and Greece in the 1980s or the entry of former communist countries from Eastern Europe.  In each case, the European Community allowed for expansion even if the nations didn’t meet the economic standards needed for entry.  How did geopolitical considerations shape these decisions to ignore prescribed preconditions?  On balance, was the expansion beneficial?  

IK: Spain, Portugal and Greece had all recently emerged from dictatorship – in the first two cases lasting for decades – when they were integrated into the European Community. The political consideration was that integration would significantly help to consolidate democracy in those countries, and so it proved. A similar imperative lay behind the readiness to integrate former communist countries in eastern Europe. Here, too, the benefits have greatly outweighed the disadvantages of incorporating less developed economies, even though Hungary and Poland, especially, have come to pose some new challenges to the liberal values of the European Union.

 

DO: Mikhail Gorbachev is another one of the key figures who gets a lot of your attention in the book. One of the most important parts of your analysis of the era of perestroika is the impact that the Chernobyl nuclear disaster had on Gorbachev. How did Chernobyl affect his decisions at this critical point in history?

IK: No individual had a greater impact on European (and global) change in this era than Gorbachev. Chernobyl, little over a year after Gorbachev had acceded to power in the Soviet Union, convinced him that the Soviet system as a whole was rotten, and needed root-and-branch reform.

 

DO: In addition to explaining the roles of leading politicians you also provide examples of less well-known figures who played great parts in momentous events, including a Polish priest named Jerzy Popieluszko.  Who was he and how did he help strengthen opposition to the Polish government in the 1980s?  

IK: Popiełuszko was a young Catholic priest who had been vociferous in support of Solidarity, the trade-union opposition to the Polish regime which had been banned under the declaration of martial law imposed in December 1981. In October 1984 he was kidnapped and murdered by members of the state security police. The murder of Popiełuszko led to an enormous outburst of popular anger, reflected in the huge numbers attending his funeral. Indirectly, the reaction to the murder convinced the regime that concessions had to be made to the opposition. By 1986 an amnesty for all political prisoners arrested under martial law was granted.

 

DO: The fall of communism in Eastern Europe ushered in incredibly high expectations for Europe and indeed the world, yet despite the enthusiasm, there were many obstacles standing in the way of Eastern European prosperity and hopes for a peaceful transition.  Which of the former communist states did the best job making the change to democratic government and market-based economic systems? What made them successful?  Which of them fared the worst and why?  

IK: The difficult transition was best managed by the German Democratic Republic (though, of course, the incorporation in the Federal Republic made this a special case), and by Poland. In the latter case, the change to trade liberalization, convertible currency, a fully-fledged market economy, and extensive privatization took place extremely rapidly, through what was labelled ‘shock therapy’. Poland’s debts were effectively written off and the country benefited from assistance from the International Monetary Fund and the European Union. By 1992 Poland was recovering strongly, though experts differ over whether this was on account of the ‘shock therapy’ itself. The slowest countries to adapt were Romania, Bulgaria and Albania, which had been relatively backward under Communism, with poor infrastructure, low levels of industrialization, weak civic culture, and high levels of corruption and clientelism.

 

DO: Why did you describe Helmut Kohl as a “disciple” of Adenauer?  How do you assess Kohl’s impact on Germany and on European integration?  

IK: Kohl was particularly keen to continue Adenauer’s policy of binding Germany to the West and consolidating the friendship with France as the basis of policy towards Europe. Kohl’s reputation as the Chancellor who brought about unification is guaranteed, and as such his impact on Germany was enormous. Probably, once the Wall had fallen unification would have happened anyway in the relatively near future. But Kohl’s negotiations, especially with Gorbachev, were important. Kohl, like Adenauer, was a fervent advocate of European integration. His legacy here was the agreement that he and the French President, Mitterrand, reached at the Maastricht conference in 1991 to introduce a single European currency, the Euro.

 

DO: In the final chapter, which you call “Global Exposure,” you analyze several threats to twenty-first-century Europe that were the result of developments outside the continent, including the financial crash of 2008 and resulting recession, the rise of nationalism, mass migration of refugees from the Middle East, and the rise of Islamist terrorism.  We don’t have time to cover each so I’d like to focus on one and put it into historical perspective.  How is Islamist terrorism different from other terror campaigns in Europe like those of the IRA, ETA, and the Red Army Faction? 

IK: IRA and ETA terrorism, though the consequences for the victims were horrendous, had limited aims of national independence. The Red Army Faction’s nebulous objective was the destruction of capitalism and the West German imperialist-fascist state (as they saw it).  The terrorist attacks of these organisations were directed in the main at political, military and economic representatives of the states that they wanted to destroy – though, of course, many innocent bystanders were sometimes caught up in the attacks. Islamist terrorism, in contrast, had an essentially unlimited objective – the destruction of western values and their replacement by those of fundamentalist Islam. Civilians were directly targeted in order to make the greatest impact. And the perpetrators were ready to sacrifice their own lives for the cause. 

 

DO: When you concluded the book in August 2017, Brexit was a simmering issue, and you had some rather witty yet unkind things to say about then Foreign Minister Boris Johnson.  What are your thoughts about him now that he has become Prime Minister?  How do you see Brexit unfolding and does it pose an existential threat to the EU?  

IK: Johnson should not be taken for a fool because he sometimes acts like one. He is a clever, calculating politician. He hopes to attain legendary status in Britain as the savior of the Conservative Party who achieves Brexit and restores British greatness. He became Prime Minister only on the votes of around 160,000 Conservative Party members and lacks any popular mandate. Nevertheless, he has surrounded himself with a cabinet committed to leaving the EU on 31 October, if need be without a deal, which all experts see as damaging for the EU but far more so for the UK. Both the EU and the UK will survive even ‘no deal’, but lasting, and unnecessary, harm will have been done. How the political drama will unfold over the coming few weeks is impossible to foresee with any clarity.

 

DO: One of the striking things about your book is how well it illustrates the durability of the institutions that were created in the post-war era, despite facing numerous crises over the last seventy years.  Now with Russia’s meddling in the domestic politics of other countries, Eastern Europe’s reversion to authoritarianism and numerous other problems, are you optimistic about the future of the EU?  

IK: The EU has come a long way and, as Brexit shows, the networks built up over previous decades are extremely complex. What has been achieved will go a long way to sustaining the EU in the future. As it has done so often in the past, the EU will have to adapt to change and the current organizational framework may be reformed and in some ways reconstituted in years to come. But the prospects for the EU’s future remain bright, despite Brexit and other current economic and political problems. 

Hindus and Muslims clash 72 years after Britain left India

 

Britain's partition of India in 1947 to create two independent countries wreaked havoc in human lives and misery. It killed two million people, according to various estimates, and displaced 14 million. Its legacy endures in the two siblings of midnight: nuclear-armed Pakistan and India, which have fought two major wars since the separation and are still at loggerheads. Was this inevitable?

 

This question assumes greater relevance today in light of India's recent decision to annex the Muslim-majority Kashmir state. On August 5, keeping Kashmiri Muslim leaders under house arrest and deploying tens of thousands of soldiers in heavily fortified Kashmir, Delhi snatched away the special rights granted to Kashmir by India's constitution (its own flag, its own laws, and property rights) in a blitzkrieg exercise lasting a matter of hours.

 

Kashmir is a picturesque Himalayan region that encompasses roughly 135,000 square miles, almost the size of Germany, and has a population of about 18 million. India controls 85,000 square miles, Pakistan 33,000 and China 17,000. Both Pakistan and India claim the entire state as their own and have fought two major wars over it since the British left India in 1947.

 

In 1948, after the two nations first fought over the territory, India raised Kashmir in the UN Security Council, which called for a referendum on the status of the territory. It asked Pakistan to withdraw its troops and India to cut its military presence to a minimum. A ceasefire came into force, but Pakistan refused to pull out its troops. Kashmir has remained partitioned ever since.

 

By scrapping Kashmir's special status and dividing the state in two, India's Prime Minister Narendra Modi has taken a dangerous step toward making India an ultra-nationalist Hindu nation. Pakistan's Prime Minister Imran Khan has threatened war again, even nuclear war. China, which occupies parts of the state, denounced India's action as “unacceptable.” The warring nations might be just a miscalculation away from a nuclear winter.

 

U.S. SOLUTION WENT NOWHERE

 

The United States has pushed the warring neighbors since the Kennedy administration to make the existing division their permanent border, but the idea has gone nowhere because of a fatal flaw: it gives nothing to the victims of this tragedy, the Kashmiris. India loves the U.S. idea, but Pakistan wants no part of it, and the Kashmiris outright hate it.

 

An examination of the major factors that led to the fateful partition on 14 August 1947 helps us understand what happened then and what is happening now. Apart from intricate socio-economic and political reasons, one thing that contributed heavily to the division was the mutual distrust between the Indian National Congress and the Muslim League, British India's two major political outfits. Congress leaders Jawaharlal Nehru and Sardar Vallabhbhai Patel both doubted the sincerity of their League counterparts, Mohammad Ali Jinnah and Liaquat Ali Khan. Likewise, Jinnah and Liaquat never trusted Nehru and Patel.

 

U.S. diplomatic cables from New Delhi on conversations with these leaders during a crucial phase in India's freedom struggle give an interesting insight into what lay behind the tragedy. One such cable reached the State Department on 14 December 1946 from Charge d'Affaires George Merrell, then the highest-ranking American diplomat in India, who reported on his talk with Nehru the night before. Interestingly, he noted that Nehru in his remarks painted Jinnah as a Hindu and identified himself more closely with Muslims.

 

The United States pushed Britain to leave India sooner once London had been weakened by World War II. Washington feared that if the British prolonged their rule through repression, Indians would become radicalized and tilt toward communism. America wanted to keep India united, too. The Soviet Union, by contrast, supported India's partition in an attempt to create multiple entry points for spreading communism.

 

WAS JINNAH REALLY HINDU?

 

While talking with the U.S. diplomat, Nehru "embarked on restrained but lengthy attack on Jinnah who he said had Hindu background and lived according to Hindu law, Nehru himself being imbued with more Muslim culture, linguistically and in other ways, than Jinnah," Merrell wrote.

 

On Pakistan's creation, Nehru was baffled by Jinnah's posture. Congress had endeavored to learn what Jinnah wanted, but never received satisfactory replies. Jinnah never even adequately defined Pakistan. Nehru believed that Jinnah sought some changes, but did not want a democratic government. He argued that prominent Leaguers were landholders who preferred antiquated land laws, and hence British rule.

 

The British, however, believed that Jinnah embraced the Pakistan idea for bargaining purposes, but by the mid-1940s the movement had gained such momentum that neither he nor anyone else could apply the brakes.

 

The crux of the internal problem that India faced before the partition stemmed from differences between Congress and League as to the conditions under which provinces would join or remain out of sub-federations in northwest and northeast India.

 

"I am confident that if the Indian leaders show the magnanimous spirit the occasion demands, they can go forward together on the basis of the clear provisions on this point contained in the constitutional plan proposed by the British Cabinet Mission last spring to forge an Indian federal union in which all elements of the population have ample scope to achieve their legitimate political and economic aspirations," Merrell wrote to Washington.

 

DID NEHRU FORESEE CARNAGE?

 

Britain wanted the two major political parties to jointly frame India's constitution as a prelude to independence. This idea resulted from the British Cabinet Mission to India in 1946. The mission proposed a united India, with separate groupings of Muslim-majority provinces and of Hindu-majority provinces. These groupings would have given Hindus and Muslims parity in the Central Legislature.

 

Congress abhorred the idea, and the League refused to accept any changes to the plan. The parity that Congress was loath to accept formed the basis of Muslim demands for political safeguards built into post-British Indian laws to prevent absolute rule of Hindus over Muslims. With the parties at an impasse, the British proposed on 16 June 1946 to divide the country into a Hindu-majority India and a Muslim-majority Pakistan.

 

This resulted in an unprecedented bloodbath and mass migration. In the riots in the Punjab region alone, as many as half a million people perished, and 14 million Sikhs and Muslims were displaced.

 

No one knows for sure whether Nehru anticipated the carnage. He should have, though, because his comrade Moulana A. K. Azad had cautioned that violence could erupt if India were divided. Nehru remained convinced that the League would ultimately join the Constituent Assembly.

 

He doubted, however, that the League would ever work constructively in a coalition government in a free India. Congress never liked the Cabinet Mission proposal but, in the interest of a peaceful and fair settlement, had formed the interim government before the partition. This decision was based on an understanding that the League would cooperate. But League members said they had joined the cabinet to fight. If they entered the Constituent Assembly, where Muslims held 73 seats against Congress's 208, "it would be with the purpose of wrecking it," Nehru vented.

 

NEHRU COULD HAVE PREVENTED PARTITION

 

Still, had Nehru accepted Jinnah's demand for parity in the federal legislature and regional groupings as outlined in the British Cabinet Mission plan, India would have remained united. He could have served India better by following President Abraham Lincoln's policy during the American Civil War.

 

One sticking point in the partition plan was the division of Bengal and Punjab, the two Muslim-majority states with a large number of non-Muslims. Regarding Bengal's status, on 11 December 1946, Merrell talked with Chakravarti Rajagopalachari, an interim cabinet member and a favorite of both Nehru and M. K. Gandhi, India's paramount independence leader. He told the envoy that "Congress could not possibly agree to [the] interpretation of cabinet proposals which would inevitably place millions of Hindus under Muslim rule, particularly in [the] Bengal-Assam group." 

 

Asked how the basis for a democratic government could be established as long as mutual distrust between Hindus and Muslims exemplified by this view persisted, Rajagopalachari evaded the issue.

 

The United States favored India's early emancipation and pushed Britain toward this end. Washington strove to persuade Nehru to accept the Cabinet Mission plan that envisaged a weak federal administration and strong regional governments for free India.

 

"We have found that a central [government] initially with limited powers gradually acquires, as experience demonstrates necessity therefor, the additional authority which it must have to meet problems of the Federal Union," the State Department advised Nehru. "Our hope that Congress accept clear implications Brit Cabinet Mission plan...on reciprocal undertaking by Muslim League to work loyally within [the] framework [of] Indian Federal Union, subject only to reopening constitutional issue after 10 years of experiment." 

 

MUSLIM LEAGUE DISTRUSTED CONGRESS

 

Muslim League's views on its difficulty with Congress were articulated by Liaquat Ali Khan during a discussion with Merrell on 27 December 1946. Muslims, Liaquat said, "would not agree to independence [from British rule] unless adequate safeguards for minorities were provided." 

 

He expressed grave doubts whether Congress would accommodate Muslims. "Liaquat ...discussed at length his conviction that Congress leaders have no intention of trying to work Cabinet mission plan conscientiously but are determined to seize power without regard for Muslim rights," Merrell wrote. 

 

As evidence of Nehru's lack of interest in Congress-League cooperation, Liaquat pointed out that Asaf Ali was appointed India's first ambassador to the United States without consulting League members of the interim government. Liaquat learned about the appointment from press reports in London. Asaf Ali, he said, did not command the respect or confidence of Muslim Indians.

 

Furthermore, Liaquat added, as soon as League joined the interim government, he proposed two League representatives—Begum Shah Nawaz, a Punjabi lawmaker, and Mirza Abol Hassan Ispahani, a Constituent Assembly member who later became Pakistan's first ambassador to Washington—be appointed to the UN delegation. Nehru refused on the ground that the number was limited to five and the appointment of these two would mean replacing the two who had already prepared themselves for work at the UN.

 

When League joined the interim government, Liaquat proposed that in the interest of efficiency and cooperation, questions concerning more than one department be discussed by the ministers concerned prior to full cabinet meetings, regardless of whether those ministers were Congress or League members. Nehru again refused, arguing it was preferable to thrash out all questions in full cabinet meetings. When Merrell asked whether all votes in cabinet meetings fell along party lines, Liaquat answered in the affirmative.

 

In reply to a question from Merrell, Liaquat said he was convinced Gandhi had no desire for Hindu-Muslim cooperation; he was working for Hindu domination of India—to be attained through violence, if necessary. When the envoy further asked whether Liaquat believed that Gandhi's activities in East Bengal were a deliberate attempt to embarrass the Bengal government and to divert attention from Bihar, where communal violence had killed thousands of Muslims, he said "there was no question about it." 

 

Gandhi had gone to East Bengal to restore communal harmony after a series of massacres, rapes, abductions and forced conversions of Hindus as well as looting and arson of Hindu properties by Muslims in October–November 1946, a year before India won freedom. However, his peace mission failed to restore confidence among the survivors, who couldn't be permanently rehabilitated in their villages. Meanwhile, Congress accepted India's partition, and the mission and other relief camps were abandoned, making the bifurcation a permanent feature in South Asia.

 

MODI'S INDIA MIMICS HITLER'S GERMANY 

 

Following the partition, Kashmir won special status as a precondition for joining India. By scrapping Kashmir's decades-old special autonomy, Modi has taken a risky step toward implementing the dream of a right-wing Hindu extremist, the late V. D. Savarkar. Sitting in a prison cell on the Andaman Islands in the Bay of Bengal in the mid-1920s, the convicted violent revolutionary turned nationalist drew up his solution to the vexing issue of India's minorities, much like Adolf Hitler's final solution to the Jewish question. It is interesting to note that both men came up with their ideas at almost the same time and under similar circumstances: both were in prison for political violence.

 

In Savarkar's Hindudom, Muslims and Christians were unwelcome, as were the Jews in Hitler's Third Reich. Savarkar disliked Muslims and Christians because of their allegiance to Mecca and Rome; they worshiped foreign gods and had no cultural affinity with Hindustan. Buddhists and Sikhs were not as pure as Hindus, but they were still acceptable because their religions originated in Hindustan. Hitler branded Jews as Gemeinschaftsfremde (community aliens) and condemned them as communists who aspired to dominate the world.

 

Savarkar initially wanted to convert all Muslims and Christians back into Hinduism. But he faced a significant obstacle. He could convert them, but could not arbitrarily decide their caste. A Hindu must belong to a hierarchical caste, which he acquires through birth only. Hindu religion forbids assigning a caste. 

 

To overcome this barrier, he revised his idea. First, he came up with a new identity for himself: he was a Hindu, not an Indian. Then he figured that his motherland was Hindustan, not India. Hindustan extends from the Himalayas to the Indus River and boasts a 5,000-year-old rich culture that influenced a vast number of people from Greece to Japan. India, by contrast, is a parochial concept that separates Hindus from their ancient heritage; it was championed by the nationalists who, unlike the orthodox Hindus, wanted an independent and united country for all Indians, regardless of their religion.

 

SAVARKAR'S VISION TAKES CENTER STAGE

 

Savarkar, an atheist who labeled his vision as nonreligious and cultural, was unwilling to give the Muslims a separate homeland next to his Hindustan. He feared that even though they were only 25 percent of the total population, they could still someday reconquer Hindustan if allowed to have their own country. He was very much aware that the Muslims were a small band, too, in 712 when they conquered India and eventually built a vast empire. 

 

He feared that next time they would be in a much stronger position to repeat their past success because they would be supported by other Muslim nations. To nip that possibility in the bud, he favored the creation of Israel. He saw the Jewish state as a barricade against the Muslim Arab world.

 

Savarkar dreaded a Muslim resurgence so much that he wanted British rule in India to continue. He sought only dominion status for Hindustan to keep it under the British military umbrella. Only Britain, he felt, was powerful enough to keep the Muslims at bay if they ever tried to invade Hindustan again.

 

But to his chagrin the nationalist tide swept India, as independence stalwarts like Gandhi, Nehru and Azad pressed the colonial power to quit. Savarkar's idea took the back seat, but remained very much alive, even though malnourished.

 

After Prime Minister Indira Gandhi's murder in 1984, the Indian National Congress party, the champion of secular India, fell on hard times; it had no comparably charismatic leader to carry the torch forward. Savarkar's followers gradually gained ground and picked Modi, who was once condemned globally as the mastermind behind a Muslim massacre in his home state of Gujarat, as the reincarnation of their guru.

 

Armed with a huge re-election victory in May, Modi moved full steam ahead to fulfill Savarkar's dream and appease his hardcore anti-Muslim saffron brigade. First, he nullified a centuries-old Muslim marriage law. India's constitution, however, protects the religious laws of other minority groups, and Modi did not touch them, showing his bias against Islam. Even the Mughals and the British left India's religious laws unchanged. India is a nation of 1.3 billion people, 14 percent Muslim and 2 percent Christian.

 

NIRVANA LIES IN SECRET PLAN

 

Modi’s highly controversial and dangerous power grab is unlikely to end the crisis. The nirvana lies in a blueprint secretly drafted a decade ago by two former leaders of India and Pakistan, which was never executed because one of them was suddenly pushed out of office.

 

The idea, developed by aides to former Prime Minister Manmohan Singh of India and former President General Parvez Musharraf of Pakistan through back-channel talks from 2004 to 2007, is the best plan produced in 70 years: a realistic, win-win approach for India, Pakistan and Kashmir alike.

 

Under the plan, India and Pakistan would pull their soldiers out of Kashmir; Kashmiris would be allowed to move freely across the de facto border; Kashmir would enjoy full internal autonomy; and the three parties (India, Pakistan and Kashmir) would jointly govern the state for a transitional period. The final status would be negotiated thereafter.

 

Given the region's history, the Musharraf-Manmohan concept is realistic. It gives the Kashmiris near independence, allows India to maintain sovereignty over Kashmir and lets Pakistan claim it has freed Kashmir from Hindu domination. Compromise is the art of politics, and India must not repeat Pakistan's mistakes in East Pakistan, which led to a war in 1971. Both India and Pakistan must dig themselves out of the mass hysteria of jingoism they have created during the past 70 years over Kashmir.

 

CHAUVINISM POISONS PUBLIC PERCEPTIONS

 

Pakistan's claim over Kashmir is more emotional than material. Pakistan was created based on the concept that Muslim-majority areas of British India would form the Muslim nation. If Pakistan gives up Kashmir, it will void the very ideology that supported its creation and pave the way for its eventual demise, with constituent parts going their own ways. Still, Pakistan has softened its position because it cannot match India's firepower and take over Kashmir by force; Islamabad now wants a face-saving solution that it can sell to the Pakistanis.

 

India, in the beginning, sought to keep Kashmir in its grip to prove that the two-nation theory was wrong. Some attribute this to Nehru's emotional attachment to Kashmir as his birthplace. But over the years India's mindset has taken a different twist. Now it is driven purely by hatred of Muslims, principally because Hindus were subjugated by Islamic invaders for 1,000 years; the orthodox Hindus think the Muslims have polluted Hindu culture. If they could, they would wipe this black spot from the face of Hindustan. Because that is an impossible task, the radical Hindus want revenge, by driving the Muslims out of India or making them subservient to Hindus.

 

Many Kashmiris, meanwhile, nurture a dream of an independent country of their own. They argue that the Kashmiris are the masters of their fate and that both Pakistan and India must respect their universal right of self-determination. This thinking ignores India's security concerns vis-a-vis China, and for this reason the vision of an independent Kashmir will remain elusive.

 

The main problem standing in the way of peace in Kashmir today is chauvinism in both India and Pakistan. It has cost tens of thousands of lives and the prosperity of both nations as well as their neighbors. Modi's Nazi-style extremist party has always opposed a negotiated settlement. It operates on a misguided dream of reuniting the subcontinent into one Hindu nation, if necessary through violence.

 

Because of this faulty doctrine, when Singh invited his predecessor, former Prime Minister Atal Behari Vajpayee, to lead the peace talks with Pakistan, Vajpayee refused, citing stiff opposition from the Bharatiya Janata Party. Indians have a hard time accepting a negotiated settlement because they hold the notion that Kashmir is already theirs, a notion born of decades of hyper-nationalist propaganda by the news media.

 

To achieve lasting peace, the M-squared formula should be revived, even though doing so may be political suicide for anyone who dares, especially in India, where a hysteria of Hindu radicalism now reigns supreme. Still, one of the Himalayan gods must make the sacrifice for the sake of a people who have suffered too much for too long.

“The thinking power called an idea”: Thomas Jefferson and the Right to Patents

 

As the duties of the various members of a president’s cabinet were much less fixed than they are now, Jefferson was often asked by President Washington to do many tasks that would seem strange for today’s Secretary of State. One such task was to oversee an office directing the granting of patents for new inventions.

 

Jefferson, no stranger to invention and a lover of ideas, was singularly qualified for the job. Secretary of the Treasury Alexander Hamilton and Attorney General Edmund Randolph assisted him. A patent act—“An Act to Promote the Progress of the Useful Arts”—was introduced to the Congress in 1789, while Jefferson was still in France. After Jefferson’s return and acceptance of the position of Secretary of State in 1790, the act was passed (Apr. 10) and Jefferson was appointed head of the newly formed Board of Arts. If two of the three members of the committee liked the idea, the patent was approved. Unlike most European offices, which tended to favor ideas only from the aristocracy, the board considered ideas from all citizens. “The United States consciously created patent and copyright institutions that were intended to function as the keystone of a democratic society,” writes B. Zorina Khan.

 

Despite democratizing invention, the committee granted relatively few patents: just three in the first year and 57 over the three-year period the law was in effect. In Jefferson and the Rights of Man, Dumas Malone writes that “Guiding Jefferson while patents came to him for review was the belief that patents should be given to particular machines, not to all possible applications or uses of them; that mere change in material or form gave no claim; and that exclusive rights of an invention must always be considered in terms of the invention’s social benefit.” Examples of accepted patents include improved sail-cloths for ships, distilling techniques for alcohol, and fire retardants, as well as an improved steamboat.

 

Jefferson was, without question, a natural choice to head the Board of Arts, as no other politician of his day patronized the sciences as he did. “To all who knew this man of limitless scientific curiosity and inventive mind,” continues Malone, “his official connection with the promotion of the useful arts must have seemed eminently appropriate.” Jefferson himself wrote to Monsieur l’Hommande (9 Aug. 1787), saying he “found the means of preserving flour more perfectly than has been done hitherto”: “Every discovery which multiplies the subsistence of man, must be a matter of joy to every friend to humanity.”

 

Yet in another sense, it is bizarre that Jefferson headed an office issuing patents, given that he was always against monopolies of any sort. He considered the monopolizing of a useful idea to be a crime, especially in a developing country in great need of useful ideas.

 

Yet praxis and duty to his country triumphed. In a letter to Madison (28 Aug. 1789) critiquing the Bill of Rights sent to him while he was still in France, he offered certain addenda to that bill. One addendum was a right to patent an idea, but for a limited period of time. “Monopolies may be allowed to persons for their own productions in literature & their own inventions in the arts, for a term not exceeding *** years but for no longer term & no other purpose.” He stated to Oliver Evans some two decades later (2 May 1807): “an inventor ought to be allowed a right to the benefit of his invention for some certain time. it is equally certain it ought not to be perpetual.” If perpetual, it would “embarrass society with monopolies for every utensil existing, & in all details of life.”

Jefferson’s cleanest expression of his views on patents came in a weighty letter to Isaac McPherson (13 Aug. 1813) about Oliver Evans’ proposed elevator patent, a string of buckets fixed on a leather strap for drawing up water. Is Evans’ machine his own, “his invention,” or do others have a right of usage? Jefferson was concerned with the machine itself, not its usage. If one person, for instance, received a patent for a knife that points pens, another could not receive a patent for the same knife for pointing pencils.

 

Jefferson begins by noting he has seen similar contraptions used by numerous others—“I have used this machine for sowing Benni seed also” and intends to have other bands of buckets in use for corn and wheat—and even notes that such an elevator was in use in Ancient Egypt. He sums, “There is nothing new in these elevators but being strung together on a strap of leather.” If Evans is to be credited with anything new, “it can only extend to the strap,” yet even the leather strap was used similarly by a certain Mr. Martin of Caroline County, Virginia. There is, Jefferson is clear, nothing original in Evans’ machine.

 

Jefferson, however, had more to say: many believe that “inventors have a natural and exclusive right to their inventions,” which is “inheritable to their heirs.” Yet it “would be singular to admit a natural and even an hereditary right to inventors.” 

 

Why? “Whatever, fixed or movable, belongs to all men equally and in common, is the property for the moment of him who occupies it.” Yet when he relinquishes occupation, he relinquishes ownership. It would be strange to think that a person acquiring ownership of some property thus has a natural right to it. That would mean that no one has a right to the property after he perishes and, even more absurdly, that no one had a right to it before he acquired it. “Stable ownership is the gift of social law,” and not of nature. The argument applies straightforwardly to ideas. “It would be curious then,” adds Jefferson, “if an idea, the fugitive fermentation of an individual brain, could, of natural right, be claimed in exclusive and stable property.” The argument for patenting ideas by appealing to nature is untenable.

 

Jefferson still has more to say. The analogy has its flaws. Ideas are singular. If there is anything that nature has made “less susceptible than all others of exclusive property, it is the action of the thinking power called an idea.” Each person possesses exclusively any idea so long as it is unshared. Once shared, it belongs to everyone.

 

Moreover, an idea shared is fully possessed by all who entertain it. “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.” The same cannot be said for property shared. It is that power of an idea, to be shared without lessening its density, which makes it a special gift of nature for “the moral and mutual instruction of man.” He sums, “Inventions then cannot, in nature, be a subject of property.”

 

If there is no natural right for patenting an idea, it is now a matter of convention—that is, appeal to law. What sort of law ought there to be concerning patents? To Jefferson, the answer concerned the social benefits of patents.

 

England was the first country to patent ideas, and America copied it—an indication of some benefit to the patenting of ideas. Still, most nations thought that such monopolizing of ideas engenders “more embarrassment than advantage to society.” Moreover, Jefferson believed nations that did not monopolize inventions “are as fruitful as England in new and useful devices.” Patenting an idea did not seem to extend its benefits.

 

Though Jefferson expressly settles on no conclusion apropos the utility of patenting ideas, his appeal to nature indicates disrelish of the notion of social benefits of patents. Ideas are too powerful to be imprisoned by patents, and to patent a useful invention, for instance, is to prevent its full use for the moral and mutual instruction of man. There is no better evidence of that than Jefferson’s own refusal to patent his moldboard plow, for which he was awarded a gold medal by the French Society of Agriculture.

 

Jefferson’s suggestion that ideas, because of their potential for social benefit, not be chained to patents but be freely shared, goes against the etymological legacy that ideas are personal. The Greek idea means “form,” “appearance,” “nature,” or “idea.” Its adjectival form, idios (m.) means “one’s own,” “personal,” or “peculiar.”

 

“The action of the thinking power called an idea” is a powerful, even intoxicating Jeffersonian sentiment. Ideas, thus, are potent and singular possessions. One, moved by an idea, is no less moved when he shares it with another, hence the warrant for sharing.

 

Jefferson’s own ideas on ideas, I maintain, are as pungent now as they were when he articulated them, and that pungency can nowise be diminished today by sharing them. They also signal much about the man who articulated them.

 

Just an idea!

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172970
How The Media Covered Woodstock – and How Woodstock Changed The Media

via Wikipedia

 

Initial media coverage of the Woodstock concert portrayed the event as a disaster in the making. However, a young generation of reporters saw the event differently. Harlan Lebo, author of 100 Days: How Four Events in 1969 Shaped America (Amazon, Barnes & Noble), describes how coverage of Woodstock set a tone for reappraising American perceptions of young people in the 1960s. This is the second in a two-part series on Woodstock. Click here to read the first.

 

As the last stragglers from Woodstock drifted back to their normal routines on the third Monday in August 1969, reporters and editors in newsrooms across America struggled to characterize the era-changing events that had occurred over the weekend.

At the offices of The New York Times, the mood was highly charged, with the staff and editors divided by the same types of conflicts Americans at large were experiencing in the wake of cultural change.

 

Over the weekend, most of America’s newspapers – drawing on articles written by national wire services – had covered Woodstock by focusing on the disaster angle, with sprinklings of drug overdoses and mud thrown in. Typical of the headlines was the banner across the top of the New York Daily News on Saturday morning: “Traffic Uptight at Hippiefest.” A caption for a photo that showed a choked road began “Go-Go Is a No-No.” 

 

Perhaps the most flagrant of these stories was the report distributed across the country by United Press International (UPI), which covered the story entirely in a negative light by focusing on the “mud, sickness, and thousands of drug overdoses.” UPI quoted “one far-out music lover” as saying, “I don’t know, man – this thing is just one bad trip.”

 

Reporters from the Times who were at Woodstock knew better. Writer Barney Collier fought by phone with the news desk to cover the event as an evolving cultural milestone.

 

“To me, it looked like an amazingly well-behaved bunch of folks,” Collier recalled in 2009.  “Every major Times editor up to and including executive editor James Reston insisted that the tenor of the story must be a social catastrophe in the making. 

 

“I had to resort to refusing to write the story unless it reflected to a great extent my on-the-scene conviction that ‘peace’ and ‘love’ was the actual emphasis, not the preconceived opinions of Manhattan-bound editors. 

 

“This caused a big furor at The Times in New York,” Collier recalled. “And this eventually went up to Reston. He said, ‘If that’s the way Barney sees it, that’s the way we’ll write it.’” Collier’s article ran as the lead story on page one of the Sunday Times, and was picked up by other papers across the country.

 

“Despite massive traffic jams, drenching rain storms, and shortages of food, water, and medical facilities, young people swarmed over this rural area today for the Woodstock Music and Art Fair,” Collier wrote.  “The young people came in droves, camping in the woods, romping in the mud, talking, smoking, and listening to the music.”

 

To Collier, “that was probably the most important thing I did – to get it to be seen as it was, rather than the preconceptions of a lot of editors back on the desks. After the first day’s Times story appeared on page one, the event was widely recognized for the amazing and beautiful accident it was.”

 

Conflict on the editorial page

 

But Collier and other New York Times writers who had covered the event had no control over the editorials that appeared in the paper.

 

That Monday, in a staff editorial titled, “Nightmare in the Catskills,” the Times editors – none of whom had first-hand knowledge of what had occurred in Bethel – blistered the Woodstock Festival.

 

“The dreams of marijuana and rock music that drew 300,000 fans and hippies to the Catskills had little more sanity than the impulses that drive the lemmings to march to their deaths in the sea,” the Times’ editors pontificated. “What kind of culture is it that can produce so colossal a mess?” 

 

Reporters at the Times rebelled against the editorial, some reportedly threatening to quit if the paper did not revise its view. And incredibly, the Times did something rare for the paper: it recanted – reluctantly.

 

On Tuesday, August 19, the editors ran a second editorial – this one entitled “Morning After at Bethel,” which toned down the rhetoric from the day before and offered a somewhat more thoughtful appraisal.

 

“Now that Bethel [New York] has shrunk back to the dimensions of a Catskill village,” the editorial began, “and most of the 300,000 young people who made it a ‘scene’ have returned to their homes, the rock festival begins to take on the quality of a social phenomenon.”

 

The attendees endured the discomforts, the Times wrote, “to enjoy their own society, free to exult in a life style that is its own declaration of independence.”

 

TV network news enlightens the nation

 

While newspapers across the country continued to focus on the concert-as-disaster-area and “hippiefest” in their coverage during the weekend, network television news programs were quick to pick up on the message of Woodstock. At that time, the media with the broadest reach – some 20 million households nightly – were the half-hour evening news shows aired by ABC, CBS, and NBC.  

 

Each network had a crew at the concert site on August 18 to wrap up the coverage. While the reporting touched on the logistical problems over the weekend, the correspondents – each with experience covering youth and the issues of the ’60s – focused most of their attention on the message of Woodstock.

 

“This weekend says a lot about the youth of America,” said Lem Tucker from NBC, standing near the stage surrounded by a sea of refuse. “More than 350,000 people came looking for peace and music. Many said they learned a lot about themselves, and learned a lot about getting along together, and priorities. And for most, that alone makes it all worthwhile.” 

 

On the CBS Evening News with Walter Cronkite, reporter John Laurence delivered a commentary that looked past the drugs and the traffic.

 

“What happened this weekend may have been more than an uncontrolled outpouring of hip young people,” said Laurence. “What happened was that hundreds of thousands of kids invaded a rural resort area totally unprepared to accommodate them, among adults who reject their youthful style of life. And that somehow, by nature of old-fashioned kindness and caring, both groups came together, in harmony and good humor, and all of them learned from the experience.”

 

Laurence described Woodstock as “a revelation in human understanding.” The local – and older – eyewitnesses, Laurence explained, “had not been aware, as the kids are, of the gentle nature of kids to one another. These long-haired, mostly-white kids in their blue jeans and sandals, were no wide-eyed anarchists looking for trouble.

 

“So what was learned was not that hundreds of thousands of people can paralyze an area…but that in an emergency at least, people of all ages are capable of compassion,” concluded Laurence. “And while such a spectacle may never happen again, it has recorded the growing proportions of this youthful culture in the mind of adult America.”

 

The realities of change

 

How would media coverage of youth culture change after Woodstock?   

 

To Ken Paulson – in 1969 a fifteen-year-old fledgling music writer who four decades later would become editor of USA Today – the reporting on Woodstock and the internal strife it produced at The New York Times were vivid examples of the disconnection at many media outlets that would continue to fester in their coverage of youth culture in America.

 

“The news media didn't know how to cover a cultural event like Woodstock, and they had no appreciation of the art involved,” Paulson said. “This was no surprise. Newspapers across the country were staffed with people who grew up on Elvis, and it is a giant leap from Elvis to The Who.”

 

This lack of appreciation says much about thoughtful discussions of news coverage in the 1960s, but speaks even louder about the financial needs that were emerging for media in that era. During the 1960s, news organizations had to court the emerging baby boomers – a generation strong in numbers and in buying power – but not without significant challenges.

 

“Media slowly began to realize that they needed younger readers to buy their publications and buy from their advertisers,” said Paulson. “But most aspects of counterculture – such as alternative lifestyles and social protest – just didn't lend themselves to advertising revenue or support for general-interest publications.”

 

One method to reach that growing audience was through coverage of music, which created opportunities for a new, more diverse generation of reporters.  

 

“Music coverage was the most visible form reflecting the culture at the time,” Paulson said.  “Woodstock inspired a reexamination among the nation's news media about how they cover these events – and without appearing to be totally out of touch.”

 

“As a result, in the early seventies, major publications hired young people who could write about young people's music, as well as film.” 

 

This lack of awareness of the contemporary scene among mainstream media outlets continues to shape coverage.

 

“It's not unusual for news organizations to be clueless about emerging cultural developments,” Paulson said, “especially when it involves the art of young people. I think the gap has intensified now – it’s just taken different forms today. For instance, it’s very difficult for mainstream news organizations to understand rap, or to truly appreciate hip hop.”

 

So although that gap has lingered in the fifty years since the summer of 1969, Woodstock was nevertheless a milestone – not just in coverage of the music scene, but also in broader media exploration of social and economic issues that affect younger audiences. After the Woodstock weekend, rock music and other topics concerning young people in the American experience were no longer oddities. It was clear that the future had arrived, when for three days, 400,000 people were part of an instant city that defined its own culture.

 

“Suddenly,” said Collier, “we began to realize that we were coming into a different world.”

 

At ABC News the night Woodstock ended, veteran journalist Howard K. Smith agreed.

 

“Over the last few days we’ve had a glimpse of our future,” Smith told his viewers, “and this is what it looked like.”

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172951
Seeing Our Environment

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

Two sets of opinions about our environment, the earth which makes our lives possible, are at war in our country. The scientific set is alarmed about the mounting effects of human activity on air, water, animal and plant life, and climate. As population and consumption grow, and industrial methods of doing everything proliferate, the earth has become unable to absorb the multiplying impact on its interlocking natural systems. The pollution of our water supplies, the increasing ferocity of storms, the warming of climate, the rising level of oceans, and the dying of species are already negatively affecting people around the globe. Projections of these trends into the near future predict severe problems for billions of people.

 

The ignorant set of opinions dismisses all evidence with stupid arguments. “The earth’s climate was much warmer long ago.” Yes, it was, before agriculture, before human life emerged, before millions of people lived on the edge of the oceans. “There is no scientific consensus.” Just a lie about the small number of isolated cranks who put forward specious contentions based on made-up evidence. “Computer models are unreliable.” You don’t need a computer to read a thermometer or see how the projections from 10 years ago have already come true. “The end times are coming, so don’t worry about climate change.” Religious dogma trumps science again.

 

Ignorant and stupid may be understatements. The political forces which have argued against doing anything to reduce our impact on the environment, and which now actively reverse previous efforts to protect the earth, deliberately lie about what has already happened. Republicans in Congress and the White House know that temperatures are rising. But they prioritize their own ideological short-term gains over the long-term prospects for our children and grandchildren. Their rich donors believe that their money can protect them against the disasters that will eventually befall the less wealthy, who have always borne the brunt of human-caused environmental disasters. The ignorant, stupid, dishonest set of opinions has been backed by billions of ideological dollars for decades.

 

Against the torrent of influence-buying, the willingness of ideologues like the Heartland Institute to twist the truth, and the self-interest of venal politicians, the honesty and empathy of someone like Greta Thunberg, the Swedish teenager who just sailed across the Atlantic to urge Americans toward bold action against climate change, stands little chance of success. It appears that no amount of evidence, neither scientific articles nor photographs of melting glaciers, can affect the deliberately ignorant.

 

The so-called age gap in climate consciousness might appear to be a hopeful sign for the future. A Gallup poll last year found that 70% of 18- to 34-year olds worry about global warming, but only 63% of 35- to 54-year olds, and 55% of people 55 and older. Nearly half of older Americans put themselves into the ignorant camp, not believing that most scientists agree that global warming is occurring, that global warming is caused by human activities, and that the effects of global warming have already begun. Maybe the key is that only 29% of older Americans think global warming will pose a serious threat in their lifetime. As Louis XV is supposed to have said, “Après moi le déluge.”

 

Politics has an even stronger effect on beliefs about climate than age. The most ignorant Americans are older Republicans, less than half of whom believe global warming is occurring, and less than one-third of whom believe that most scientists agree about global warming.

 

But young people are also not that worried. Only half believe that global warming will pose a serious threat in their lifetimes, which extend well past 2050, the nightmare date by which climate across the globe will be unrecognizable.

 

Opposition to efforts to ameliorate climate change comes not only from conservative politicians. A couple in Missouri who wanted to install solar panels on their roof had to fight for years with local politicians and neighbors who didn’t like the look. In some classic cases of “not in my backyard”, people in the most liberal places refuse to accept minor lifestyle changes. An attempt to construct a wind farm off the shores of Nantucket Island near Cape Cod resulted in years of controversy, litigation, documentaries, books, and polls, and was eventually shelved. The most significant argument against the tall turbines 15 miles offshore from Nantucket was that they would spoil the view. A 2013 law in Massachusetts that would have indexed the tax on gasoline to inflation was repealed by popular vote the next year.

 

Although my family believes I am a Luddite because of my reluctance to embrace cell phones, I blame their use for some of our environmental problems. I am constantly amazed when I walk around on a sunny day and most of the people I see are staring fixedly down at a tiny screen. A flock of ducks flies overhead, trees wave in the breeze, clouds march across the sky, but they earn not even a glance. The younger generations are turning away from the natural world in favor of virtual unreality. They may be watching videos of Greenland’s ice pack melting, but they miss what is happening to their own environment.

 

There is much discussion of the physical dangers of using smart phones while walking. I am more concerned about the intellectual danger of ignoring the physical environment during the short periods when most people are outside.

 

Some scientists are worried about the “human costs of alienation from the natural world”, which has been labeled “nature deficit disorder”. Biologists identify “plant blindness” as one symptom, “the inability to see or notice the plants in one’s own environment”. As our society has moved off the land into cities and suburbs, we have distanced ourselves from the natural world. Now the lure of rapidly changing images and instant communication distracts too many people from the slow degradation of the earth on which we stand.

 

The pace of environmental change is much faster than ever before, but slow in terms of human life span. It is difficult to convince anyone to accept something now that they don’t like in order to prevent a catastrophe decades away.

 

Looking down at our phones, we won’t see the cliff ahead.

 

Steve Hochstadt

Springbrook WI

September 3, 2019

 

Thanks to my cousins, Roger Tobin and Saul Tobin.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/blog/154243
Roundup Top 10!  

Lessons from the UN peacekeeping mission in Rwanda, 25 years after the genocide it failed to stop

by Samantha Lakin

Despite the broader mission’s many well-documented failings, peacekeepers took risks to save lives, going beyond official orders to protect innocent Rwandans.

 

As Hurricane Dorian Threatens Florida, Gov. DeSantis & Trump—Who Haven't Curbed CO2 Emissions—Should Resign

by Juan Cole

If you don’t recognize the cause of a problem, you can’t fix the problem.

 

 

To rescue democracy, we must revive the reforms of the Progressive Era

by Ganesh Sitaraman

The playbook for taming industrial capitalism already exists. It’s the essential starting point for reform today.

 

 

Police and punitive policies make schools less safe, especially for minority students

by Kathryn Schumaker

The increase in school security is directly linked to the rise of student activism that started to transform schools 50 years ago.

 

 

The Original Evil Corporation

by William Dalrymple

The East India Company, a trading firm with its own army, was masterful at manipulating governments for its own profit. It’s the prototype for today’s multinationals.

 

 

Not having kids is nothing new. What centuries of history tell us about childlessness today.

by Rachel Chrastil

The long history of childlessness can help us to debunk myths, tell our stories and expand the range of our possibilities.

 

 

When Henrietta Wood Won Reparations in 1878

by W. Caleb McDaniel

She sued the man who had kidnapped her into slavery for damages and lost wages, offering lessons for today’s debate.

 

 

Could footnotes be the key to winning the disinformation wars?

by Karin Wulf

More than ever, we need what this tool provides: accountability and transparency.

 

 

 

The Price of Self-Delusion

by Ronald Radosh

Paul Robeson, the towering figure of American arts, athletics, and civil rights activism, was also an unapologetic Stalinist. Failing to acknowledge this checkered legacy ultimately does a disservice to the goals he fought for.

 
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172962
Thou Shalt Not Ration Justice

 

With the passing of Norman Lefstein this week in Indiana where he had served as Dean of the IU McKinney School of Law, the country has lost a champion for indigent defense reform whose initiatives and advocacy have impacted the national conversation for generations. 

 

Norm often quoted the late federal judge Learned Hand: “If we are to keep democracy, there must be a commandment: Thou Shalt Not Ration Justice.” He argued that the promise of justice is hollow so long as public defense remains an unfunded mandate. The uneven dispensing of justice from state to state, county to county, and defendant to defendant has too often been the result.

 

Norm was no idle critic. Norm took his first case as a public defender in 1963, only six months after the Supreme Court’s landmark decision in Gideon v. Wainwright. In the early 1970s, Norm headed the Public Defender Service of the District of Columbia, an agency which quickly became — and remains — a national model for delivering public defender services. In the words of his colleague, Barbara Babcock, “many of Norm’s ideas were novel for the time; today they are the hallmarks of excellence in a defender program.”

 

By the 1980s, Norm was an indispensable member of any state or national conversation about public defense. Until his dying day, Norm remained a fixture of the American Bar Association’s Standing Committee on Legal Aid and Indigent Defendants. There, he co-authored the Association’s touchstone pronouncements on indigent defense work, including its Standards for Providing Defense Services and the Defense Function, and its Ten Principles of a Public Defense Delivery System.

 

Norm pioneered some of the first attempts to collect comprehensive data on indigent defense around the nation. In the wake of the 1963 Gideon decision, Norm knew that states were underfunding public defense – he just couldn’t prove it. Norm’s 1982 study showed that defender systems in some states were funded at less than 5% of the D.C. public defender’s per-capita budget.

 

Throughout his life, Norm’s message on indigent defense policy was consistent: “Current financing is woefully insufficient,” and America’s legal system must do better. Reflecting on the 50th anniversary of Gideon, Norm quoted John Lennon’s Imagine and described himself as a “dreamer.” But Norm was remarkable precisely because he wasn’t just a dreamer. Instead he was a “true believer.” From the courtroom to the classroom, Norm understood the importance of proof both inside and outside the courtroom. He led public defense – and public defenders – into the brave new world of evidence-based reform.

 

Norm’s work is unfinished. But every day, as they stand beside their clients in court, tens of thousands of lawyers carry on his legacy. At the Deason Center, located in the Dedman School of Law at Southern Methodist University in Dallas — as public defense researchers (and former public defenders) – we live and work in Norm’s long shadow. It is quite difficult to imagine what public defense will look like without him. But to honor him, we must try.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172943
Can testimony against Epstein still have an impact without him present?

Jennifer Araoz speaks on the courthouse steps outside the Jeffrey Epstein hearing

 

 

Jeffrey Epstein’s suicide on 10 August 2019 ended the possibility of his criminal trial for the extensive sex trafficking of young girls. It also robbed his many accusers of the chance to confront him publicly. On 27 August, however, the victims were allowed to speak in court thanks to Judge Richard Berman’s decision to allow their public testimony. Though welcome, the decision was also cold comfort as Epstein’s chair at the defense table sat empty. “The fact that I will never have a chance to face my predator in court,” said Jennifer Araoz, “eats away at my soul.” Others were more hopeful. “I refuse to let this man win in death,” said Chauntae Davies. “I have found my voice now.”

 

But will the testimony of Epstein’s victims have the same lasting impact that it might have had with the defendant present? 

 

The 1987 trial of Klaus Barbie offers historical context to this question. Barbie was the Gestapo chief in occupied Lyon from 1942 to 1944. Like Epstein, Barbie had cheated justice for decades owing to his escape to Bolivia in 1951. On the third day of his belated trial, Barbie exited the courtroom, comparing the proceedings to a lynching. The court, fearing the image of an old man in chains, refused to compel the former Gestapo chief’s presence, and so he remained in his cell for virtually all witness testimony.

 

International reporters fled Lyon quickly, “like a flock of wild ducks,” according to one attorney. As one commentator put it, the press had come to see “the monster, the monster’s face, the monster’s eyes,” without which there was no reason to stay. French-Jewish intellectual Alain Finkielkraut lamented that “the concern with the sensational and the obsession with the scoop” had taken “precedence over the demands of the truth.” Barbie’s victims were also dismayed. “I am revolted, outraged,” said 86-year-old former resister Lise Lesèvre, whom Barbie had tortured horribly. “I would like to have accused him face to face. I would like for him to have heard the truth.” Anne-Marie Lenoir, another former resister, commented: “He does not have the courage to look me in the eye.”

 

Nonetheless, the witnesses testified before an empty defendant’s box. Many were women whose torture at the hands of Barbie and his henchmen in Lyon Gestapo headquarters four decades earlier generally contained sexual elements. Thoughtful veteran journalists who remained, such as Le Monde’s Jean-Marc Théolleyre, noted that the onus was now on the public. Without easily digestible images of the accused, they had to make the effort to hear. Barbie’s absence, he wrote, “forces us to listen [to the witnesses] more closely without worrying about how [Barbie] would have heard them, looked at them, or the answers he might have given…. Is that not the main reason for the trial?” French intellectual Bernard-Henri Lévy had a similar view. The media, he said, reveled in “images and counter-images… effects and counter-effects.” Now, the trial made new demands on the public because iconic representations of the defendant smirking as his victims testified would not be available. But the absence of the accused, Lévy said, should remind everyone that the witnesses “are the primary parties in this business…. It is they who have my interest. It is they who move me. It is their words … that I want to hear first of all.”

 

And in the end, it was the witnesses who saved the trial. They not only recounted the horrors they experienced. They also somehow conjured the defendant’s likeness, not as the broken old man from 1987, but as the ruthless Gestapo chief of 1944. Lise Lesèvre had been waterboarded repeatedly after being forcibly undressed, and she was savagely beaten over the course of nineteen days before being deported to Ravensbrück. Permanently crippled, moving painfully through the courtroom with a cane, she refused to sit in the witness box when offered a chair. Tightly gripping the rail, she stood, she straightened her back, she spoke, and she demanded the attention of all who dared to listen. She described the defendant, from his “striking eyes” to his “beautiful shining boots” to the “terrifying dread” that his image still caused her. She was a mix, said one reporter, “of fragility and strength.” And on hearing the testimony of Lesèvre and other victims, said one Paris daily, “we listened to them … we cried with them … we understood suddenly why … it was necessary that this trial take place.”

 

Many have argued that justice in the Epstein case is diminished by the defendant’s suicide, and indeed they are correct. The confrontation between the accuser and the accused lies at the heart of the criminal justice system. But justice also exists to reach the truth. In so doing it offers the victims their moment to speak, while also demanding, despite our current reliance on images and sound bites, that the victims be heard.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172907
Searching for El Dorado: The Colonial Fantasy of the Amazon

 

When Walter Raleigh wrote back to his London financiers about his 1595 colonial incursions into the area around South America’s Orinoco River, he described the region and its rainforest as an earthly paradise. Writing of the northern Amazon region in The Discovery of Guiana, Raleigh says that “we passed the most beautiful country that ever mine eyes beheld.” The courtier, explorer, poet, and exploiter gave litanies of the continent’s flora and fauna, while also imbuing his subject with the quality of the fantastic. In Raleigh’s estimation (or at least what he presented to his creditors), this land wasn’t merely lush, it was unsullied; it wasn’t simply fecund, it was virginal; it wasn’t just pristine, it was paradisiac. For Raleigh, as for the Spaniards who preceded him, the language best used to describe the American tropics wasn’t one of just geography, but rather something which required a religious rhetoric – the vocabulary of Eden. 

 

Even the foliage seemed endowed with transcendent meaning, for the “grass short and green, and in divers parts groves of trees by themselves, [were] as if they had been by all the art and labour in the world so made of purpose.” But Raleigh wasn’t looking for divinity, he was looking for gold, specifically in the promise of the mythical city of El Dorado. The legend was perhaps loosely based on the actual Muisca people of Colombia who were adept in metallurgy and gold-working, but more importantly Raleigh’s obsession with El Dorado reflects much of how Europeans have understood South America in general and the Amazon in particular – as a place simultaneously Edenic and ripe for exploitation, a dream based on a half-truth and transformed into a nightmare by colonialism. The horrible irony of calling an earthly place a paradise is that we can only understand such Edens by recourse to paradise lost. 

 

The Amazon has more biodiversity in plant and animal life than any other place on Earth; its dense forests regulate the planet’s climate, and it represents a storehouse of the world’s biological legacy that goes back 55 million years. So vital is the Amazon’s role in regulating Earth’s ecology that if it were to disappear, the implications would be literally apocalyptic. Few other locations deserve to be called “paradise” as much as the Amazon, and yet the cruel paradox is that such idealization is partly responsible for the refusal to be stewards of the forest. Instead, the Amazon is treated as a storehouse for human desire and fulfillment, whether in the form of Raleigh’s gold or the acres of forest being cleared today for cattle ranchers. There is a direct ideological thread that runs from the early explorers of the Caribbean and South America—men like Raleigh and Francisco de Orellana, the conquistador who first traversed the length of the Amazon—to the current environmental catastrophe unfolding in the Brazilian rainforest, as fire burns its way across a land which was once called paradise. 

 

The more than 74,000 wildfires this year, enabled by the rapacious laissez-faire policies of Brazilian president Jair Bolsonaro, threaten to immolate the ecosystem on which so much of the Earth’s delicate maintenance depends, while toxic smoke turns day into night in São Paulo, the hemisphere’s largest city. In The Atlantic, Robinson Meyer explains that “During his campaign, [Bolsonaro] promised to weaken the Amazon’s environmental protections – which have been effective at reducing deforestation for the past two decades – and open up the rainforest to economic development.” As a result, since January over 1,330 square miles of rainforest have been lost to fires purposely ignited to clear land for ranchers, an 80% increase over last year. As Eliana Brum reports in The Guardian, that is equivalent to losing an area the size of Manhattan every day, or of the London metropolitan area every three weeks. 

 

These fires were intentionally set and are indeed the fulfillment of Bolsonaro’s campaign promises (that they followed recent legal victories for indigenous peoples regarding the use of their own land is no coincidence either). As Brum notes, “Bolsonarism’s number one power project is to turn public lands that serve everyone – because they guarantee the preservation of natural biomes, the life of native peoples and regulate the climate – into private lands that profit a few.” Such burning is not a repudiation of the ethos of a Raleigh, but rather the final fulfillment of that exploitative colonial nightmare. Bolsonaro may have been speaking with his characteristic mocking vitriol when he said, “I used to be called Captain Chainsaw. Now I am Nero, setting the Amazon aflame,” but there is still a malignant truth in what he’s saying. Brum writes that “Bolsonaro is not just a threat to the Amazon. He is a threat to the planet, precisely because he is a threat to the Amazon.” Given the rainforest’s central role in regulating the planet’s oxygen, and thus its climate, Bolsonaro’s policies in the Amazon and his inaction in the face of the fires could have catastrophic implications, especially at this ecological hinge moment in history. As superficially different as fire may be from genocide and ethnic cleansing, the collapse and deaths that could result over the long term from Bolsonaro’s Amazon policy could ultimately rank him alongside history’s worst monsters. 

 

Bolsonaro is a dangerous part of the same trend towards authoritarianism and illiberalism exemplified by politicians from Vladimir Putin and Matteo Salvini to Viktor Orban and Donald Trump. Such authoritarianism is in some sense baked into the worst parts of the Western project, and Bolsonaro’s privatization schemes for the Amazon, which Meyer reminds us is a “store of carbon… fundamental to the survival of every person,” are a continuation of the colonialism that began half a millennium ago. Brum writes that for “500 years, this has been a place of ruins. First with the European invasion, which brought a particularly destructive form of civilization… More recently, with the clearance of vast swaths of forest and all life within it.” It’s worth noting that one ideological current connecting colonialism five centuries ago with unfettered capitalism today is the deployment of a particularly gendered language. When Raleigh espied Guiana, he spoke of it not just as paradise, but specifically as a woman. Land which had been home to indigenous people for millennia, and to a profound diversity of life for millions of years, was now understood as a “country that hath yet her maidenhead, never sacked, turned, nor wrought; the face of the earth hath not been torn… never conquered or possessed.”   

 

Literary scholar Roland Greene writes in his study Unrequited Conquests: Love and Empire in the Colonial Americas that early modern explorers used the language of sexuality to justify the commodification of their “discoveries.” Greene notes that during the early modern period, the discourse about the Americas exemplified “the semantic and moral disruptions that take place in love poetry around desired female objects.” There is a connection between Raleigh’s rhetoric of paradise and his vocabulary of gendered conquest – both types of language posit the land in idealized terms, and both envision a privilege on the part of the colonist to exploit that land. Most importantly, such language barely conceals its rage and violence; Raleigh’s explicit conjunction of a virginal land that “hath yet her maidenhead” and which has been “never conquered or possessed” is a clear call to do just those things. Again, the line between Raleigh and a Bolsonaro who once told an elected congresswoman that “I would never rape you, because you don’t deserve it” is horrifically clear, as is the reality that for all of their differences, the new crop of illiberal authoritarians blighting global politics are united in rank misogyny. 

 

What’s to be done in the immediate present is to put out the Amazon fires by any means necessary. The paltry twenty million dollars offered by the G7, which it’s not even clear Bolsonaro will accept, is at least a start. It’s unclear whether the Brazilian military, tasked by Bolsonaro with putting out the conflagration after international pressure was put on his government, has accomplished much. More locally and immediately, for those readers who wish to contribute, the Earth Alliance has an Amazon Forest Fund where 100% of the proceeds go to fighting the fire. 

 

But even after the flames have been quenched, there is the issue of abolishing the very discourses that have brought us to this moment: the rhetoric of exploitation, ownership, and rape that has facilitated the Amazon’s burning. Raleigh may have thought of the forest as virginal and dismissed its previous inhabitants as merely part of the natural landscape, but the reality is that indigenous peoples lived in oftentimes quite sophisticated civilizations that had a complex relationship to the Amazon based on mutual stewardship. Journalist Charles C. Mann writes in 1491: New Revelations of the Americas Before Columbus that “some anthropologists say [that] the great Amazon forest is also a cultural artifact – that is an artificial artifact.” But not all artificial artifacts are the same, for in the forest currently being destroyed lie the ruins of peoples with different values, who were able to live consistently within that nature. As the Amazon burns, what lessons are being lost with it? 

]]>
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172908 https://historynewsnetwork.org/article/172908 0
Why Conservatives Are So Upset About the 1619 Project

 

The New York Times launched its 1619 Project on August 14th of this year, and within days the project was a centerpiece of editorial debate across the political spectrum. Liberal media commentators like J. Bryan Charles of Vox noted that the “series has largely earned praise from academics, journalists, and politicians alike.” Conversely, conservative media figures such as National Review’s Rich Lowry referred to the project’s editors as “Garrisonians… who thought the Constitution was a pact with the devil, a view that I think was incorrect and a practical dead end.” The 1619 Project reignited conservative media’s interest in the national ‘history wars.’

 

Jim Geraghty’s National Review article “What the 1619 Project Leaves Out” provides one of the best examples of right-leaning media’s conception of America’s past. Geraghty argues that the “1619 Project’s effort to ‘reframe American history’ requires cropping out some significant figures in African-American history,” such as war heroes and national leaders. This response illustrates conservative media’s conception of the past as determined by the actions of “great men,” defined by heroic and noble acts of patriotism from its citizens, and generally memorable for its unifying moments, not its shameful ones.

 

There is nothing inherently incorrect about the Great Men approach to American history, and the idea has long been essential to the right’s discourse about history. Historian Andrew Hartman explained this in The War for the Soul of America, noting that for the right, “there were certain eternal truths, such that America was a beacon of freedom embodied in the great men of the American past.” The stories of great people who exemplified American exceptionalism provide conservative media with a more digestible and glorious version of the country’s history. Geraghty makes this idea a central part of his article: “the number of prominent figures who never even get mentioned or who get only the most cursory treatment is pretty surprising.” Without these figures, conservative powerbrokers lose the icons necessary to uphold a cleaner and more wholesome national history. 

 

High-profile conservative media’s emphasis on the great men and women of history dovetails nicely with a view that glorious patriotism is the defining feature of the nation’s legacy. The stories of American patriots, both black and white, undeniably deserve praise from us all. But conservative commentators like Geraghty promote these stories as the central narrative of America’s history: “African-American heroism on the battlefield doesn’t really fit the narrative that the 1619 Project is trying to tell.” 

 

To a certain extent Geraghty’s take would be valid if the 1619 Project were about the broader African-American experience. However, emphasizing heroism on the battlefield obscures the defining story of the 1619 Project: slavery and its enormous national legacy. In the thick of the 1990s culture wars, Michael Sherry explained this practice in the book History Wars, arguing that for the right, consistent acts of righteous patriotism “defeat perceived foes at home and advance its vision of American cultural purity.” Geraghty’s writing, like that of conservative thinkers before him, weaves unadulterated patriotism into the center of the country’s historical quilt – a move that conceals the broader ups and downs of the past.

 

Conservative media also extensively focuses on the unifying parts of history at the expense of divisive and shameful moments in the past. Geraghty floated this idea when he observed that the “horrors of the Tuskegee Study of Untreated Syphilis in the Negro Male are discussed, but the Tuskegee Airmen are never mentioned.” The story of the Tuskegee Study disrupts the theme of American greatness and unity. In the best-selling A Patriot’s History of the United States, Larry Schweikart argued that “over the last 40 years, people have told the story of this country’s past dishonestly. They have over-exaggerated racism and sexism.” Conservative powerbrokers prefer to showcase national triumphs because they promote a country that was consistently ready to change for the better with little resistance from its citizens. For the right, the present becomes the culmination of remarkable actions throughout the country’s past. Consequently, their histories must downplay the influence of the country’s mistakes, and controversial events are frequently relegated to the footnotes and margins of American history. 

 

Geraghty and National Review generally responded to the 1619 Project in a more measured manner than most of the conservative mediasphere. The article conceded: “would the country as a whole be better off with a greater understanding of slavery and its legacy in American history? Absolutely.” To Geraghty’s credit, he recognized that the 1619 Project was one well worth undertaking – an opinion not necessarily echoed throughout his ideological circle. Newt Gingrich, for example, took to Twitter and postulated that “The NY Times 1619 Project should make its slogan ‘All the Propaganda we want to brainwash you with.’ it is a repudiation of the original NY Times motto.” The response from Gingrich, among others, lacked the sort of nuance required to discuss history in an even-handed manner. Geraghty and National Review thus deserve praise for avoiding these sorts of hyperbolic responses.

 

Despite this, Geraghty’s article concludes by saying that the “1619 Project argues, with considerable justification, that most of us [have] been seeing only one part of the portrait of the founding, formation, and growth of our country . . . and then ‘reframes’ the portrait to leave out some of the most consequential and under-discussed African Americans in our history.” Geraghty thus closes by centering conservative media’s view of the past – one of Great Men, of exceptional patriotism throughout history, and of a nation whose unity almost always outweighs its discords. Conservative media far too often sees the past through a narrative of nearly inevitable progress – but the scars of the country’s failures cannot and should not be downplayed.  

]]>
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172905 https://historynewsnetwork.org/article/172905 0
Restricting Automatic Citizenship to Children Born to Americans Serving Abroad Is Only the Latest Effort by Trump to Dismantle Birthright Citizenship

 

On Wednesday, August 28, the Trump administration announced a new policy limiting the extension of citizenship to children born abroad to military service members and government employees. The announcement was immediately met with a massive wave of condemnation and confusion. Due to its vague wording, many initially believed the policy would mean every child born to this group of citizens abroad would no longer automatically be a citizen. U.S. Citizenship and Immigration Services quickly tried to control the damage and assure people that most children would be unaffected by the changes. Nonetheless, the policy is an attempt to curtail the number of foreign-born children who can have access to U.S. citizenship. 

 

This move comes just one week after Donald Trump renewed his attacks on birthright citizenship and once again claimed that he could end the constitutionally protected status by simply issuing an executive order. The current policy move is only the latest attempt by Trump and conservatives to enact a wholesale constitutional revolution by dismantling birthright citizenship. 

 

Citizenship has always been a fairly amorphous status. Like a liquid, it can take the shape of its surroundings at a moment’s notice. Over the course of American history, citizenship has expanded to include and contracted to exclude. To this day, citizenship still lacks a formal legal definition and is only mentioned a handful of times in the Constitution. It was originally imagined to have very few actual rights associated with it. Nonetheless, throughout this country’s history, citizenship has been seen as a beacon on a hill, or a North Star, to ensure rights and protections. 

 

Such was the case for African-Americans for nearly a century between the adoption of the Constitution in 1789 and the ratification of the Fourteenth Amendment in 1868. Scholars such as Martha S. Jones, Elizabeth Stordeur Pryor, Andrew Diemer, and a host of others have shown how contested citizenship was throughout the first half of the 19th century. When the Fourteenth Amendment was ratified following the end of the Civil War and the emancipation of this country’s millions of enslaved people, citizenship was fundamentally changed. The first section of the amendment reads, in part: “All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.” The point of this section was to ensure, in the face of a long history of racism and chattel slavery in America, that Black Americans could not be denied formal inclusion within the United States. 

 

However, just as quickly as citizenship was finally extended within this country, the promises it had once held out to Black Americans were withheld. For a century, Black Americans amounted to, at best, second-class citizens, until the end of legal Jim Crow segregation and discrimination, the extension of civil rights, and the protection of voting rights. 

 

What citizenship did for people, however, was to give them a place from which to make a stand for further rights. Citizenship has continually been a means to an end, a path to the promises embedded in the Declaration of Independence: “All men are created equal.” Birthright citizenship radically altered this opportunity, by extending to everyone born within this country a status that could not be taken away from them. If citizenship is a status of potential futures, birthright citizenship opened new futures to millions.

 

These very futures, however, are under attack. Donald Trump has been quite vocal about his disdain for birthright citizenship. Trump made waves during his presidential campaign when he said the children of undocumented immigrants weren’t citizens. Just last year, Trump claimed he could end birthright citizenship by executive action, something he has reiterated in recent days. 

 

His condemnation of birthright citizenship, however, is not out of the ordinary for conservatives. Five years before Trump kicked off his campaign, Senator Lindsey Graham went on Fox News to say “birthright citizenship I think is a mistake…We should change our Constitution and say if you come here illegally and you have a child, that child’s automatically not a citizen.” Other Republicans, including Mitch McConnell and John McCain, signaled their own approval by supporting calls for Senate hearings on the matter. 

 

Even those conservatives who seem to criticize Trump’s actions or his comments often fall short of rejecting the core assault on birthright citizenship. Take, for example, former Speaker of the House Paul Ryan’s response to Trump’s October 2018 remarks that he could end birthright citizenship through executive action, remarks that garnered widespread condemnation from both political parties. While calling such a move “unconstitutional,” Ryan nonetheless found common ground when he said that conservatives wanted to address the problem of illegal immigration. The rhetorical move on Ryan’s part, tying birthright citizenship to the problem of illegal immigration, still presented the status as a root cause of the problem. Ryan’s lukewarm criticism of Trump’s opinions on the Constitution has since given way to increasingly upfront attacks on birthright citizenship. 

 

In January, during his confirmation hearings, Attorney General William Barr refused to acknowledge that birthright citizenship was even part of the Fourteenth Amendment. When asked by Senator Mazie Hirono about his opinions on the matter, Barr attempted to punt, claiming he hadn’t “looked at” the issue. Yet he also claimed Congress could legislate on the matter, betraying his earlier claim of unfamiliarity. Only a person who was invested in dismantling birthright citizenship, and who had actually looked at the issue, would have made such a claim, considering the widespread acceptance by everyone besides conservatives of its centrality to the Fourteenth Amendment.  

 

As conservatives have increasingly embraced an extra-constitutional solution to birthright citizenship, Trump and his administration have instituted harsher and harsher immigration policies. While it may appear at first glance that these policies are aimed at undocumented migrants, they actually serve a dual purpose. 

 

One only needs to look at the numerous cases of U.S. citizens who are picked up and detained by the government to know this administration’s policies are not only meant to curtail unwanted migration, but are also aimed at destroying birthright citizenship. Since the inception of U.S. Immigration and Customs Enforcement (ICE) in 2005, the agency has falsely detained U.S. citizens. A recent study found that ICE, from 2005 to 2017, likely detained thousands of citizens in Texas alone. The number of citizens detained is likely to increase in the future, as the Trump administration has just announced a new policy expanding a practice known as “expedited removal.” Expedited removal allows ICE agents to decide a person’s immigration status, often without the review of a judge, depending on what documents that person can produce at a moment’s notice. While expedited removal had previously applied only to people within 100 miles of the border, the administration has now empowered ICE to use the process across the entirety of the United States. Expedited removal leaves people to the whims of individual ICE agents, removing the possibility of due process through a court hearing. Considering ICE’s blatant refusal to accept proper documentation when detaining U.S. citizens, it is only logical to assume this new policy will subject untold numbers of citizens, and lawful residents, to being falsely swept away. 

 

Why are Trump, his administration, and conservatives so invested in undermining birthright citizenship? Besides the obvious racism, the answer is becoming increasingly clear with time: power. The end goal was shown recently when Trump and Attorney General Barr held a press conference on the defeated census citizenship question. While speaking about the administration’s efforts to continue compiling citizenship data on the population, Barr explained that “there is a current dispute over whether illegal aliens can be included for apportionment purposes. Depending on the resolution of that dispute, this data [the number of citizens] may be relevant to those considerations.” In other words, Barr said conservatives want to use citizens, and only citizens, when counting the number of people within a district, a state, or even the nation. By doing this, conservatives could continue in their efforts to hold onto political power, even though they are the numerical minority. 

 

This admission by Barr came days after the Supreme Court played into conservatives’ hands by declaring that it was not within the federal judiciary’s jurisdiction to rule on political gerrymandering. While many people rightly pointed out that this decision was a disaster for democracy, the Supreme Court also signaled an opening for conservatives in their fight against birthright citizenship. By taking a hands-off approach to political gerrymandering, the Court has all but said that if, and when, conservatives enact the sort of laws Barr spoke of, they can count on the Court’s implicit, if not explicit, sanction. Such a move by the Court would pave the way for conservatives to change the very definition of citizenship through policy alone.  

 

This, however, would not be the first time such a thing has happened. Take, for example, Puerto Rican citizenship. As scholar Sam Erman has shown in his new book on Puerto Rican citizenship, at the turn of the 20th century the U.S. government took similar steps to define who could and could not be a citizen of the country. Largely through policy and legislative actions, the U.S. government was able to ward off attempts by Puerto Ricans, who had just been forcefully included within America’s growing overseas empire, to claim citizenship. For decades, the government attempted to deny the constitutional extension of citizenship to the island’s residents. The Supreme Court largely acquiesced to this strategy, refusing, on many occasions, to rule on the matter. Considering the behavior of the Court in recent years, it is not too much to think the Justices would revive this strategy and either refuse to get involved, or rule so narrowly as to allow overall policies to stay intact. 

 

The long-term strategy of conservatives illustrated here also indicates, once again, their true thoughts on who constitutes the citizenry of the United States: white Americans. Nothing better illustrates the links between birthright citizenship, immigration policy, partisan gerrymandering, and the long-held racism of Trump and his fellow conservatives than Trump’s own attacks on four Congresswomen of color. In a series of tweets aimed at Ilhan Omar, Alexandria Ocasio-Cortez, Rashida Tlaib, and Ayanna Pressley, Trump declared that those who were unhappy with the United States should “go back and help fix the totally broken and crime infested places from which they came.” Even though only one of these Congresswomen, Omar, was born outside the United States, Trump and conservatives have defended the attacks on “The Squad” by arguing that those who criticize the United States are “un-American” and unpatriotic. Trump’s racism towards these Congresswomen is pointed directly at the idea of birthright citizenship, and citizenship more generally. Even if he doesn’t say anything about the status itself, the attack on U.S.-born and naturalized citizens is undoubtedly a manifestation of the racist idea that “citizen” means “white.” Furthermore, the refusal, by anyone, to condemn these attacks as racist simply feeds into the dismantling of birthright citizenship. 

 

Birthright citizenship is supposed to be an anti-racist status. Established after a civil war fought over slavery, it was meant to keep the shackles of enslavement off. Even as that promise was not upheld, it nonetheless gave Black Americans and scores of other marginalized people room to breathe and assert their place within this country. Now, conservatives wish to pull the rug out from under the Constitution and craft this nation into the image of the white republic they wish existed.  

 

Birthright citizenship is not where this stops. One of the most common places totalitarian regimes begin their assaults on society is by defining who can and cannot legally be a member of that society. It quickly spreads from there. Constitutions and laws bind governments, even as they empower them. Totalitarian regimes have no interest in being bound by the law, even the “supreme law of the land.” They are only interested in power. If the very status Americans are born with can be thrown aside in the name of racism and political power, then our entire government can be as well. 

 

Related links: 

-Adding a Citizenship Question to the Census Will Return It to Its Racist Origins

-William Barr Needs a History Lesson

 

]]>
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172904 https://historynewsnetwork.org/article/172904 0
The Latest G-7 Summit Showcases Trump's Foreign Policy Failures

 

The just-concluded G-7 summit featured a tour de force of diplomatic dexterity and leadership, as French President Emmanuel Macron strove desperately to bring President Donald Trump back into the real world. 

 

On all the key issues of the day—climate change, Iran, and the trade war with China—Macron used all the Parisian charm he could muster to nudge Trump from the arrogant and amateurish perch which has thus far characterized his diplomacy. Macron succeeded in raising the possibility of renewed negotiations with Iran and China, but whether anything comes of this after the unpredictable US President re-crosses the Atlantic is anyone’s guess.

 

Unless Macron or someone else can bring to the surface a thus far deeply buried sense of realism in Trump, he is well on his way to being arguably the biggest foreign policy disaster ever to inhabit the White House. Here are the top three reasons why:

 

Reason No. 1: Withdrawal of the United States from the Paris Agreement (2015) on climate change.

 

President Trump’s empty chair at the G-7 summit meeting on climate change spoke volumes. When civil war-battered Syria signed on to the Paris accord in November 2017, Trump’s announcement earlier that year of unilateral US withdrawal left the United States alone among all the nations of the world in refusing to combat the scientifically verified, potentially devastating, and daily snowballing effects of global warming. This is the single most reckless, damaging, and rudderless action Trump has taken. If he is reelected, the American people will be voting for nothing less than the destruction of life as we know it on planet Earth.

 

Reason No. 2: Abandoning arms control treaties with Iran and Russia.

 

One of the great achievements of the right-wing Reagan presidency was the INF Treaty, signed with Soviet leader Mikhail Gorbachev in December 1987, which eliminated an entire class of intermediate-range missiles in Europe and established rigorous verification procedures for compliance. One of the great achievements of the centrist Obama presidency was the 2015 Iran nuclear treaty, more formally the JCPOA, or Joint Comprehensive Plan of Action, in which Iran agreed to rigorously verifiable limitations on its ability to enrich uranium for bomb-making in return for the lifting of US-sponsored international economic sanctions.

 

The born-rich real estate tycoon President with zero foreign policy experience summarily terminated both treaties. 

 

The other permanent members of the UN Security Council (China, Russia, Britain, France) as well as the European Union signed off on the treaty with Iran, which is why Macron is trying desperately to get it back on track. To its credit, Iran appears willing to reopen negotiations as well. Thanks to the French president and the Iranians, Trump has a chance to rethink his precipitous and reckless action.

 

Trump’s termination of the INF Treaty frees his pal Russian President Putin as well as the venerable US military-industrial complex to rev their engines and restart the nuclear arms race, which has been quiescent for decades. Another critical treaty with Russia, the New START Treaty, also denounced by Trump, expires in 2021.

 

The termination of these arms control treaties ranks near the top of Trump’s foreign policy failures because, well, as the old bumper sticker read, “One nuclear bomb can ruin your whole day.”

 

Reason No. 3: Recognizing Jerusalem as the “eternal capital” of Israel and the Golan Heights as Israeli territory.

 

Because of the overweening influence of AIPAC, the American Israel Public Affairs Committee—one of the top two or three lobbies in the country—over the US Congress and the American public, all too many Americans may not find these actions, or the termination of the Iran treaty, objectionable. But anyone with broader knowledge of the Middle East conflict understands that Trump is playing with dynamite.

 

Jerusalem is a holy city for Christians and Muslims as well as Jews and cannot be dominated by any one of them if there is ever to be hope of peace. Under the moribund two-state solution, East Jerusalem was to be the capital of a Palestinian state.

 

As the UN and the international community have repeatedly affirmed, Israel has no legitimate claim to either the Golan Heights or the West Bank, both of which it nonetheless has been settling for decades in blatant violation of international law. Trump is not the first president to bow to AIPAC and Christian fundamentalists on Middle East policy, but he has taken it to a new level by signing off on Israel’s sole occupation of Jerusalem.

 

 Alas, There’s More . . . 

 

The top three above strike me as the most serious Trump foreign policy failures because their consequences can be catastrophic. But there is a litany of failure on the part of this president.

Trump has alienated all of Africa by calling it a continent full of “shithole” countries; he has done nothing to calm the India-Pakistan dispute playing out today in Kashmir, at the risk of escalation between two nuclear-armed powers; he has given little rhetorical support to the pro-democracy movement in Hong Kong while launching a trade war with China; he has anointed a new ruler who can’t actually come to power in Venezuela; and he has inspired neo-fascist movements all over the globe, as men such as Brazilian President Jair Bolsonaro and Philippine president Rodrigo Duterte praise his leadership.

 

There is still hope that Trump can make a breakthrough with North Korea’s Kim Jong Un, but his diplomacy thus far--if that’s what one can call the three highly publicized meetings with Kim--has done nothing but give visibility and legitimacy to a ruthless autocrat. A deal could be secured if the United States and its allies offered a trade along the lines of ending offshore military maneuvers, which make Kim feel threatened, in return for denuclearization. But for reasons known only to himself, Trump has chosen instead to strike up a pointless friendship that legitimates a petty dictator while achieving no tangible results.

 

Where on this list, some may wonder, is Trump’s indifference to Russian meddling in American elections? It is clearly true that Russia meddles--but the fact of the matter is, so do we, all over the world. So while Russian meddling is not “fake news,” neither is it an existential threat to world peace like the issues discussed above. Sorry, Democrats: Russia did not decide the election--the Electoral College and too many naïve American voters did that.

 

There is still time for Trump to reverse his legacy of diplomatic ineptitude. If he fails to do so he may well go down in history as the most feckless foreign policy president in American history.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172910 https://historynewsnetwork.org/article/172910 0
The Fight against Climate Change Must Become the New Abolitionism

Activist Greta Thunberg made headlines last week after she traveled to a UN climate summit in a zero-emissions sailboat

 

The planet continues to fry. Yet, America is pumping more fossil fuel than it has in decades. And the U.S. government does nothing to slow the damage. Just the opposite — Washington is now promoting oil, gas and coal while attacking clean energy.

We’ve fought against climate change for at least thirty years. Yet, it seems that activists have failed to reverse or even slow the flow of the greenhouse pollution that’s destroying the world’s climate.

Why has it taken so long for the climate movement to accomplish so little? And, since the clock is ticking to curb runaway global heating, how can we do better in the future?

To answer these questions, leaders in the fight against climate change from Al Gore to Bill McKibben to Naomi Klein have gone back to history.

They’ve compared today’s campaign to cut greenhouse gas pollution to the trans-Atlantic movement in the nineteenth century to abolish slavery.

It’s about politics, not science.

The idea is that fighting climate change today is going to be as hard politically as it was to free millions of enslaved people in the nineteenth century. So, we should accept just how big a political and social movement we’ll need to save our climate. And then we should learn how the abolition movement attacked the equally big problem of slavery in the past–and how they won against great odds.

For example, in This Changes Everything, Klein writes that abolition was a social movement that “succeeded in challenging entrenched wealth in ways that are comparable to what today’s movements must provoke if we are to avert climate catastrophe”:

The movement for the abolition of slavery in particular shows us that a transition as large as the one confronting us today has happened before — and indeed it is remembered as one of the greatest moments in human history. The economic impacts of slavery abolition in the mid-nineteenth century have some striking parallels with the impacts of radical emission reduction.

 

Klein recommends a 2014 essay by MSNBC on-air personality Chris Hayes, “The New Abolitionism: Averting planetary disaster will mean forcing fossil fuel companies to give up at least $10 trillion in wealth.”

I agree that Hayes’ essay is brilliant and that it should be required reading for anyone who cares about the climate crisis. So let me summarize his argument in some detail below on the assumption that it’s well worth hearing what Hayes has to say.

So Much Goddamn Money

Unlike famous social movements in American history such as women’s suffrage or LGBTQ rights, abolition was about much more than religious values or personal prejudices about people who seemed different from the dominant norm.

For Hayes, what makes climate change much harder to deal with than other social issues is the amount of money that powerful people would stand to lose by abolishing fossil fuels. This makes them fight harder against outlawing their product, just as southern slaveholders fought hard against abolition in the nineteenth century.

Before I go into Hayes’ comparison of slavery and climate change, it’s important to share a disclaimer. He’s not making a moral equation between humans who suffered under slavery and anything to do with climate change: 

“There is absolutely no conceivable moral comparison between the enslavement of Africans and African-Americans and the burning of carbon to power our devices. Humans are humans; molecules are molecules. The comparison I’m making is a comparison between the political economy of slavery and the political economy of fossil fuel.”

When it comes to political economy, abolition was about very big money. That’s the problem.

“So much goddamn money,” as Hayes puts it. Since abolishing fossil fuels is also about very big money, Hayes contends that it’s “impossible to point to any precedent other than abolition”:

The leaders of slave power were fighting a movement of dispossession. The abolitionists told them that the property they owned must be forfeited, that all the wealth stored in the limbs and wombs of their property would be taken from them. Zeroed out. Imagine a modern-day political movement that contended that mutual funds and 401(k)s, stocks and college savings accounts were evil institutions that must be eliminated completely, more or less overnight. This was the fear that approximately 400,000 Southern slaveholders faced on the eve of the Civil War.

 

Hayes estimates the value of slave “property” (it’s obscene to refer to human beings as property today, but that was indeed the issue to slaveowners in the nineteenth century) at $10 trillion in today’s money. Just before the Civil War, that represented 16% of the total value of the U.S. economy and fully 50% of the economy in the southern states.

“In 1860, slaves as property were worth more than all the banks, factories and railroads in the country put together,” Civil War historian Eric Foner told Hayes. “Think what would happen if you liquidated the banks, factories and railroads with no compensation.”

Keep It in the Ground

To have any hope of a livable world in the future, according to Bill McKibben’s calculations in “Global Warming’s Terrifying New Math,” the world’s governments must limit average worldwide temperature increases to 2 degrees Celsius (3.6 degrees Fahrenheit). McKibben published those numbers in 2012, and things have gotten worse since then.

Even more so today, to hope to keep the climate under any safe limit of temperature rise, most of the remaining known fossil fuel reserves around the world will have to remain in the ground, unburned. In 2012, McKibben estimated 80%. Today, the number will surely be higher. Whatever your figure, asking oil, gas and coal companies to take most of their product off the market is going to be a very hard sell, as Hayes explains:

Proceeding from this fact, McKibben leads us inexorably to the staggering conclusion that the work of the climate movement is to find a way to force the powers that be, from the government of Saudi Arabia to the board and shareholders of ExxonMobil, to leave 80 percent of the carbon they have claims on in the ground. That stuff you own, that property you’re counting on and pricing into your stocks? You can’t have it.

 

By coincidence, Hayes conservatively values those fossil fuel reserves at $10 trillion, the same amount as all the slaves in the south were worth just before the Civil War. The implications are scary:

The last time in American history that some powerful set of interests relinquished its claim on $10 trillion of wealth was in 1865—and then only after four years and more than 600,000 lives lost in the bloodiest, most horrific war we’ve ever fought.

 

Interestingly, the moment when slavery faced its biggest threat was also the moment when southern politicians became most aggressive about defending the Peculiar Institution.

 

In earlier days, slave-owning Founding Fathers including Thomas Jefferson and Patrick Henry had been ambivalent and even apologetic about keeping people enslaved in a republic dedicated to personal freedom. George Washington referred to slaveholding as “the only unavoidable subject of regret” in his life and freed all his slaves in his will.

“Very few people at the time of the Revolution and the Constitution publicly affirmed the desirability of slavery,” according to historian Foner. “They generally said, ‘We’re stuck with it; there’s nothing we can do.’”

A few decades later, things changed as slavery became more lucrative, due both to the end of the international slave trade in 1808 and the invention of the cotton gin, which made growing cotton using slave labor many times more profitable. As a result of a declining supply and increased demand, slaves became more valuable.

“Between 1805 and 1860, the price per slave grew from about $300 to $750, and the total number of slaves increased from 1 million to 4 million—which meant that the total value of slaves grew a whopping 900 percent in the half-century before the war,” Hayes explains.

Predictably, slaveowners’ ethics followed the dollar. South Carolina Senator John C. Calhoun, the most famous defender of southern planters, argued that the United States should abandon its former policy of restricting slavery to areas where it was already established and keeping it out of new territories in the west.

Instead, slavery should be expanded to new territories because the system was nothing to be ashamed of. Indeed, for Calhoun, slavery was a benevolent institution in which Americans should take pride, “a positive good”:

I hold that in the present state of civilization, where two races of different origin, and distinguished by color, and other physical differences, as well as intellectual, are brought together, the relation now existing in the slaveholding States between the two, is, instead of an evil, a good—a positive good.

 

Ironically, just as slavery became more profitable, the growth of the abolition movement made slavery more endangered. “On the eve of the war, slavery had never been more lucrative or more threatened,” writes Hayes. “That also happens to be true of fossil fuel extraction today.”

 

The advent of new technologies, including deepwater drilling and fracking, has enabled America to produce more oil and gas than it has in decades. In 2018, the U.S. surpassed its 1970 peak of oil production and overtook both Saudi Arabia and Russia to become the world’s leading oil producer. The state of Texas alone will pump more oil than either Iraq or Iran in 2019.

 

At the same time, the politics of climate change have gotten ever more polarized. After about 2008, Republican politicians who had once joined their Democratic counterparts in vowing to fight climate change suddenly flipped and became climate science deniers.

 

 


 

 

For example, in 2008 Newt Gingrich appeared with Nancy Pelosi in a TV ad urging action on climate change. Then, just a year later, Gingrich referred to that ad as the “dumbest single thing I’ve done in years,” rejected the science and embraced denialism. Hundreds of Republicans followed suit.

 

A few years later, Republicans had gone even further and were now celebrating fossil fuels as a positive good. Texas Representative Steve Stockman tweeted in March 2013 that “the best thing about the Earth is if you poke holes in it oil and gas come out.”

 

Yet, just when oil and gas (sorry, coal!) started to become more lucrative than ever, the carbon lobby was more threatened than ever by climate change activists, as Hayes explains:

 

In the same way that the abolition movement cast a shadow over the cotton boom, so does the movement to put a price on carbon spook the fossil fuel companies, which even at their moment of peak triumph wonder if a radical change is looming around the corner.

 

After showing that it might be as hard to deprive fossil fuel companies of $10 trillion in reserves in the future as it was to deprive southern slaveholders of $10 trillion in free human labor in 1860, Hayes counsels his reader not to despair. It won’t take another Civil War to abolish fossil fuels.

 

Hayes reassures us that several factors will cut into profits of fossil fuel companies and thus, reduce their political clout:

  • Costs to extract unconventional oil and gas will rise
  • Opposition to infrastructure projects like pipelines will increase
  • Challenges on Wall Street including divestment, demands for higher dividends and shareholder revolts will put pressure on management to reform

Taken together, Hayes argues, these challenges to the fossil fuel business model will make oil, gas and coal producers much less profitable in their waning days than southern plantations were before the Civil War. That, in turn, will reduce the political power of dirty energy companies and make it easier for activists to get government to make the companies leave their remaining reserves in the ground.

A Kinder, Gentler and Quicker Abolition

Death by a thousand cuts? Unfortunately, arguing that market forces and activism alone will be enough to pressure fossil fuel companies into agreeing to write off $10 trillion in stranded assets sounds like wishful thinking.

Given the massive and lasting political power of fossil fuel companies, as shown by their ability even now to retain the largest share of government subsidies given to any industry in the U.S., it seems unlikely that such puny efforts as Hayes lists will be sufficient to abolish fossil fuels anytime soon.

At this rosy conclusion I have to part company with Hayes’ brilliant argument. Instead, I have to agree with New York Times writer Josh Barro that “Slavery Is Not Like Carbon Emissions.”

Whatever moral argument abolitionists made about the evil of slavery, economically there was no carrot for slaveowners, only a stick. Since slaveholders saw abolition as a program offering all pain and no gain, they were determined to fight it to the death.

If you recognize that the problem of fossil fuels is like the problem of slavery, in that the problem-causers stand to lose a great deal of money if the problem is solved through straight abolition, then making the solution easier for powerful opponents to accept requires finding a way to neutralize the opposition of the property owners in each case.

Ron Paul once suggested that the United States government should have bought out slaveholders to prevent the Civil War.

This is exactly how slavery was abolished in the British Empire starting in 1833 — and it was done peacefully. The Americans tried compensated emancipation on a very small scale. In 1862, Lincoln signed an act to free 900 enslaved people in the District of Columbia by paying $300 for each of them to their owners. Unfortunately, Lincoln’s efforts to do the same in border slave states within the Union and even in southern states that had joined the Confederacy failed. As a result, most enslaved people in the south were freed at the end of a Union army bayonet.

Barro suggests that we can avoid the climate change equivalent of civil war with a compensated and gradual approach to abolish fossil fuels:

An effective carbon limitation policy should bring large economic gains to people who are not in the business of fossil fuel extraction, in the form of reduced economic disruption due to climate change. While owners of fossil fuels have a strong economic impulse to extract, the rest of us should have a strong economic impulse to limit extraction — and we should be willing to buy off the resource owners, if necessary, to enforce those limits.

 

To reduce opposition to gradually abolishing fossil fuels, Barro suggests policies such as a cap-and-trade system that would let fossil fuel producers reap the rewards of higher energy prices.

 

Cap-and-trade has declined in popularity over the last few years, and a simpler way to price carbon such as a carbon tax appears to be the best solution to reduce emissions effectively. A carbon price will also be more popular politically if revenues are refunded to citizens, as in recent proposals for a carbon dividend.

 

But Barro’s point remains sound: paying polluters to stop polluting may be morally distasteful but it may also be the shortest distance to cut fossil fuel emissions fast.

 

“These approaches look like a giveaway, but it’s worth making the giveaway if that’s what brings the benefits of stable temperatures.”

 

While galling to many people, compensating fossil fuel companies to leave most of their remaining oil, gas and coal in the ground might be the best way to avoid the climate change equivalent of a civil war.

 

Today, such a war would not just pit brother against brother in the United States or any other single nation. Runaway global heating could set the world aflame both environmentally and politically, sparking decades of conflicts across the globe and killing millions or even billions of people.

 

It was horrific for Americans to lose more than 600,000 of our countrymen by the time Lee surrendered to Grant at Appomattox in April of 1865. The number of potential deaths from climate wars in the future will be unthinkable. If it’s possible to avert that level of suffering by holding our noses and paying off the bad guys, then it might be a tradeoff for which future generations will thank us.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172912 https://historynewsnetwork.org/article/172912 0
Where Are the Hong Kong Protests Headed?

Protestors in Hong Kong

 

After ten weeks of protest by students and others in Hong Kong the crisis seems to be spiraling toward a tragic climax. Beginning as a protest against a proposed law to permit the extradition of criminal suspects to mainland China, the demonstrations have evolved into broader calls to safeguard—or, perhaps more accurately, restore—the semi-autonomous territory’s democracy. With China’s government now employing rhetoric like that which preceded the 1989 Tiananmen Square Massacre, Hong Kong’s pro-democracy protesters—and, indeed, its democracy—might well be in grave danger.

 

As the unrest continues, the Chinese government’s tolerance is running short—and its warnings are growing more ominous. The People’s Liberation Army garrison in Hong Kong is, in the words of its commander, Chen Daoxiang, “determined to protect national sovereignty, security, stability and the prosperity of Hong Kong.” To emphasize his point, a promotional video showing Chinese military officers in action was released with his statement.

 

Yang Guang, a spokesperson for the Chinese government’s Hong Kong and Macau Affairs Office, has echoed this sentiment, warning the protestors—whom he called “criminals”—not to “mistake restraint for weakness.” He then re-stressed the government’s “firm resolve” to protect the prosperity and stability of Hong Kong.

 

Zhang Xiaoming, the director of this office, took matters a step further, declaring that China’s government “has sufficient methods and enough powerful means to quell all sorts of unlikely unrest (dongluan).” This came only two months after China’s defense minister argued that China’s stability since the Tiananmen crackdown proved that the government had made the “correct” choice.

 

Beijing’s increasingly harsh warnings against Hong Kong protesters point both to a hardening of positions and to the rise of figures in the Chinese government who favor establishing complete control over the territory. Their position has been reflected in the response of the police force, which has been using rubber bullets and tear gas with increasing frequency. Hundreds of protesters have been arrested; forty-four have been charged with “rioting.”

 

Yet far from being deterred, the protesters are challenging the Chinese government with increasing determination. In July they vandalized the outside of the Chinese government’s liaison office in the center of the city. Last week they started a general strike at the local airport that nearly paralyzed the city, one of Asia’s most important commercial hubs, causing bedlam and the cancellation of more than 150 flights. Worried about going too far and alienating the Hong Kong citizenry, the protest leaders ended the airport strikes. Perhaps counterintuitively, this radicalization has broadened support for the movement, with members of Hong Kong’s middle class—including lawyers and civil servants—now openly supporting the protesters.

 

Seeing that their stark warnings are ineffective, China’s leaders may well be concluding that the best—or perhaps the only—way to restore their authority in Hong Kong is by force. Of course, if Xi Jinping chooses to suppress the protests violently, the international optics for him and his country would be terrible: China would look like a heavy-handed oppressor. Alternatively, he may choose to wait to act until after the celebrations of the 70th anniversary of the founding of the People’s Republic on October 1. Is a crackdown now, or in two months, the answer?

 

Moreover, there is no guarantee that such a crackdown would succeed. To begin with, Hong Kong’s 31,000-strong police force is not up to the task: it lacks manpower, and its officers might refuse to use deadly force. After all, there is a significant difference between firing rubber bullets at a crowd and slaughtering civilians. China would have to use the local People’s Liberation Army (PLA) garrison or transfer tens of thousands of paramilitary soldiers of the People’s Armed Police (PAP) from the mainland. (Chinese state media has released video showing that a large PAP contingent has been deployed in Shenzhen, the city bordering Hong Kong.)

 

Hong Kong residents (only 10% of whom consider themselves Chinese) almost certainly would resist. The resulting clashes would probably produce a high number of civilian casualties and effectively mark the official end of the “one country, two systems” agreement of 1997. Beijing would be forced to assert full and direct control over Hong Kong’s administration.

 

Once the Hong Kong government’s legitimacy is destroyed, the city would immediately become ungovernable. Civil servants would quit their jobs in great numbers. Hong Kong’s complex communications and logistics systems would also be at risk of disruption by defiant locals.

 

After the Tiananmen crackdown, the Chinese Communist Party’s (CCP) ability to re-establish control rested not only on the presence of tens of thousands of PLA troops, but also on the mobilization of party members. In Hong Kong, where the CCP has only a limited organizational presence (officially, it claims to have none), this would be impossible. Moreover, given the large number of Hong Kong residents employed in private businesses, Beijing could not control them as easily as mainlanders who depend on the government for a living.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172906 https://historynewsnetwork.org/article/172906 0
The Carter Presidency Reconsidered

 

There are at least two compelling reasons why this is a good time to reassess the presidency of Jimmy Carter. First, he is rapidly approaching his 95th birthday. On October 1, one month from today, he will solidify his record as America’s longest-living former president. Second, during a brief but revealing dust-up between Donald Trump and Jimmy Carter recently, the President of the United States told us he thought his predecessor was a “terrible” and “forgotten” president. Having served four years in the Carter-Mondale White House, I believe strongly that both assertions are dead wrong, and I will argue here that Carter’s was one of the most consequential presidencies in recent history, particularly in his commitment to human rights.

 

When Americans think of Jimmy Carter today, they often hasten to commend his work after he left the White House. He and his wife Rosalynn founded the Carter Center in Atlanta dedicated to promoting democracy in emerging countries, to resolving international disputes peacefully, and to eradicating, especially in Africa, chronic and deadly diseases such as guinea worm. He has set a new standard for former presidents by selflessly dedicating himself to help people around the world improve their lives.

 

As worthy as Jimmy Carter’s post-presidency has been, it shouldn’t overshadow his time in office, which has been too often overlooked, and which stands in sharp contrast to what we see in the White House today. President Carter was well known for tackling almost every tough issue that came his way, usually regardless of the political cost:  

 

*Carter struggled with a chronic energy crisis, but in the end he put the country on a clear path to energy independence.        

 

* By deregulating natural gas and appointing Paul Volcker to head the Federal Reserve, he brought inflation under control, where it remains.  

 

* Carter appointed more women, African Americans and Hispanics to judgeships and senior positions than all of his 38 predecessors combined.

 

*He created new departments of Energy and Education, but perhaps the most significant structural change he made was the creation of “the modern vice presidency,” which he and Walter Mondale shaped to enable the nation’s number-two elected official to reside just steps from the Oval Office, with complete access to the president and the White House information flow, and to be available to the chief executive for advice and/or special assignment. This model has been replicated, with appropriate modifications, by almost every subsequent administration. 

 

*With administrative tools and the cooperation of Congress, President Carter successfully led the effort to protect, incredibly, 140 million acres as new parklands, wildlife refuges, national forests and wilderness areas in Alaska.

           

Carter’s accomplishments in the international arena were equally impressive:

 

* He brought about a lasting peace between Israel and Egypt after 13 intense days at Camp David with Anwar Sadat and Menachem Begin.  

 

* Carter proposed transferring control of the Panama Canal to the people of Panama, perhaps the toughest issue of all, by his own account.  The canal today under the Panamanians is an unqualified success story.

 

*He also reached an arms control agreement with the Soviet Union and normalized diplomatic relations with China. The list goes on, all of it spelled out in Stuart Eizenstat’s splendid and thorough new book, “President Carter.”  Other Carter biographies are in the works, ensuring that his presidency will not be “forgotten.” 

            

In the administration’s final days, Vice President Mondale famously summarized the four years: “We told the truth, we obeyed the law, and we kept the peace,” words Carter had inscribed on the wall of The Carter Center. In 2015, in the introduction to his then-new book, “A Full Life,” he repeated the Mondale quote and then wrote, “I would add, ‘We championed human rights.’”

 

It was an appropriate addition to his legacy because, as he said later, “I decided that human rights would be the centerpiece of our foreign policy.” More than any other president since Abraham Lincoln, Carter consistently embraced human rights, whatever other issues required attention. Even Panama was a human rights issue, because U.S. control of the canal embodied America’s flirtation with imperialism in an earlier era and thus impeded our ability to promote democracy in a region enamored of authoritarianism.

 

After years of American neglect, the president dispatched his U.N. ambassador, Andrew Young and others to Africa to bring aid and American values, a combination that won many new friends. He sent Vice President Mondale to meet with Prime Minister Vorster of South Africa to state America’s unequivocal opposition to apartheid. Mondale also sought Vorster’s assistance in bringing majority rule to Zimbabwe/Rhodesia. Virtually all of these Carter initiatives were ultimately successful. 

 

He ordered a reluctant U.S. Navy to rescue refugees fleeing Southeast Asia in unseaworthy boats on the high seas, seeking safe havens hundreds of miles away. The rescue operation likely saved thousands of lives.

             

It was this kind of principled and courageous leadership that earned Jimmy Carter the Nobel Peace Prize for his “untiring effort to find peaceful solutions to international conflicts, to advance democracy and human rights, and to promote economic and social development.” 

 

As impressive as all this is, the administration’s record was not without setbacks. In the early years, the American people were hurt by high prices and long gas lines, painful reminders of a staggering economy and an uncertain energy future. By the end of his term, however, solid policies were in place in both sectors, and prosperity was in sight, if not yet fully in place. 

 

Carter was committed to achieving universal health care, but believed it had to be achieved incrementally so he proposed a universal catastrophic plan ensuring that no family’s resources would be depleted by a serious illness or injury.  But Senator Ted Kennedy, who planned to challenge the president for the 1980 nomination, insisted that the nation move immediately to universal coverage. Negotiations between the two broke down; Kennedy refused to compromise even after he lost his campaign for the nomination; sadly, the nation had to wait another 30 years to advance the cause of universal health care.

 

When the Soviet Union invaded Afghanistan, Carter imposed stiff economic sanctions, a controversial decision in the farm belt. He began a buildup of American military might  -- continued by President Reagan -- that the Russians were compelled to match, eventually bringing on the economic crisis that caused the Soviet Union to disintegrate. 

 

Perhaps most damaging to Carter, Iranians took 52 diplomats and others from the U.S. embassy in Tehran hostage for 444 days. Working patiently to secure their release through diplomacy with no results, he finally ordered the military to plan and execute a risky rescue mission, which tragically fell victim to desert sand fouling the helicopter engines on which the mission depended. Dejected but still determined, Carter continued to negotiate for the release of the hostages, which he ultimately secured, although the Americans did not clear Iranian air space until minutes after Carter officially left office.  

            

If Jimmy Carter had a flaw as president, it wasn’t in the areas of policy or principle, but rather in the politics surrounding an issue.  He believed the campaign was over and he hated hearing political arguments in policy discussions.  I remember sitting in the cabinet room early on when someone made a political point about welfare reform; the president looked at him with his steely blue eyes and said, “I want to hear substantive arguments, I’ll take care of the politics.” Well, sometimes he did and sometimes he didn’t, but I saw him soon come to recognize that effective governance requires a mix of both policy and politics, in the right measure at the right time.  He continued to grow in the office, and never strayed from what he thought was best for the country.  Through four years in the White House, I never failed to feel proud to be there and to work for this president.  

 

It was true what Mondale had said, as amended: “We told the truth, we obeyed the law, we kept the peace and we championed human rights.”  Those words deserve serious reflection, because they bear virtually no resemblance to what we see in the presidency today, certainly not on our southern border.

                             

 

]]>
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172909 https://historynewsnetwork.org/article/172909 0
Sparta and the Collapse of Greece

 

Of the two most powerful states in classical Greece, Athens was a forward-looking democracy with a far-flung naval empire, Sparta a land-locked mixed government heading a league of nearby states.  In 431 BC, the long-simmering rivalry between them erupted into open warfare.  The protracted war that followed consumed the entire Greek world.  That war—known as the Peloponnesian War, or the Atheno-Peloponnesian War, because of Sparta’s power base in the peninsula of the Peloponnesus—is commonly believed to have ended in 404 BC with the surrender of Athens. In my book The Plague of War: Athens, Sparta, and the Struggle for Ancient Greece, I argue that the war did not truly end until 371 BC, when the crack infantry of Sparta’s erstwhile ally Thebes shattered its military supremacy at Leuctra. Grave as its consequences were, the Peloponnesian War did not sap the strength of the Greeks and thus lead in a direct line to their ultimate defeat at the hands of Philip of Macedon, father of Alexander the Great.  Rather, the explanation lies in the subsequent conduct of the Spartans, whose pigheaded aggression against their own allies throughout the decades after their victory over Athens weakened Greece immensely.

 

Neither the course nor the conclusion of the Peloponnesian War was foreseeable at its outset.  Strong in infantry, the Spartans neglected to build a navy until almost twenty years into the war; the Athenians dramatically weakened their imperial fleet by becoming involved in an internal dispute in Sicily. The fortunes of war swung dizzily back and forth for decades until, bolstered by financing from Persia, the Spartans under their charismatic admiral Lysander defeated the Athenians at the Battle of Aegospotami in what is now northern Turkey and captured all but a handful of their 180 ships.  The next year the Athenians surrendered.

 

The war had done immense damage to Athens.  It had lost its fleet and its empire; its land had been ravaged, its economy badly disrupted.  The Spartans’ success in the war seemed to leave them holding all the cards, as the opportunity to unite Greece in an expanded Peloponnesian League now lay ready to hand.  Their skills, however, lay not in diplomacy but in fighting; they had planted the seeds of future trouble in signing a peace with the Athenians that did not take into consideration the wishes of their important allies Corinth and Thebes.  So overbearing were they, indeed, that to their old enemies the Athenians they soon added new ones:  their own former allies—and Persia.

 

Free of their war with the Athenians, in 401 the Spartans moved at last against their nearby ally Elis in the western Peloponnesus; they had been nursing a grudge against the Eleians for nearly twenty years, since a conflict over access to the Olympic Games in their territory.  Attempting to expand Spartan influence in the east, Sparta’s new king Agesilaus soon alienated Persia, and the Athenian admiral Conon, one of the few who had escaped the carnage at Aegospotami, joined the Persians in defeating the Spartans soundly in a naval battle at Cnidus off the southwest coast of Turkey. This victory undid many of the gains the Spartans had won a decade before at Aegospotami.

 

Meanwhile, back on the Greek mainland, Agesilaus’ meddling in the internal affairs of Sparta’s key ally Thebes angered not only the Thebans but other Greeks as well. Encouraged by an infusion of Persian gold, a new alliance of Thebes, Corinth, Athens and Sparta’s old rival Argos went to war with Sparta in 395.  Although Sparta was victorious, the Spartans proved better at winning wars than winning the peace.  The treaty that ended the war, dictated by the Persian king, stipulated that all the Greek cities of Asia Minor were to belong to him but that all other Greek states were to remain autonomous. Having made up his quarrel with the Spartans, the King appointed them the guarantors of the peace.  Agesilaus made the enforcement of the autonomy clause his personal crusade and promptly went about dissolving any promising union of Greek states in the name of “autonomy.”  He also scandalized his fellow Greeks by winking at the seizure of the Thebans’ citadel by one of his generals, an outrage that could in no way be rationalized as a defense of autonomy, as well as at a similar attempt against the Athenians by another Spartan commander.

 

Together Thebes and Athens fought Sparta on and off throughout the 370s, but it was the Thebans who finally dealt their former ally a death blow at Leuctra in 371.  Only then did the Peloponnesian War finally end. Defecting to Sparta after his recall from Sicily, the Athenian Alcibiades had suggested to the Spartans that when they proved victorious over Athens they would enjoy a hegemony over Greece grounded not in force but in goodwill.  But events proved that the Spartans preferred to be feared rather than loved. This fatal preference would dictate their actions in the decades after Aegospotami, and in the end their downfall was brought about not by the old enemy they had fought for so long but by a new one created by their shortsighted foreign policy.

 

]]>
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172915 https://historynewsnetwork.org/article/172915 0
The Value of History Podcasts: An Interview with Historian and Podcaster Mike Duncan

 

Mike Duncan is both a podcaster and a New York Times best-selling author. His podcasts include the award-winning “The History of Rome” and “Revolutions,” and his books include The History of Rome: The Republic and The Storm Before the Storm.

 

What sparked your interest in history, both Ancient Roman and in general?

 

I have always been interested in history. It goes all the way back to childhood. I can't point to any one thing that sparked my interest. It's just always been there. It's in my blood. I used to read the World Book Encyclopedia for fun and the old civilizations were especially fascinating: the Romans, Greeks, Mayans, Egyptians, and Incas. They were like aliens from a whole different world. And I’m as fascinated by them today as I was when I was a kid.

 

What inspired you to create the award-winning History of Rome podcast?

 

It was a combination of two things. First, I personally discovered podcasts back in 2006. I was inspired by all the shows I was listening to back then, but particularly by Lars Brownworth’s 12 Byzantine Rulers podcast. It was simple, informative, and readily available at the click of a button. It made me say to myself, “I would like to make something like that.” At the same time, I had gotten really into reading the ancient historians like Plutarch, Livy, and Polybius. The stories I was reading were fantastic, but they were locked up in very dry texts that most people never read. So I sat down one day and decided to start a podcast where I would take all of this wonderful information buried in those ancient histories and repurpose it to tell a complete narrative history of the Roman Empire from beginning to end. I had no idea what I was getting myself into.

 

What do you think makes podcasts effective in getting people interested in and learning about history?

 

Mostly it’s a matter of accessibility. You can listen to a podcast while you're stuck in traffic, on the subway, doing chores, or exercising. Podcasts fit into the times in our daily lives when we would dearly love our brains to be engaged with something far more interesting than whatever mechanical chore we’re doing. Then of course there is the magical fact that we all have smartphones now, so podcasts are ready and waiting to be listened to all the time. There's no barrier. If you see something that interests you, you can just subscribe and listen. And let’s face it, it can be hard for people to carve out time to read a stack of books about history. This is modern life. Podcasts are easy, free, and fit into our lives.

 

With new technology and new mediums of communication, how do you think the field of history is best suited to adapt to changing times?

 

History is always going to be well-suited to any new medium that comes along. Great history has been produced for every major advancement in media and communication: from cave paintings, to ancient scrolls, to handwritten books, to modern mass-produced books, radio, television, feature film, and now to all forms of digital media, YouTube videos and, of course, audio podcasts. Whatever the next thing is that comes along—neural interface plug-ins or Holodeck-style virtual reality or whatever—there will be a place for history.  Someone will always say: how can I use this to teach history? Someone else will say: how can I use this to learn history? So wherever we’re headed, history will always thrive.

 

What lessons did you learn from the History of Rome that have helped you with your Revolutions podcast?

 

It was mostly just a matter of being able to start fresh with all the tricks, tools, and methods that I fumbled my way to acquiring in the early days of The History of Rome. My narration was stronger and more confident. The writing was better. The research methods were better. Revolutions was launched from a foundation of five years' worth of trial and error from doing The History of Rome. But also, I just wanted to keep doing what worked in The History of Rome: don’t worry about frills and gimmicks, produce an episode once a week, put it out on time, and then keep doing that forever. It worked then. It still works now.

 

You have also written two books on Ancient Rome. How different was the process for writing your books from creating your podcasts?

 

There were a couple of big differences. With the podcast, I am writing and publishing a discrete chapter each week. It is not a system that leaves much room for going back and revising earlier work in light of things I say later on. So a great thing about writing the book was that when I finished the manuscript, I could go back and revise earlier sections to make better connections and introduce ideas or characters at times that fit more naturally. There was more opportunity to make specific changes after reflecting on the totality of the work. The other big difference is that when I write for the show, I know I will be personally narrating the material. So if a particular sentence is convoluted or confusing, I can read it to the audience in a way that makes sense. But with the book, I had to rely on the words themselves to do that work without me—to carry the story forward without me. So I was much more careful and precise about the language, so the reader never drifted away in confusion or boredom.

 

What books are you currently reading?

 

Not counting books I’m reading for either Citizen Lafayette or the Revolution podcast, I’m currently reading Population Wars by Greg Graffin. It’s a reflective book about geography, evolutionary biology, human culture and how populations interact with each other. I have learned a lot about the geological layers under the Finger Lakes region of upstate New York. Also how plumbing works. 

 

What is your favorite story, anecdote, or lesson from your studies of history?

 

When we study the English Revolution it is clear King Charles I got his head chopped off because he was too inflexible and uncompromising. So we take a lesson from that: don’t be so hard. Be willing to negotiate. But then we get to the French Revolution and it’s clear King Louis XVI got his head chopped off because he was too flexible and compromising. He had no backbone. He was too easily swayed. He kept changing his mind and giving mixed signals. So we take the opposite lesson: Don’t be so soft. Don’t just back down all the time. The point being…there is no monolithic LESSON OF HISTORY. History has a million different lessons to teach us and they are often contradictory. It is never going to be 100% clear which lesson we should be learning. We just have to do our best. 

 

We understand you are writing a new book on the Marquis de Lafayette. What else is on the horizon for you?

 

Citizen Lafayette will come out in the summer of 2021 and that will coincide with the end of the Revolutions podcast. I have at least a dozen ideas for what I might do next, but I’m not going to reveal them here because I don’t want to lock myself into anything or set expectations for a project that doesn’t pan out. I’ll keep telling everyone about the past, but I’ll let my future remain a mysterious mystery.

 

Lastly, do you have any advice for historians interested in starting a podcast? 

 

I think you should do it. That is my advice. If you are passionate and knowledgeable about a subject, please share that knowledge and passion with us. Bear in mind that nobody will listen to you for at least a year, but if you work hard and do a good job, your audience will find you. And we will all be better off with your voice in the mix. 

 

 

 

]]>
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172913 https://historynewsnetwork.org/article/172913 0
Writing Fiction about Real People

 

Biographers can report what happened to their subject and when; they can also suggest reasons why it happened. But only a novelist can climb inside the subject’s head and describe their innermost thoughts and insecurities. It’s in that secret place, hidden behind the bare facts of a life, that I like to write.

 

The recent trend for biographical novels about strong historical women has produced some cracking reads: Stephanie Dray and Laura Kamoie’s My Dear Hamilton, Stephanie Marie Thornton’s American Princess, and the works of Paula McLain, of which my favourite is The Paris Wife.  In the UK Hilary Mantel and Philippa Gregory are perennially popular with their insider stories of the Tudor and Stuart monarchies, and many other novelists have dipped their pens in the biographical inkwell. None of them is attempting to rewrite history – it is always clear they are writing fiction – but they want to go deeper than the history books allow.

 

The best novels about real people make us re-evaluate the subject and perhaps alter our preconceived ideas. Thomas Cromwell, for example, is softer and more human in Wolf Hall than he is usually portrayed. I tried to do this with Wallis Simpson in my novel Another Woman’s Husband. She’s had a bad press from biographers, who focus on her alleged affairs and rumours that she ensnared the Prince of Wales using special sexual techniques learned in Chinese brothels. I used a trope many novelists have adopted before me when I viewed her through the eyes of her lesser-known best friend. Mary Kirk met Wallis at summer camp when they were both fifteen and remained close right up to the abdication crisis, so she had a unique viewpoint. History has judged Wallis for the effect she had on the British monarchy; I judged her for the way she treated her schoolfriend.

 

There are loads of dangers and pitfalls for novelists writing about real people, especially if their story is within living memory and already well known. When you are attributing made-up thoughts and dialogue, perhaps adding a few tics and quirks, there will always be some who object: “He/she would never have done that/said that.” I’m sure Joyce Carol Oates received a few such comments when she dared to write her brilliant novel Blonde, about Marilyn Monroe. 

 

Do novelists need to tell the truth about real people? I don’t think so. Personally, I try to stick closely to the historical facts, not because I feel an obligation to do so, but because I am curious to reach a version of the character that feels emotionally true, with enough authentic detail to be convincing for readers. I will omit facts that don’t contribute to the story and might play around with the timeline, but I always include a historical afterword where I confess what was made up, and credit the sources I used. But if a novelist chooses to reinvent someone entirely – for example, making Abraham Lincoln a vampire, or letting Hitler win the Second World War – that can also make a good novel. 

 

The libel laws (tougher in the UK than the US) can put the brakes on creative imagination. You can’t libel the dead, but you need to be super careful about the living, even if they’re only subsidiary characters. I thought all the protagonists were dead when I wrote The Affair, which takes place as Elizabeth Taylor and Richard Burton fall in love during the making of Cleopatra in Rome in 1962, so I was alarmed to receive a post-publication email from the wife of the director, Joe Mankiewicz. I had covered the story of the beginning of her relationship with Joe in the novel. Fortunately she approved of my version; it could have been awkward (and expensive) if she hadn’t.

 

How do you step into the shoes of a historical character? First, I go back to primary sources and start with their own writings, if any survive. In my latest novel, The Lost Daughter, I write from the point of view of Maria, middle child of Nicholas II, the last tsar of Russia. Her letters and diaries (translated from Russian by Helen Azar) don’t give much away but memoirs by the family’s tutors are more productive, and the hundreds of photographs and home movies the family shot (available online) are revealing.

 

I had to be careful not to view Maria from a 21st century perspective. One of the most important things about the Romanovs was their devotion to the Orthodox religion; another was their love of the mother country, which may have stopped them trying to escape. Since I am neither Russian nor Orthodox, and wasn’t born royal, this required a leap of imagination. 

 

After primary sources, I read the most highly acclaimed history books. Some historians disapprove of fictionalizations produced by novelists, who are seen as piggybacking on their original research. “Why write novels about the Romanovs when the truth is so compelling?” one historian asked me. My answer is that we have different jobs. The historian’s job is to present all possible interpretations of events in an even-handed and accurate way, set against the greater picture of the era; novelists get to pick sides, to have opinions about their characters, to narrow the focus and manipulate the facts in order to create a great story.

 

When writing about real people, I always hope I’ve done them justice – but that’s not the primary objective. The main obligation of the historical novelist is to write an entertaining novel that readers want to read. Apart from that, and the libel laws, there are no other hard-and-fast ‘rules’.

 

 

 

Gill Paul’s novel The Lost Daughter is published by William Morrow.

]]>
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172914 https://historynewsnetwork.org/article/172914 0
A New History of Islamic Empires Defined Through Illustrious Cities

 

Justin Marozzi is a travel writer and historian. He is the author of The Man Who Invented History: Travels with Herodotus; Tamerlane: Sword of Islam, Conqueror of the World; Baghdad: City of Peace, City of Blood; and other books. His latest book, scheduled for release on August 29, is Islamic Empires: Fifteen Cities that Define a Civilization, From Mecca to Dubai. Visit his website. Find him on Twitter @justinmarozzi.

 

 

What books are you reading now?

 

I’m reading Frank Dikötter’s How to be a Dictator: The Cult of Personality in the Twentieth Century, pretty timely, Elie Wiesel’s The Night Trilogy, Peter Pomerantsev’s This is Not Propaganda: Adventures in the War Against Reality and, for light relief, Patrick O’Brian’s Blue at the Mizzen. For years I’ve been resisting the last couple of books in his Aubrey-Maturin series, for fear of finishing this epic roman-fleuve, but I think it’s time to bite the bullet. And as a fellow O’Brian-obsessive consoled me the other day, you can always read them again.

 

What is your favorite history book?

 

There is only one contender. Herodotus’ Histories. It’s a one-volume masterpiece, still in print 2,500 years after he wrote it. If you were stranded on a desert island and were only allowed one book for the rest of your life, this is it.

 

Why did you choose history as your career?

 

I’m not sure I ever chose it. I’ve always had an interest in travel and history and the two work so well together. I strongly believe that wherever possible, a historian should “walk the ground”, get out of the armchair or lecture room and visit the places he or she is writing about. This is what I have tried to do in my latest book, a history of the Islamic world told through 15 cities across 15 centuries. It’s taken many years living and working in these places to even begin to understand them.

 

What qualities do you need to be a historian?

 

Imagination, discipline, patience, an enquiring mind and a sense of wonder all help. As does the ability to write! A willingness to do your own thing, whatever other people around you are doing. And an acceptance that if you choose the independent route, history is unlikely to pay the bills. 

 

Who was your favorite history teacher?

 

I was lucky enough to have several good ones at school and university. At Cambridge, I was taught by some wonderful historians like Neil McKendrick, Vic Gatrell, David Abulafia and John Adamson. Latterly I became great friends with Paul Cartledge, doyen of Cambridge classicists, while researching my book The Way of Herodotus.

 

What is your most memorable or rewarding teaching experience?

 

I don’t teach history but from time to time I tutor aspiring writers on residential courses in the UK. It’s immensely rewarding – mentoring and encouraging and helping them get closer to what they want to achieve.

 

What are your hopes for history as a discipline?

 

In the words of the great twentieth-century Greek writer Nikos Kazantzakis: “I hope for nothing. I fear nothing. I am free.” History will never go out of fashion. Look at how history is being written today. It’s come such a long way from its single-minded obsession with kings and queens, statesmen, generals, battles and wars, laws and constitutions. These days it demonstrates its interest in the ordinary lives of ordinary people, as well as those of our rulers. It’s the most inclusive subject there is. Historians look at sex, sport, culture, domestic lives, economies, revolutions, migrants and the marginalised, as well as the more traditional political and constitutional matters. History constantly refreshes and reinvents itself and that is its abiding power. It has a built-in resistance to obsolescence.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I have a few old editions of favourite books by favourite writers. Soldiers and explorers like T.E. Lawrence, Doughty, Barth, a Hemingway or two. I have a few mementoes from countries I have lived and worked in and written about. A broken padlock from the notorious Abu Selim prison in Tripoli, which I visited just after its liberation during the 2011 revolution; pieces of carved wood from one of Saddam Hussein’s Baghdad palaces which I use as bookends. A stone from the huge mountain-cairn of stones which legend says were left by Tamerlane’s soldiers en route to war with China in the early fifteenth century. And no end of old Persian rugs!

 

What have you found most rewarding and most frustrating about your career?

 

Working alone – and working alone! Writing is an inherently lonely business and if you’re an independent historian, not attached to a university, you have all the freedom – no boss, no publishing requirements, etc. – and none of the support that comes with a large institution. On balance I’ll always err on the side of complete, untrammeled freedom. 

 

What are you doing next?

 

My new book is Islamic Empires: Fifteen Cities That Define a Civilization, published this summer. It uses fifteen of the most glorious cities within the Islamic world to tell its history, starting from Mecca in the seventh century and ending in Doha in the twenty-first. Along the way we look at Damascus under the Umayyads, Baghdad in its ninth-century heyday under the Abbasids, the First Crusade in Jerusalem in 1099, Marinid Fez, Tamerlane’s Samarkand, Ottoman Sultan Mehmed’s capture of Christian Constantinople in 1453, Babur’s Kabul, Isfahan under Shah Abbas, the extraordinary rise of Beirut in the nineteenth century and the creation of Dubai by the Al Maktoum family in the twentieth. One of the most important lessons these cities show us is the importance of tolerance and moderation. Each of these cities was astonishingly cosmopolitan in its heyday, with Jews, Christians and Muslims coexisting more or less happily. Today, places like Baghdad, Damascus and Tripoli have been hollowed out, their rich, mixed populations dispersed by external interventions and internal strife. That’s something to reflect on. On a more positive note, much of the world is still flocking to Dubai in a modern-day version of the waves of immigration to Baghdad more than a thousand years ago. Cities are like living organisms. They need to be nurtured.

]]>
Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172911 https://historynewsnetwork.org/article/172911 0
The Problem with The Rosa Parks Barbie

 

This week Mattel proudly announced two new dolls in The Inspiring Women Series of Barbies.  Civil rights activist Rosa Parks and astronaut Sally Ride join a line that has previously featured aviator Amelia Earhart, artist Frida Kahlo, and NASA mathematician Katherine Johnson. The description of Rosa Parks, though, is a bit lacking, as it claims that she “led an ordinary life as a seamstress until an extraordinary moment on December 1, 1955.” She is described as having a “quiet strength” that “played a notable role in the civil rights movement…” The problem with these descriptions is that they reduce her once again to the tired woman on the bus, an image that can be found in the 1957 comic book The Montgomery Story, designed by the Fellowship of Reconciliation to promote nonviolent activism. In the comic book, Parks “quietly” refuses to move because she was tired and her feet hurt.

 

In reality, Rosa Parks’s moment on the bus was a planned, strategic act, and her life before it was anything but ordinary.  In her early 40s, she was an established local activist working with the Montgomery branch of the NAACP.  As Danielle L. McGuire has described in At the Dark End of the Street, Rosa Parks worked to investigate rape cases. This included the 1944 rape of Recy Taylor, featured in a recent documentary.  Parks had also previously resisted unfair treatment on the buses; the same bus driver had removed her from a bus years before.  The previous summer, she attended Highlander Folk School, an institution that once focused on labor organizing but began to focus on segregation issues in the 1950s.

 

None of this is groundbreaking information about Rosa Parks.  Historians have been discussing her prior activism for well over a decade. A great short summary of her long term activism can be found in Jeanne Theoharis’ chapter “’A Life History of Being Rebellious:’ the Radicalism of Rosa Parks” from 2009. The problem is that the more in-depth narrative that historians have worked hard to reconstruct is continually lost in public consumption.

 

The history of the civil rights movement includes national narratives and local narratives.  The national narratives typically follow large organizations or noted figures such as Martin Luther King, Jr.  Taylor Branch’s series on America in the King Years tends to fit the national narrative. The local narratives focus on individuals or groups who were more important on a regional level. John Dittmer’s Bancroft Prize-winning Local People was one of the earliest examples of a historical work that examines the stories of those working in Mississippi. Most civil rights scholars would recognize that there is value in both, and that the civil rights movement could not have been as effective without either level.   However, the national narrative continues to dominate discussions in the larger public sphere.

 

This national narrative is decidedly a masculine one.  This is partially because civil rights leaders made sure that the public face of the movement was masculine.  Civil rights leaders sought to ensure that black men, often deprived of leadership roles in the workforce and broader community, had the opportunity to be recognized leaders within the movement.  This was driven to some degree by Cold War era beliefs about proper gender roles within the home that impacted all Americans. Some women in the movement accommodated this leadership vision, some challenged it, and others worked around it, but at the end of the day they were rarely portrayed as leaders. For example, no women spoke at the March on Washington and James Meredith did not invite women to join the March Against Fear.

 

This masculine national narrative is what appears most commonly in our school curriculums and textbooks.  Take for example the History-Social Science Content Standards for California Public Schools.  Rosa Parks is the only female civil rights advocate listed in the examples for 11.10.4.  Similarly, in the Alabama Course of Study, Parks is the only woman listed for the 4th grade (standard 14) and 6th grade (standard 9) requirements.  Alabama does, however, extend the 11th grade list (standard 14) to include Autherine Lucy and Vivian Malone Jones. Students in Alabama might then receive a more nuanced civil rights education, one that at least in the upper grades makes room for a local vision which includes more women.  However, textbook companies publish texts which will match a number of state curriculums so a story that would be of interest in Alabama may not sell in the more heavily populated California. A national narrative then is convenient but woefully incomplete.

 

Herein lies the problem with the Barbie version of Parks. It is focused on her role in the national narrative of the civil rights movement. Her given narrative is that she, out of nowhere, has an “extraordinary moment” and then “quietly” moves to the edges of the picture. Her earlier work, her time as an advocate for rape victims, her previous attempts to change the bus system, and her other NAACP work are only visible if we stop and think about her as part of a local narrative, a story which is admittedly harder to sell given the public’s lack of familiarity with it. Mattel has an opportunity here to do more. We can only hope they will.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172882
Roundup Top 10!

Teaching in the Age of Me Too

by Eric S. Yellin

Eric S. Yellin was nervous about teaching a course based on a false rape accusation but found he was wrong to be so anxious -- and that the experience offered four lessons for instructors.

 

Graduate Worker Organizing is Scholarly Praxis

by Hannah Borenstein

For many inside and outside of academia, the notion that graduate students are indeed workers is not readily clear. In large part, I came to see this as mirrored through the reproduction of academia’s lack of emphasis on scholarly praxis.

 

 

Why anti-immigration politics hurt white workers

by Inés Valdez

No, immigrants aren’t taking your job — but vilifying immigrants helps undermine worker protection.

 

 

Under Trump, the deficit has ballooned, exploding a GOP myth

by Julian Zelizer

Once again, President Trump has exposed a myth in American politics. That myth? That Republicans are the party of fiscal responsibility.

 

 

Democratic candidates are finally talking about domestic terrorism. Here’s why that matters.

by Alaina E. Roberts

When Beto O’Rourke referred to the Tulsa massacre, he was correcting the record on racial violence.

 

 

Bret Stephens launches a foolish Twitter war

by David M. Perry

Being called a bedbug just isn't a big deal. Writing to a provost about the actions of an academic on Twitter, which Stephens said he did because "managers should be aware" how "their people...interact in the world," is the big deal.

 

 

Shedding light on secret laws governing presidential power

by Mary Dudziak

This system of secrecy curtails the ability of historians like me to research some of the most consequential opinions justifying executive actions.

 

 

It's time the public had a share in the past again

by Diána Vonnák

What we can learn about heritage-based urban renewal from Lviv, Ukraine.

 

 

World War II Didn’t Begin in 1939. It Began in 1936 — In Spain.

by Paul Richard Huard

Revisiting world history might be one way to break Spain’s national silence about its civil war.


 

The Weaponization of History

by Wilfred M. McClay

Ignorantly invoking slavery or the Holocaust is an affront to those who seriously study the past.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172903
The Rabbi, the Telegram, and the Holocaust

 

Seventy-seven years ago today, a telegram bearing a horrifying and unforgettable message reached America’s foremost Jewish leader. It revealed the first comprehensive details about the systematic mass murder that would come to be known as the Holocaust.

 

* * *

 

The author of the fateful message was Gerhart Riegner, a 30-year-old attorney serving as the Geneva representative of the World Jewish Congress, an international Jewish rights organization.

 

From his post in neutral Switzerland, Riegner had closely followed the scattered reports about the massacres of thousands, then tens of thousands, of Jews carried out in conjunction with the German invasion of the Soviet Union in the summer and autumn of 1941. But he could not imagine that those atrocities were part of some broader genocidal strategy.

 

In early 1942, Riegner began hearing reports about the mass deportation of Jews to unknown destinations "in the East," never to be heard from again. Then, in August, a German industrialist with close ties to the Nazi hierarchy revealed shocking new information to a Jewish associate in Switzerland, who then passed it on to Riegner. The source said the deportations were part of a plan by Hitler to concentrate "all Jews [from] countries occupied or controlled [by] Germany" in eastern Europe, where they would be "exterminated at one blow" through poison gas.

 

Riegner did not yet realize that the gassing was not merely a plan, but had been underway in Auschwitz, Belzec, and other Nazi death camps for many months. Still, the new information was crucially important because it revealed that the killings were not random atrocities but rather part of a systematic, organized campaign to murder every Jew in Europe.

 

Riegner asked a U.S. consular official in Geneva, Howard Elting, to forward the information to the Roosevelt administration and to Rabbi Stephen S. Wise, America’s most prominent Jewish leader. Elting found the news difficult to believe, but he knew Riegner to be "a serious and balanced individual," as he wrote to the U.S. consulate in Bern, recommending that Riegner's message be sent to Washington. 

 

Some other American officials took a different view. Jerome Huddle, at the consulate in Bern, was worried that "the recent agitation" in America about the fate of Europe’s Jews indicated that "there may be a big play soon to get all these people to the U.S.A." Riegner’s telegram might increase demands for admitting more refugees, something that the Roosevelt administration strongly opposed.

 

Huddle’s colleague, Leland Harrison, grudgingly agreed to send Riegner's news to Washington, but only with a cover note characterizing it as "war rumor inspired by fear." That language was expanded to "wild rumor inspired by Jewish fears" when the State Department later sent a summary of the information to the Office of Strategic Services.

 

When the telegram reached Washington on August 11, Roosevelt administration officials decided to withhold it from Rabbi Wise. Elbridge Durbrow of the State Department cited what he called "the fantastic nature of the allegation" and "the impossibility of our being any assistance" even if it proved true. In other words, since the administration had already decided it would not admit more refugees or take other steps, providing any kind of assistance was necessarily "impossible."

 

Embarrassing Repercussions

 

The Americans, however, were not the only ones whom Riegner had contacted. He also asked the British consulate in Geneva to send his telegram to Sidney Silverman, a Jewish member of the British parliament. 

 

Frank Roberts of the British Foreign Office warned his colleagues that the telegram might “provoke embarrassing repercussions.” Other officials agreed that drawing the public's attention to the Allies' inaction regarding European Jewry might be "embarrassing," but they feared withholding information from a member of parliament could be more trouble than it was worth. The Foreign Office chose to send Riegner’s cable on to Silverman. He, in turn, forwarded it to Rabbi Wise, on August 28, seventy-seven years ago today.

 

Shortly after receiving the shocking message, Wise telephoned Undersecretary of State Sumner Welles. Welles, feigning surprise and never letting on that the State Department had tried to stop Wise from receiving the news, poured cold water on Riegner's telegram. He told Rabbi Wise the Jews were being deported for "war work."

 

Welles and his colleagues had already received reports about Nazi mass killings of unprecedented proportions. On July 21, the U.S. consulate in Stockholm, Sweden, sent the State Department a report from Polish officials listing the number of Jews "executed by the Germans" during the previous year: 60,000 in Vilna, 40,000 in Latvia and Estonia, 84,000 in White Ruthenia, 100,000 in Kiev. Still, that was not the same as the kind of systematic, carefully-organized annihilation that Riegner was describing.

 

Welles asked Rabbi Wise to keep the Riegner information out of the press while the State Department investigated its veracity. Rabbi Wise believed he had no choice but to comply with the request, since he had no way of authenticating the information on his own, and he would need the cooperation of the State Department if the worst turned out to be true.

 

Welles and his colleagues did indeed begin investigating, but the pace of their efforts suggests they did not take the matter very seriously. It was not until three weeks later, on September 23, that the State Department asked the Vatican if it had any information about the mass killings. A curt reply from Rome mentioned only unverified reports about unspecified "severe measures against non-Aryans."

 

It was not until early October -- more than a month after his conversation with Rabbi Wise -- that Welles asked the U.S. consul in Bern, Leland Harrison, to contact Riegner for more information. Harrison then took two weeks to reply.

 

Fears Confirmed

 

In the meantime, Riegner had obtained letters from a Jewish refugee in German-occupied Warsaw who eluded the Nazi censors by using Hebrew phrases as codewords. The letters reported that "Mr. Jager [hunter, i.e. the Germans] told me that he will invite all relatives of the family Acheinu [our brothers, i.e. the Jews]"; that "Uncle Gerusch [deportation] works in Warsaw; he is a very capable worker" and that "his friend Miso [death] works together with him." Riegner gave the letters to the U.S. consulate in Geneva on September 28. Yet they did not reach the State Department until October 23.

 

Finally, on November 24, nearly three months after their first conversation about the Riegner telegram, Welles called Rabbi Wise "to confirm your deepest fears." At the same time, Welles made it clear that Wise should not attribute the confirmation of the news to the State Department. Officials feared that if the U.S. government verified the news, “the way will then be open for further pressure from interested groups for action” by the Allies to aid the Jews, as R. Borden Reams, head of Jewish affairs in State’s European Division, confided to colleagues. “The plight of the unhappy peoples of Europe including the Jews can be alleviated only by winning the war.”

 

 Wise immediately organized a press conference in Washington that evening. He told the assembled reporters that Hitler had “ordered the annihilation of all Jews in Europe by the end of the year,” and that “this news is substantiated in documents furnished to me by the State Department this afternoon.” 

 

The New York Times, which did not send a reporter to the press conference, published five paragraphs about it from the Associated Press on page 10, tacked onto the end of another story. The following day, the Times published additional details in six paragraphs at the end of a related story, which it relegated to page 16. The Atlanta Constitution buried Wise’s press conference on page 20, next to the train schedules. The major radio networks ignored it altogether.

 

In the months and years to follow, many excuses would be heard from the Allies as to why Europe’s Jews could not be rescued. But nobody could plead ignorance any longer. The news was not given the prominence it deserved—but the wall of secrecy had been shattered.

 

Although 77 years have passed since that fateful day when the Riegner telegram reached Rabbi Wise, the issues raised by that episode continue to resonate. How the U.S. government responds to genocide abroad, how the administration interacts with refugee advocates, and how the news media cover such events, all remain matters of pressing concern.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172880
50 Years ago, the KGB weaponized the fire at Jerusalem’s al-Aqsa mosque to sow discord between Israel and the United States

 

A notorious, murderous intelligence agency of one country finances and organizes a covert operation in another country, ostensibly against a third country but actually targeting a fourth. Such was international intrigue during the Egyptian-Israeli War of Attrition (1969-1970), and the cast of this hot regional drama of the Cold War were the Soviet Union, the United States, India and Israel.

 

Fifty years ago, on 28 August 1969, KGB Chairman and future Soviet leader Yuri Andropov wrote a memorandum to the Central Committee of the Communist Party. It suggested fomenting agitation in the Moslem world – specifically, in India – following the fire that broke out a few days earlier in Jerusalem’s al-Aqsa mosque. The means was to be a rent-a-crowd demonstration staged and budgeted by the agency, which perceived several advantages in such an operation – primarily fueling Indian resentment against the United States and Israel. But the document is intriguing from many other angles too – from the barefaced lie it builds upon to the timing and purposes of its promulgation.

 

“The arson by Israelis at the Moslem mosque of Al-Aqsa in Jerusalem,” Andropov wrote, “has caused a severe response in the Indian public. On 26 August, this issue was debated in the Indian Parliament, and Foreign Minister Dinesh Singh declared that ‘the Government of India is profoundly shocked by this incident.’ The KGB rezidentura [station] in India has the capability to organize, in this context, a protest demonstration in front of the US Embassy in India, with the participation of up to 20,000 Moslems. The cost of implementing this demonstration would be 5,000 rupees [about $650 at the time] and would be covered from the funds budgeted by the Central Committee of the Communist Party for special operations for 1969-1971. Request evaluation. Andropov.” In the left-hand margin, a handwritten note adds: “To be implemented. [General Secretary Leonid] Brezhnev,” the top Soviet leader.

 

After the demise of the USSR, the document was presented as evidence in Russian President Boris Yeltsin’s attempt to outlaw the Communist Party. The Constitutional Court rejected his suit, and so it was that the Party leader Gennadi Zyuganov remained a perennial, if unsuccessful, candidate for president. It was at the time of this trial that the document was obtained by Gideon Remez and Dr. Isabella Ginor, now associate fellows of the Hebrew University’s Truman Institute, who included it in the mass of evidence for their recent book The Soviet-Israeli War, 1967-1973 (Hurst/Oxford University Press, 2017).

 

“Andropov wrote what he and the memo’s recipients knew to be a lie,” Remez stressed to Walla News. The Russian language has no definite article, so he might be imputing the “arson” at al-Aqsa to unidentified Israelis or to the Israelis, meaning the state or its agents. But by the time of writing, it was already clear that no Israelis had set the mosque on fire on 21 August. Rather, it was a deranged young Australian, Denis Michael Rohan. Falsely exploiting an incident is what states, and certainly intelligence agencies, do all the time – but why lie in an internal communication? Every such Soviet document, let alone one from Andropov to the Central Committee, was checked and re-checked. The answer can be found in instructions that then-Soviet Foreign Minister Andrei Gromyko issued to his underlings: some day the archives will be opened and their contents will be seen. So, any papers must be phrased as if for public view – that is, to reflect the propaganda line. 

 

India was, at the time, no adversary of the USSR. On the contrary, the “non-aligned” orientation of Prime Minister Indira Gandhi and the socialist policies of her Congress Party were actually friendly to Moscow, and had significant domestic Moslem support (as against the Hindu-nationalist Janata Party – now BJP – which took power later and is now led by Prime Minister Narendra Modi). Consequently, Israel had no diplomatic ties with India; their relationship improved, and relations were established, only in the 1990s after the disintegration of the Soviet Bloc.

 

So, for lack of an Israeli embassy in New Delhi to picket, it had to be the American one -- but the KGB’s target was not there anyway. “This involved the superpower struggle,” said Ginor. “The arena was not merely India or the Moslem world, nor was it about al-Aqsa. It was the Suez Canal. It was the Cold War, and specifically its Egyptian-Israeli theater in the War of Attrition. What the USSR sought was first and foremost to use the fire as a wedge issue between the United States and Israel, to cast Israel as a burden on Washington by staging mass demonstrations throughout Islam against Israel for supposedly burning down a Moslem holy site.”

 

The War of Attrition was raging along the Suez Canal, with Israel getting US support while the USSR sponsored Egypt, but the opposing forces were not symmetrical. After being hammered since March 1969 by the numerically far superior, Soviet-supplied Egyptian cannon, Israel turned the tables in late July – a month before Andropov’s memo – by fielding its air force as “flying artillery.” This effectively countered Egypt’s guns and caused heavy casualties as well as collapsing morale in Egyptian formations. It also destroyed Egypt’s likewise Soviet-made SAM-2 anti-aircraft batteries. The Kremlin had already resolved to send in newer, Soviet-manned SAM-3s – but the Israelis bombed out their newly-dug emplacements before they could be deployed. However, this campaign, carried out largely by the A-4 Skyhawk tactical bombers that Israel began to receive in late 1967, also created a political problem. The US had provided these planes on condition that they not be used for “offensive operations.” Was hitting Egyptian positions that had been firing on Israeli forces in Sinai, which Israel had captured from Egypt in the Six-Day War of June 1967, an offensive or a defensive operation? That depended on the eye and address of the beholder, in Jerusalem or Washington. One of President Lyndon Johnson’s last decisions in office was to approve the sale to Israel of the newer, powerful F-4 Phantom jets, much feared by the Egyptians and Soviets. It was up to the fledgling administration of Richard Nixon to implement – or not – the supply, scheduled to begin in September at the slow pace of two planes at a time.

 

Faced with the turning tide against its client in the War of Attrition, the Soviet leadership resolved not only to dispatch better weapons to Egypt but also to man them with regular Soviet crews, as distinct from the advisers that were hitherto attached to Egyptian units. “On 1 August, recruitment and training of Soviet fighter pilots for action in Egypt was begun,” stated Remez. Ginor added: “Previous scholarship held that the direct Soviet military intervention, which included those SAM-3s too, started only in January 1970, after the first Phantoms to arrive in Israel began deep-penetration bombings in and around Cairo and a desperate President Gamal Abdel Nasser supposedly flew secretly to Moscow to plead for help. We have shown that it actually began as early as the previous summer.”

 

This set the scene for Andropov’s memo. The USSR had every interest to legitimize its massive, direct intervention. What could be better than to smear Israel – in Moslem view, of course, but mainly as seen from Washington, which had become Israel’s sole source for weapons, especially those vital planes? The effect of each specific “special measure” could not be measured, but a steady drumbeat might – Andropov hoped – cumulatively achieve a result. Thousands of screaming protesters outside the US Embassy in India would not only resonate in the media, but doubtless be reported by the ambassador to his home office.

 

The United States had just landed a man on the moon, but it was deeply mired in the mud of Vietnam and torn apart by domestic dissent. Moscow had reason to believe that capitalism was approaching its final phase and needed just another push. Western Europe was barely recovering from the upheaval of May ’68, and was still at odds with Uncle Sam. Leanings in “nonaligned” Latin America, Africa and Asia were more pro-Soviet than pro-American. The KGB, along with its dedicated junior partners such as the Romanian Securitate and the East German Stasi, was fostering anti-Western terrorist groups, from Palestinian and Irish nationalist organizations through ultra-left urban terrorists such as the Italian Red Brigades and the West German Red Army Faction, or Baader-Meinhof Gang. Through this prism, it was not too hard to portray Israel as damaging US efforts to improve relations with the Moslem world. The KGB constantly underwrote anti-Israeli and anti-Western subversion – the notorious “Jackal” Carlos was recruited and financed by the Soviets through Palestinian allies and much of his activity was aimed at Israel. The superpower contest was waged by proxy wars in the Middle East, Latin America, Asia and any other scene of opportunity, from the space race to the Olympic Games.

 

The Phantoms did arrive in Israel and went into action, but so many of them were shot down by the Soviet regulars’ SAMs that Israel was constrained to accept a cease-fire, ending the War of Attrition a year after Andropov’s initiative. The USSR immediately resumed preparation of the Egyptian Army for the complex operation of crossing the Suez Canal; more importantly, the Egyptians and Soviets  violated the truce right away by advancing the anti-aircraft missile shield up to the canal bank, creating the protective envelope that would enable the cross-canal attack in the Yom Kippur War three years later.

 

One can’t tell whether Andropov considered those 5,000 rupees well spent. This was one of many small operations that the fearsome Soviet superpower conducted against real or perceived adversaries. We do know that since soon after Israel’s establishment in 1948, it was a major target for the KGB, and remains one for the latter’s successor agency in foreign intelligence, the SVR. Andropov progressed from spy chief to the top leadership of the Soviet Union; decades after the USSR broke up, another KGB product now occupies the same Kremlin office in post-Soviet Russia. History has come full circle.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172856
A Voting History of American Jews From 1916 to Today

Last week, Donald Trump said that American Jews who vote Democratic show “either a total lack of knowledge or great disloyalty.” The statement has incensed the vast majority of the American Jewish community. Many Jewish organizations combined forces to criticize Trump’s tweets. AIPAC (the American Israel Public Affairs Committee), a strongly pro-Israel lobbying group, even joined with J Street, a competing organization often critical of Israel’s government, to criticize Trump’s language. To many, the assertion that Jews had an obligation to support Israel echoed the “dual loyalty” trope that Nazi Germany, Czarist and Stalinist Russia, and other nations in earlier times utilized to promote anti-Semitism.

 

The historical record shows that American Jews have a long history of supporting Democrats since statistics began to be kept in 1916. The Jewish population primarily migrated to Northern and Midwestern cities in the late 19th and early 20th centuries. Many found the Democratic party political machines to be receptive to their needs, beginning with New York Governor Alfred E. Smith, who ran for President in 1928. Once Franklin D. Roosevelt and the New Deal came along in the 1930s, the alliance of American Jews and the Democratic Party was sealed.

 

Many Jews became engaged in state and local Democratic politics, worked in Congress, and even served as advisers to Democratic Presidents. The Republicans largely did not work to gain the support of the Jewish community, and fewer Jews participated in Republican Party causes, which tended to be much more conservative, and opposed to the New Deal and the later Great Society initiatives under President Lyndon B. Johnson.

 

Historically, Democratic presidential candidates have received a majority of the Jewish vote. Woodrow Wilson received 55 percent of the Jewish vote in 1916 after he promoted the appointment of Louis Brandeis, the first Jewish Supreme Court Justice. In 1920, Socialist Eugene Debs won 38 percent of the Jewish vote and Democrat James Cox won 19 percent; combined, these figures show that only a minority of Jews voted for the Republican nominee and future President Warren G. Harding. In 1924, Democrat John W. Davis won 51 percent of the Jewish vote, Progressive Robert La Follette Sr. won 22 percent, and Republican President Calvin Coolidge won only 27 percent.

 

After 1924, Democrats won wide percentages of the Jewish vote. Alfred E. Smith won 72 percent in 1928. Franklin D. Roosevelt won 82, 85, 90 and 90 percent of the Jewish vote in 1932, 1936, 1940, and 1944, respectively. In 1948, Harry Truman won 75 percent and Progressive Henry A. Wallace won 15 percent, leaving Republican nominee Governor Thomas E. Dewey of New York only 10 percent of the Jewish vote. Even against popular war hero Dwight D. Eisenhower, Democrat Adlai Stevenson won 64 and 60 percent of the Jewish vote in 1952 and 1956, respectively. John F. Kennedy won 82 percent, Lyndon B. Johnson 90 percent, Hubert Humphrey 81 percent, George McGovern 65 percent, and Jimmy Carter 76 percent of the Jewish vote in the elections of 1960, 1964, 1968, 1972, and 1976, respectively.

 

In 1980, Carter only received 45 percent of the Jewish vote as many felt he had been too critical of Israel’s policies regarding the Palestinians. Nevertheless, when combined with Independent John Anderson’s 15 percent of the Jewish vote, Republican nominee and future President Ronald Reagan still received only 39 percent of the Jewish-American vote. In 1984, Walter Mondale, Carter’s former Vice President,  received 57 percent against Reagan’s 31 percent. After that, a vast majority of Jewish Americans voted for the Democratic presidential nominee: Michael Dukakis (64 percent); Bill Clinton (80 and 78 percent); Al Gore (79 percent); John Kerry (76 percent); Barack Obama (78 and 69 percent); and Hillary Clinton (71 percent) between 1988 and 2016.

 

It is clear that Trump’s attack on the Jewish vote will backfire, and that the Democratic nominee for President, no matter who it is in 2020, will likely gain at least 80 percent of the American Jewish vote especially considering that in the Midterm Congressional elections of 2018, 79 percent of American Jews voted for Democrats.  Nothing is likely to change the dedication of the American Jewish community to the Democratic Party, continuing the long loyalty and commitment they have with the party that has promoted their basic social and economic views.

Sat, 21 Sep 2019 10:51:26 +0000 https://historynewsnetwork.org/article/172878
The Cultural History of Woodstock and a Message of Hope

 

The 1969 event in upstate New York that would become known as Woodstock was originally billed as “three days of peace and music.” But as Harlan Lebo, author of 100 Days: How Four Events in 1969 Shaped America, explains, Woodstock would become more than just another rock festival. This is the first in a two-part series on Woodstock.

 

In the early morning hours of Monday, August 18, 1969, the Woodstock weekend concluded in spectacular fashion with Jimi Hendrix’s performance.  Hendrix played for nearly two hours – the longest set of his career.

 

Forty-five minutes into his set, Hendrix broke into his own rendition of “The Star-Spangled Banner,” injecting the national anthem with new meaning for a new generation.

 

“The guitarist performed his most famous solo, channeling the atmosphere of beauty and love amid anger and aggression that defined the culturally tumultuous era,” said music writer Andrew O’Brien.

 

“You can hear the Air Force dive bombers staking their lives for the country in Vietnam through Jimi’s whammy bar dives,” said O’Brien. “You can feel the mourning of American mothers and fathers in the fragments of military funeral hymnal ‘Taps’ he added near the song’s end. You can hear the nation’s chaos in the atonal distortion. And you can hear the hope shine through as Hendrix hits the anthem’s final notes with optimistic purpose.”

 

The concert officially closed with comments by stage announcer Chip Monck, imploring the stragglers to grab a plastic bag and help clean up – to do “anything you can do to give us a hand to leave this area somewhat the way we found it. I don’t think it will ever be quite the same.”

 

Monck was right; it was never quite the same.  The Woodstock weekend marked the beginning of a new era in America, a new consciousness about the way the world could be.  The 400,000 people who attended had arrived enthusiastic and dry; they left exhausted, hungry, and soaked – and part of the Woodstock Generation.

 

The new reality of a changed world

 

For many of those who had been there or had watched on television, Woodstock had shown them that their world had changed.

 

“Everyone dropped their defenses and became a huge extended family,” said concert promoter Michael Lang. “Joining together, getting into the music and each other, being part of so many people when calamity struck – the traffic jams, the rainstorms – was a life-changing experience.”

 

“None of the problems damaged our spirit,” said Lang, “in fact, they drew us closer. We recognized one another for what we were at the core – as brothers and sisters, and we embraced one another in that knowledge.” 

 

For Saturday Review writer Ellen Sander, Woodstock meant that popular culture “could no longer be overlooked or underrated. It’s happening everywhere, but now it has happened in one place at one time so hugely that it was indeed historic.” 

 

What happened, wrote Sander, was “the largest number of people ever assembled for any event other than a war lived together, intimately and meaningfully and with such natural good cheer that they turned on not only everyone surrounding them but the mass media, and, by extension, millions of others, young and old, particularly many elements hostile to the manifestations and ignorant of the substance of pop culture.”

 

Woodstock-the-film: creating an icon of the age

 

Despite its notoriety, Woodstock might have been remembered as just a pleasant shared moment had it not been for the March 1970 release of Michael Wadleigh’s documentary. Titled simply Woodstock, the Academy Award-winning film gave the concert and its attendees an iconic vibe. It became the link between the significance of the concert for those who attended and how the rest of the world would begin to respond to it. 

 

“I chalked up my appreciation of the festival to my enjoyment in attending,” said attendee Patricia Tempel, who later became a university professor and journalist.  “But when the movie was released, all of a sudden Woodstock became truly significant – it just exploded onto the culture.”

 

The documentary also clearly demonstrated that those who had attended represented only a fraction of the young Americans who were bringing new attitudes and ideas to the nation.

“After the movie came out, it seemed like youth and counterculture burst into the mainstream,” said Tempel. “In many respects, Woodstock was the beginning of seeing youth and counterculture values as this incredible market.”

 

Indeed, the concert and the movie solidified the increasing awareness of young people as the bellwether for cultural change, as well as a market to be tapped. That view was a long time coming; before the 1960s, the interests and needs of Americans under 21 as a separate entity were homogenized into broader U.S. culture, and thus marginalized to the point of non-existence.  

 

As one example among many of this neglect of American youth, consider the pre-1960s motion picture industry, which created little of relevance for the under-21 market as a separate audience. As a result, for the first 60 years of the movie business, issues affecting young people were nearly invisible.

 

By the mid-1960s, youth culture had not only risen in prominence, but began to move toward center stage in marketing, fashion, and entertainment. Woodstock – both the concert and the film – reinforced the change of direction toward the interests and purchasing power of the young as primary catalysts for all things social and commercial.

 

This shift toward younger audiences in marketing and cultural experiences helped shape a wide variety of large-scale youth-focused events that were unheard of before the summer of 1969. Today’s gatherings are not only large, but also broad in scope, including music events such as the Coachella Valley Music and Arts Festival and Summerfest in Milwaukee, as well as events that reflect broader personal interests in digital technology and community, such as Burning Man, Comic-Con, and South by Southwest.

 

Such events cater to today’s lifestyles and viewpoints that are as relevant in the 21st century as Woodstock was in its era; all owe much to the “three days of peace and music” in 1969.

 

The challenges to come

 

For promoter Lang, Woodstock also represented the challenges and opportunities that new generations would face.

 

“Woodstock declared that a young generation could take on the issues of personal freedoms, stopping an unjust war, creating respect for the planet, and work for human rights,” said Lang.  “Woodstock showed that the world can be a better and more peaceful place, and that view keeps resonating.” 

 

Woodstock attendee Jim Shelley, who was not politically involved in the ‘60s, said that for him Woodstock continues to be a reminder of the unceasing care required for the fragile victories won in that era.

 

“Because of Woodstock, I’m constantly aware that the issues we thought had been taken care of in the ‘60s will always need attention,” Shelley said. “Like when the Civil Rights Act was signed in 1964, we thought the right of everyone to vote was taken care of, but that issue needs attention now more than ever. We made a lot of progress on the environment, but now the issues about global warming and new forms of pollution are growing. 

 

"There was a level of shared consciousness that occurred that weekend – that we need to stay involved, and be sure that the next generation knows that it’s their turn to be involved.” 

 

Power just starting to be imagined

 

The lasting impact of Woodstock is not the tangible change that resulted from the weekend, but the broad philosophical viewpoint it inspired. As the aura of Woodstock continued to grow, America would learn that the message of the concert weekend had little to do with LSD, swimming naked in Filippini Pond, or even the music; what truly mattered was the spontaneous group cooperation on an unprecedented scale.

 

In spite of the rain, lack of food, and limited sanitation, those at Woodstock found earnest respect, kindness, and unconditional acceptance of others. When eyewitnesses recall their memories of Woodstock, they seldom – if ever – linger on the drugs or the sex; what they do treasure are the three days of unity – chatting happily with local cops, or sharing oranges with strangers, or standing on a street corner handing out lollipops – the simple human dignity of sharing and caring. 

 

As one idealistic young vendor at the event recalled, “The power to the people is just starting to be imagined – the things we never could have believed!”

 

A message of hope

 

The counterculture and social consciousness spawned in the 1960s would ebb and flow over the decades, in tandem with the ongoing seesaw of political viewpoints and national agendas across the nine presidential administrations since then.

 

Perhaps the most important change in the counterculture that peaked shortly after Woodstock was the migration of its issues into mainstream America. In the decade after Woodstock, much of the energy of the ’60s dissipated – in part happily – because some of the era’s major goals, such as expanding national social programs, protecting the environment, and advancing civil rights, had been at least partially achieved, or, perhaps more important, had moved into the ongoing mainstream discussion of America’s political and social concerns. The counterculture of the 1960s, in its endlessly evolving forms, continues today as a broad, influential force in a spectrum of social movements and cultural expression.

 

How Woodstock fit into this discussion was contested for years; the event was a lightning rod in the post-1960s clash of counterculture and The Establishment; to some, the concert would become the ultimate demonstration of new social accord on a mass scale, while to others it represented yet another excuse to avoid adult responsibilities and the conventional demands of a middle-class lifestyle. 

 

But for many Americans – one could dare say most – the issue that has remained steadfast is the philosophy of a better world that was embodied in Woodstock.

 

Woodstock vividly represents the intangible best qualities of the American experience. A month earlier, Apollo 11 demonstrated the most tangible expression of U.S. achievement; Woodstock symbolized the possibilities and dreams of a new generation – whether those dreams could be achieved or not. 

 

"What Woodstock represented, and what it still represents today, is hope,” said Barry Levine, who photographed the entire weekend for the documentary team. “Woodstock gave the hope that things could be different.”  

While broad labels are often used to describe the generations since 1969 – “baby boomers,” “millennials,” and “xennials,” among others – the term “Woodstock Generation” has taken on a special meaning: a description that applies to people of any age or orientation still committed to the ideals of the 1960s. 

 

Moreover, “Woodstock” has become part of the American vernacular, not just as the name of an event, but also as a term synonymous with the uplifting and inspiring. When Barack Obama was inaugurated almost forty years after the concert, the Wall Street Journal called the event, attended by more than one million people, “Washington’s Woodstock.”

“Woodstock showed that people can take care of each other,” said concert promoter Artie Kornfeld. “For that reason alone, it reaffirmed my faith in people."

 

 

 
