The End of Gangs

By Sam Quinones

Los Angeles gave America the modern street gang. Groups like the Crips and MS-13 have spread from coast to coast, and even abroad. But on Southern California’s streets they have been vanishing. Has L.A. figured out how to stop the epidemic it set loose on the world?

In 2007, when housing prices were still heated, factory worker Simon Tejada put his home on the market. It was a well-maintained three-bedroom in the Glassell Park district of Northeast Los Angeles, and the structure was appraised at $350,000. (Tejada had bought it for $85,000 in 1985.) But only one offer came in: $150,000. “Your house is fine,” the guy told Tejada. “The neighborhood’s awful.”

I met Tejada a few months later. I had been writing about gangs in Los Angeles since 2004, when, after 10 years as a writer in Mexico, I’d returned home to take a job with the Los Angeles Times. My reporting took me into scores of working-class neighborhoods and cities within Southern California, places like Pacoima, Watts, Azusa, Hawaiian Gardens, Florence-Firestone, and Harbor Gateway.

Gangs ravaged all these locales. Walls were covered with graffiti. Shootings were constant. In many of these neighborhoods, Latino gangs had taken to attacking and killing random black civilians, turning themselves into the leading regional perpetrators of race-hate crime.

Yet no place was as scary as the densely built two-block stretch around the corner from Simon Tejada’s house in Northeast L.A. Along those blocks, just south of the Forest Lawn cemetery, ran Drew Street, a stronghold of the Avenues, a Latino gang. I sat with Tejada in his home, shrouded behind a high wall and thick hedges, as kids in hoodies stood among densely parked cars outside and nodded at potential customers driving by. They carried on an unceasing trade in crack cocaine, with the apartments as their base. Graffiti was everywhere. When the police drove down Drew Street, it was two cars at a time, to the sound of gang whistles and Nextel chirps.

Six years later, on a sunny afternoon, I went back and found that Simon Tejada never sold his house. I met him this time outside the home, and we walked the street freely. Gone were the thugs in hoodies. Gone was the graffiti. As we strolled, Tejada waved to neighbors, some of whom had just bought houses. “Now I don’t want to leave,” he told me.

The transformation of Drew Street is not unique. In the past few years, street gangs have been retreating from public view all over Southern California. Several years ago, I spent a couple of days in the Florence-Firestone neighborhood, in an unincorporated part of Los Angeles County, interviewing some Florencia 13 gang members. One nearby garage was never free of graffiti for more than a few minutes a week. (This was the amount of time it took after the graffiti clean-up truck left for the 76th Street clique of Florencia 13 to re-deface the thing.) That garage wall has now been without graffiti for more than four years. I go by it every time I’m in the neighborhood.

Fifteen miles southeast of Florence-Firestone, much of the tiny city of Hawaiian Gardens used to be scarred with the graffiti of HG-13, a local gang that absorbed several generations of the town’s young men. The last three times I’ve been to Hawaiian Gardens, I’ve seen nothing on the walls, and young black men freely visit taco restaurants on the main drag, something that would have been inconceivable a few years ago. In Oxnard’s Colonia Chiques neighborhood in Ventura County, the decades-old neighborhood gang is no longer out in public, and its graffiti is gone.

Some of this is a state and national story, as violent crime declined by about 16 percent in both California and the nation from 2008 through 2012. But the decline has been steeper in many gang-plagued cities: 26 percent in Oxnard, 28 percent in Riverside, 30 percent in Compton, 30 percent in Pasadena, 30 percent in Montebello, 50 percent in Bell Gardens, 50 percent in El Monte.

Santa Ana once counted 70-plus homicides a year, many of them gang-related. That’s down to 15 so far in 2014, even as Santa Ana remains one of the densest, youngest, and poorest big cities in California. “Before, they were into turf,” says Detective Jeff Launi, a longtime Santa Ana Police gang investigator. “They’re still doing it, but now they’re more interested in making money.”

No place feels so changed as the city of Los Angeles. In 2014, the Los Angeles Police Department announced that gang crime had dropped by nearly half since 2008. In 2012, L.A. had fewer total homicides (299) citywide than it had gang homicides alone in 2002 (350) and in 1992 (430). For the most part, Latino gang members no longer attack blacks in ways reminiscent of the Jim Crow South. Nor are gangs carjacking, assaulting, robbing, or in a dozen other ways blighting their own neighborhoods. Between 2003 and 2013, gang-related robberies in the city fell from 3,274 to 1,021; gang assaults from 3,063 to 1,611; and carjackings, a classic L.A. gang crime born during the heyday of crack, from 211 to 33.

This has amounted to an enormous tax cut for once-beleaguered working-class neighborhoods. Stores are untagged, walls unscarred. Graffiti, which sparked gang wars for years, is almost immediately covered up. Once-notorious parks—El Salvador Park in Santa Ana, Smith Park in San Gabriel, and Bordwell Park in Riverside are a few examples—are now safe places for families.

Above all, with gangs far less present and active, people can move about with less fear. “There’s not so much of the hanging on the corners,” says Chris Le Grande, a pastor at the Great Hope Missionary Baptist Church and Youth Center in Florence-Firestone. A decade ago, gang wars gave the neighborhood one of the highest homicide rates in the region. “It’s a whole different attitude in the area.”

The shift has happened fast. “I don’t know if it’s a cultural shift or what, but being the member of a gang doesn’t have the same panache that it did,” says George Tita, a University of California-Irvine criminologist who has researched gangs and their use of neighborhood space. “Things have changed radically in the last five years.”

LITERATURE ABOUT STREET GANGS in the United States dates back at least to the 19th century, when diverse groups of immigrants began settling en masse in the tenements of New York. The reformer and muckraker Jacob Riis, who spent decades among the city’s poor, saw the gang as a temporary product of dislocation, something that “appears in the second generation, the first born upon the soil—a fighting gang if the Irishman is there with his ready fist, a thievish gang if it is the East Side Jew—and disappears in the third,” as he wrote in the Atlantic Monthly in 1899.

In Southern California, street gangs had a later start, and many histories trace them to the 1920s, when groups of Mexican American teenagers began to band together in shared ethnic alienation. California’s early Chicano gangs usually restricted their violence to the use of fists, chains, and knives. Guns were rare, and shootings were seen as unmanly. Black gangs, which began to form following a great migration of African Americans to Los Angeles after the Second World War, were also subdued in comparison to their later incarnations.

Several forces conspired to make gangs more volatile. The Watts riots of 1965 sparked massive middle-class flight from the cities of southern Los Angeles County and left a poorer, often jobless, underclass. A surge of immigration, both legal and illegal, caused arrivals to ally with one another in new ways. And illegal drugs became an immense market. Crack, emerging in the 1980s, offered enormous profits, which in turn led to increasingly violent turf protection.

A gang culture specific to Southern California took root. Gangs took over public spots—parks, markets, apartment buildings, and especially streets. A Southern California gang’s reputation depended on how well it controlled its territory, and it showed residents, especially youngsters coming up, who was really in charge. In Chicago, gangs had names that seemed right out of West Side Story—the Latin Kings, the Vice Lords. In Southern California, gangs named themselves after streets. Anonymous thoroughfares grew infamous: Grape Street, 18th Street, Hoover Street, 38th Street, Piru Street. Even Main Street had an eponymous gang.

Gangs formed within numerous ethnic groups (Armenians, Vietnamese, Cambodians, Laotians, and many others), but most were made up of African Americans or Latinos. Members of the Crips and the Bloods, two black Los Angeles gangs that took off in the early 1970s, split into ever more violent feuding subdivisions through the 1980s. Different Crip cliques emerged every 10 blocks or so east of the 110 freeway, from Downtown south to Watts, a confederation that came to be known as the East Coast Crips—and feuded ferociously with Crips west of the freeway. Gangs also formed in scores of isolated Mexican American barrios, from Pacoima to Pomona, and grew larger as Latinos emerged as the region’s dominant population.

Compounding all this was an explosion of apartment construction in the late 1980s. More than 187,000 units in Los Angeles County went up between 1984 and 1989, mostly on lots where single-family homes once stood. The dense complexes replaced working-class homeowners with low-income apartment renters. The new structures offered gangs more places to hide and recruit. Sweepers couldn’t get through the streets, now lined with cars. Departing tenants left sofas and old chairs on the sidewalks, a daily reminder of blight and decay.

By 1988, Los Angeles County had an estimated 45,000 Latino gang members and 25,000 black gang members, and dozens of neighborhoods were becoming open-air drug markets. When a young graphic designer was shot to death that year by a Crip (who had been firing at a rival) in the prosperous Los Angeles neighborhood of Westwood, near the University of California-Los Angeles, the city awoke to the problem roiling in L.A.’s low-income neighborhoods. Among the police and the press, 1988 became known as “The Year of the Gang.”

ONE COP WHO STEPPED into this mayhem was an Irish American newcomer to the Los Angeles Police Department named William Murphy. He arrived in 1988 with four years of policing in Massachusetts behind him. “I wasn’t sure if I would stay,” he recalls. But the job grew on him and he warmed to the city. Over the next quarter-century, as the region went from better to worse to better again, Murphy saw it all.

DANGER ZONE: At the corner of Avenue 32 and Drew Street, Los Angeles. (Photo: Sam Quinones)

The LAPD of the late 1980s was an organization both brutal and beleaguered. Its chief was Daryl Gates, a famously tough-talking cop’s cop, and the department reflected the man. Street operations and arrests were what cops did, and community outreach was regarded as soft-headed stuff. The department’s numbers hovered around 8,000 officers, a tiny force by major-city standards, amounting to more than 425 city residents per cop in 1990. (New York at the time had more than 30,000 sworn officers, or fewer than 250 city residents per cop.) As crime rates tacked higher amid the crack and gang epidemics, LAPD officers had little time to do more than go from call to call.

Bill Murphy’s first assignment was to the department’s Harbor Division, 20 miles south of downtown Los Angeles. There, too, gangs were on the rise, and density was adding to the problem. In one neighborhood, some 70 single-family houses were replaced with 477 apartment units that attracted mostly low-income renters. “There was no plan,” he says. “They were putting apartments in left and right.”

Crime statistics kept getting worse. In 1992, Los Angeles County saw 2,040 homicides, 803 of which were gang related. In the city, robberies numbered 39,222, nearly matching the previous year’s record high. The percentage of murders committed in public places climbed, too, reaching 82 percent in 1992. The drive-by shooting, dormant since Chicago during Prohibition, was revived. Some 6,300 drive-by shootings took place in Los Angeles between 1989 and 1993.

L.A.’s gang culture proved exportable, and movies like Colors, Boyz N the Hood, and American Me captured the imaginations of young men around the country. Bloods and Crips warred in Portland, Oregon. In Stockton, California, outside of one theater showing Colors, a newly made Blood killed a young black man wearing blue, the color of Crips. Crips showed up in Minnesota, and Sur 13 graffiti showed up in Atlanta, Louisville, and the state of Michoacan, Mexico. Much of the violence plaguing El Salvador and Honduras today is between MS-13 and 18th Street, gangs transplanted from L.A.

Very little of what the LAPD did then was driven by hard numbers. No one could precisely evaluate what problems afflicted a neighborhood. Patrol officers simply chose which crimes to target. A shortage of data also meant a shortage of accountability, either from officers or their higher-ups. Murphy says he almost never saw a Captain III, the division commander, around the station house.

Nor was Southern California law enforcement collegial. The LAPD did not ask for help—either from the community or from anyone else. The Sheriff’s Department was seen as a rival; smaller departments disliked both the LAPD and the Sheriff’s Department; and nobody liked the FBI.

Over the next decades, Bill Murphy witnessed the cataclysms that rocked the LAPD. He survived crossfire during riots that convulsed the city in 1992 after a jury acquitted four LAPD officers in the beating of motorist Rodney King. He worked in the Rampart Division for a couple of years before it was shaken by a corruption scandal that ended with a federal consent decree placing the LAPD under Washington’s supervision. But the constant through his career was the rise and entrenchment of L.A. gangs in neighborhoods that could least afford them.

“A whole generation of cops like myself saw that,” he says. “We knew firsthand what didn’t work. We had to come up with something different.”

IN 2002, BILL BRATTON, a veteran of law enforcement who had spent 1994 to 1996 as police commissioner of New York City, accepted a job offer from Los Angeles mayor Jim Hahn to take over as chief of the LAPD. If Bill Murphy and other LAPD veterans were hoping for a change, this was one. During a generally terrible decade for the LAPD, the NYPD, during and after Bratton’s tenure, had gone from success to success, cutting New York’s murder rate by more than half.

Central to Bratton’s approach in New York was a system called CompStat, short for computer statistics. CompStat involved the real-time statistical monitoring of crime reports, giving cops, and their chain of command, data to which they could be held accountable. Bratton also believed in what has been called the broken windows theory, based on an argument (still much contested) put forward in 1982 by social scientists James Q. Wilson and George L. Kelling, that broken windows or other forms of decay beget further deterioration, and that preventing serious crimes requires a focus on combating blight and petty forms of lawlessness.

AT HOME: Simon Tejada near his Glassell Park house in Los Angeles. (Photo: Sam Quinones)

When Bratton brought CompStat to the LAPD, it showed commanders where to deploy resources, and it meant the police, and especially division captains, could be evaluated according to reductions in crime in their territory. To fight chronic understaffing at the LAPD, Bratton lobbied for more hiring. Under mayors Richard Riordan and Jim Hahn, the LAPD had grown to 9,000 officers. Bratton and mayor Antonio Villaraigosa took it to 10,000.

The LAPD also began to make use of a tool that had previously been used sparingly: the gang injunction, essentially a ban on gang members hanging out together in public. The gang injunction spent much of the 1990s in court before being narrowly ruled constitutional, but law enforcement valued it. Today, Los Angeles alone has at least 44 injunctions against 72 street gangs. Gang members seen on the street together can be jailed on misdemeanor charges. Other towns and counties followed LAPD’s lead.

All this had a major effect: It drove gang members indoors. Drug dealing continued, and so did other forms of crime, including identity theft. Gang members became more adept at using the Internet to promote their gangs and belittle rivals. But boasting and threatening online doesn’t require the commitment or violence of classic L.A. street gang-banging, nor does it blight a neighborhood. “When you don’t have kids hanging out on the street,” says George Tita, the UC Irvine criminologist, “there’s no one to shoot or do the shooting.”

COMPSTAT AND BROKEN WINDOWS were technocratic tools that Bratton could import from New York without too much modification, even if they were controversial. But his approach to public relations had to be different. Vast swaths of Angelenos hated and feared their own police department.

For years, the term community policing had enjoyed popularity as a buzzword without translating into major changes on the streets of Los Angeles. But while the department had been taking cautious steps toward getting officers out of their cars and regularly patrolling beats on foot, things sped up under Bratton.

Community policing changed the job description of every LAPD officer, but perhaps none more so than that of the division commander—Captain III. Under the new philosophy, an LAPD Captain III became a community organizer, half politician and half police manager, rousing neighbors and fixing the broken windows. Captains even began to lobby the city for services—street sweeping and tree trimming—that had nothing to do with law enforcement, transforming themselves into a miniature city government for neighbors who didn’t know who to call. They started to recognize that bringing crime rates down—their ticket to promotion—could happen only through alliances with the community. So Captain IIIs began to spend much of their time among pastors, librarians, merchants, and school principals. “We can’t arrest our way out of the problem” became their startling new mantra.

Supporting the new approach to gangs was City Hall, which worked with the police through the Mayor’s Office of Gang Reduction and Youth Development, or GRYD. Whereas the city had previously disbursed gang-intervention funds evenly across all the city council districts, the new office targeted the money more narrowly. “We narrowed it down to the young people who were really at risk,” recalls Jeff Carr, who was deputy mayor and director of GRYD under Mayor Villaraigosa from 2007 to 2009. “Like the police, we were going to concentrate our resources in the neighborhoods where the problems were most acute.”

The L.A. City Attorney’s office noticed the headway being made with community policing and placed prosecutors out in communities, where they heard residents talk about what really concerned them. Previously, prosecutors had simply taken cases and argued them before courts. Now, like Captain IIIs, they were partially taking on the role of community organizers, helping neighborhoods identify threats and finding ways to combat them.

The result: a 2009 Los Angeles Times poll showed that more than two-thirds of black residents (68 percent) and three-quarters of Latino residents (76 percent) had a favorable view of the LAPD.

IN APRIL 2006, THE U.S. Drug Enforcement Administration’s public affairs office in Los Angeles issued a press release under the headline “Joint Investigation Knocks-Out Two Los Angeles Area Gangs.” Federal prosecutors had indicted dozens of members of the HLP gang in Highland Park and the East Side Wilmas in Wilmington, and the press release ended with a thick list of 25 law-enforcement agencies—federal, state, and local, from the FBI and LAPD to the Covina Police Department—whose help had been “invaluable.” The media covered the press release only perfunctorily, but hidden in its 996 words was another sea change in gang enforcement.

The Racketeer Influenced and Corrupt Organizations statute was enacted by Congress in 1970 and is best known for its use against Italian Mafia dons. But the RICO statute had also been used a couple of times in Los Angeles in the 1990s to go after the Mexican Mafia, a notorious California prison gang that had extended its influence to the streets, where it controlled the activities of Southern California Latino gang members.

In the 1990s, in meetings billed to the press as negotiations for a gang truce, members of the Mexican Mafia ordered Latino street gangs to stop drive-by shootings. They also ordered gangs to start taxing drug dealers in their neighborhoods and kicking the proceeds to Mexican Mafia members and their associates. That system created the first region-wide crime syndicate in Southern California history, turning scruffy neighborhood street gangs into tax collectors and enforcers. (As M is the 13th letter of the alphabet, many gangs took the number 13 as a symbol of their obedience to the Mexican Mafia.) It also made them vulnerable to federal conspiracy prosecution—the RICO statute in particular.

The HLP prosecution had begun as a drug trafficking case, but the federal prosecutor, Chris Brunwin, was also finding evidence of insurance fraud, immigrant smuggling, extortion, and witness intimidation—the sort of criminal activity that RICO was written to combat. So Brunwin expanded the case and charged the gang under the statute, netting 43 convictions.

The 2006 case against HLP was the first in Los Angeles to use RICO statutes on foot soldiers as well as gang leadership. Street gangs had previously been seen as small fry, but, by the mid-2000s, “the culture changed in terms of using this great tool,” says Jim Trusty, chief of the U.S. Justice Department’s Organized Crime and Gang section in Washington, D.C.

RICO cases also required interagency cooperation—federal budgets and wiretapping capabilities paired with local cops’ street-level knowledge. Federal prosecutors and district attorneys began meeting, sharing information, and putting aside old turf rivalries. Today, federal agents and local police officers routinely work together on cases. On the day of arrests, officials—local cops, sheriffs, agents from the DEA, FBI, IRS, and others—will spend several minutes of a half-hour press conference recognizing one another’s cooperation.

Prosecuting street gangs has meant abandoning the previous focus on kingpins. “‘Cut off the head and body dies’ just isn’t true” when it comes to Southern California street gangs, says Brunwin. “You have to go after everyone—anyone who had anything to do with, supported, or touched the organization. You have to have an effect on the structure, its daily operation. The only thing that works is adopting a scorched-Earth policy.”

Since 2006, there have been more than two dozen RICO indictments in Southern California, targeting Florencia 13, Hawaiian Gardens (HG-13), Azusa 13, Five-Deuce Broadway Gangster Crips, Pueblo Bishop Bloods, and many more of the region’s most entrenched and violent gangs. Most of the indictments have dozens of defendants; the Florencia case had 102, while Hawaiian Gardens, in 2009, was one of the largest street-gang indictments in U.S. history, with 147. Some of these indictments once provided news fodder for days. Now they’re so common that they no longer earn the Los Angeles Times’ front page. A recent RICO indictment against 41 members of the El Monte Flores gang, detailing alleged extortion, drug taxation, and race-hate crimes dating back more than a decade, didn’t even warrant a press conference.

RICO prosecutions arouse skepticism among some scholars. Lawrence Rosenthal, a law professor at Chapman University, notes that many RICO prosecutions have left a power vacuum and increased short-term violence. But this seems more true of cases in which only a few leaders have been taken down—as in the case of Italian Mafia prosecutions in New York or the El Rukn gang in Chicago in the 1980s—or in which the top leader is among those prosecuted, as happened in 2005 in Santa Ana with a RICO case that sent Orange County Mexican Mafia boss Peter “Sana” Ojeda to prison.

Most of the Southern California RICO prosecutions have instead swept up large numbers of street gang members. Leaders of prison gangs like the Mexican Mafia usually aren’t even charged in these prosecutions, and are referred to as “unindicted co-conspirators.”

“In prosecuting the members, you make [prison-gang leaders] powerless,” Brunwin says. “If no one’s out there on the street doing their work, then they’re just guys in cells.”

Southern California RICO cases have sent large numbers of street-gang soldiers to prisons in places like Arkansas or Indiana, where no girlfriend is coming to visit. In California prisons, inmates usually serve only half their time before getting out on parole, but federal prison sentences are long and provide for no parole.

SIGNS OF THE TIMES: Streets that were made notorious by the gangs that took their names. (Photo: Sam Quinones)

To my eye, the effects of most RICO prosecutions against Southern California gangs have been dramatic, as if a series of anthills had been not just disturbed but dug up whole. Hawaiian Gardens has seen a 50 percent drop in violent crime since the prosecutions of 2009. The neighborhoods that spawned Azusa 13 and Florencia 13 seem completely changed. I’ve seen similar post-RICO transformations across Southern California.

THE CHANGES I’VE DESCRIBED arose in part thanks to new thinking in law enforcement. But they dovetailed with other factors and coincidences that lay beyond police control.

For example, the Mexican Mafia, with its taxation schemes and, later, long lists of “greenlights”—death warrants—has alienated thousands of gang members. For years, the California Department of Corrections and Rehabilitation needed only a wing of a prison yard to house all protective custody inmates, the ones who were at high risk of violence or death from fellow inmates. Now it needs a dedicated prison and many entire yards across the state, euphemistically called sensitive needs yards. The largest single sensitive needs population comprises Latino street gang members from Southern California, many fleeing the Mexican Mafia.

Internal strife also helps explain why the streets seem safer even in places like San Bernardino, where crime rates remain relatively high (if much lower than 15 years ago). Gangs are less likely to be fighting rivals across town, creating blight. They are more likely to be hiding in the corners, devouring their own. San Bernardino’s West Side Verdugo gang—numbering more than a thousand members, divided into various cliques—seems to be expending most of its energy killing fellow WSV members over drug or gun deals gone bad. “This gang, for whatever reason, seems like it kills more of its own people than anybody else does,” says Sergeant Brian Harris, a supervisor with a San Bernardino police gang unit. “Its internal politics have become toxic to its own existence.”

Gangs out of sight remain sinister, of course, but, in retreating from the streets, they become less of an immediate danger. In the city of Bell, according to one longtime gang leader and drug dealer, several families work as distributors for drug-trafficking organizations based in Tijuana and northwest Mexico. These criminal families have legitimate front businesses, go to church, maintain prim middle-class houses, and want nothing to disturb their dope business.

“The saying is, ‘Don’t be the reason; don’t be the cause.’ Don’t do things that will call attention to yourself,” he told me. “People just understand that this is how it’s going to be. Gang-banging makes your neighborhood look like crap. People just want to make their money now.”

The market for real estate has been the second unguided force impinging on gangs. Some Southern California gang neighborhoods were once so self-contained that they resembled rural villages. Working-class people lived together, married, had children, gossiped, fought, loved, and went about life. Most men left the streets only for the military or prison. Gangs incubated in this insularity. But rising real estate prices have made properties in even the toughest neighborhoods valuable, and value has created peace. In place of insular barrios, for better or worse, neighborhoods have emerged in which people don’t know each other, and street life is nil. New arrivals—often white hipsters or immigrants from other countries—possess none of the history, or gang connections, of the departing families.

In the Highland Park district of Los Angeles, gentrification over the last decade has pushed gang families out of the area to towns where they have few connections. The La Rana neighborhood of Torrance, which spawned the Grajedas, a family of numerous gang members and three members of the Mexican Mafia, is now a business park. An extended family of Toonerville gang members once lived in several houses on two blocks of Bemis Street in the Atwater Village district. Now, those who aren’t dead or in prison have moved to Redlands, 60 miles east. In the Canta Ranas neighborhood in West Whittier, a friend showed me the lone gang house in the area, which once had a dozen ne’er-do-well families. Many of the gang families that formed the core of Azusa 13 have moved to towns east; so, too, the families at the heart of HG-13 in Hawaiian Gardens.

This has created the only-in-L.A. phenomenon of commuter gangs: guys who drive a long way to be with their homies at the corner where the gang began. (In the 204th Street neighborhood in the Harbor Gateway, I met gang members who drove in from Carson, the San Gabriel Valley, and even Palm Springs.)

Meanwhile, Latino home-buyers have been replacing black populations in Inglewood, Compton, and South Central Los Angeles. Like many other migrant groups, blacks have moved out, to the Inland Empire, 50 miles east of downtown Los Angeles, or to Las Vegas, or to the South. Compton, the birthplace of gangster rap, was once 73 percent black and is now nearly 70 percent Latino. This has often meant that Latino gangs replaced black gangs, and, while that might seem like nothing more than one violent group displacing another, the central role of the Mexican Mafia has often made these newer gangs easier to prosecute.

HOW MUCH OF WHAT’S changed in Southern California should be credited to hard-nosed enforcement (the “scorched-Earth policy” of Chris Brunwin), to softer stuff like community policing (“We can’t arrest our way out of the problem”), or to broader forces such as real estate, underworld dictates, and demographics?

PAINTING BY NUMBERS: Former police chief William Bratton, left, and mayor Jim Hahn (not pictured) painted walls in the Boyle Heights area to kick off their fight against graffiti. (Photo: Jean Marc Bouju/AP)

Jacob Riis believed gangs incubated in hardship, so he advocated for greater compassion and a “prophylactic approach,” arguing that incarceration was “like treating a symptom without getting at the root of the disease.” Many, perhaps most, scholars of gang violence seem to have inherited this outlook. In the 1960s, the sociologists Richard A. Cloward and Lloyd E. Ohlin wrote that gang membership was due to “the disparity between what lower class youth are led to want and what is actually available to them,” and recommended increased social services that became part of federal anti-poverty programs.

When the 1970s and ’80s saw a surge of gang violence, tougher theories became more popular—as did politicians who called for harder approaches. This was when scholars like James Q. Wilson gained attention for arguing that loss-gain incentives influenced choices more than social services, and that suppression must play a strong role.

The very few scholars who are aware of what has happened in L.A. over the past few years are reluctant to pinpoint any one explanation for the city’s transformation. The change in public gang culture “is head-scratching,” says George Tita, of UC Irvine, “and I haven’t seen anybody in academia really put the careful work into studying it.”

Which is why I lean toward the explanations and prescriptions of those who have dealt with gang culture daily—and who understand the hard-nosed and softer approaches to policing and how they play off of each other. “Kicking in the doors: You have to do that and remove the bad players,” says the LAPD’s Bill Murphy in a nod to scorched Earth. “But what you do afterwards in building those community relationships and trust—that is absolutely critical. Probably more so, in my opinion.”

Murphy had a chance to demonstrate this on Drew Street.

FEW PLACES SHOW MORE clearly how CompStat, community policing, RICO indictments, and shifts in the real estate market can come together to alter a terrain than those nasty two blocks in Northeast L.A. around the corner from where Simon Tejada lives.

In the spring of 2008, responsibility for Drew Street fell to Murphy, who was a newly christened Captain III. He had spent a year as Captain I in the 77th Division. Now he was going to take command of the Northeast L.A. station house, a couple blocks from the heart of the Avenues gang. It was Murphy’s chance to put what he’d learned into practice as a division commander.

When Murphy first got to Drew Street, he literally found broken windows. He also found kids in hoodies lurking between parked cars servicing drug buyers. A terrified Neighborhood Watch group met at the police station behind closed doors. Meter readers asked for police protection; pizza places didn’t deliver. Gang members shot at officers while they drove through, and kicked in the doors of newly arrived renters to tell them how things worked on the street. Kids couldn’t get any sleep for the late-night gunfire and came to school weary.

Numerous dense apartments had gone up on Drew Street during the late 1980s. They were mostly filled by immigrants from the town of Tlalchapa, Guerrero, in a particularly violent part of Mexico known as the Tierra Caliente—the Hot Land. The street matriarch of Drew Street was Maria “La Chata” Leon. An illegal immigrant from Tlalchapa, Leon was the mother of 13 kids. Several of her sons ran the street’s drug trade, along with a network of cousins and aunts and uncles, from what became known as Leon’s Satellite House—a house with trip wires, video surveillance, and a large satellite dish.

First, Murphy went to work winning over community players. He turned to the clergy, enlisting them as department liaisons. He set up a Police Athletic League, which led to the formation of a Northeast Boxing Team, and rustled up donations for a tent and boxing ring. Murphy expanded the police advisory board, which in turn provided money for public cameras around the area and money with which to repair them.

As Murphy introduced himself around, he kept private the knowledge that the department was preparing a massive RICO indictment against the Drew Street gang. This had come about because an LAPD officer who had patrolled Drew for several years had paid a visit to Chris Brunwin, the federal prosecutor. The two had worked together on the Highland Park gang case. In years past, the idea that an LAPD street officer would even know a federal prosecutor, let alone visit one, was hard to imagine. But the door was now open.

Thus, in June 2008, three months into Bill Murphy’s tenure as a Captain III, thousands of police officers poured onto Drew Street and into nearby areas. SWAT teams from as far away as the East Coast came in to help. Seventy gang members were indicted.

As the SWAT trucks moved out that afternoon, city street cleaners moved in. They covered the graffiti, removed trash, cut down sneakers hanging from telephone wires, and swept streets that hadn’t been swept in a year. They repaired a fountain in the pocket park at the end of Drew Street. Officers began walking foot beats, and kept at it for the rest of the summer.

A grim setback came on August 2, when Drew Street gang members shot to death a deputy sheriff as he prepared for work one morning, but the murder proved the gang’s last gasp. Another RICO indictment quickly followed, with 90 members of the Avenues gang sent off to federal and state prison. Landlords, facing hefty legal penalties for allowing criminal activity on their properties, began evicting gang tenants. Then came the unheard-of: Residents started tipping off officers on which gang members had committed a series of robberies.

By the end of the year, kids were playing in the street. The Northeast Division grew adept at social media, using Twitter to announce crimes that had just happened.

Peace unlocked value. A new neighbor of Simon Tejada’s paid $350,000 for a house last year. Graffiti still occasionally pops up on Drew, but is quickly painted over. The incessant crack trade is gone, as are the menacing kids in hoodies lurking behind cars. Families no longer fall asleep to the sound of gunfire, helicopters, and screeching tires. The area has attracted several Filipino families with young children and no gang affiliations. In conversations I had with them, they seemed only vaguely aware of the street’s notorious history.

The city attorney’s office took possession of the Leon family’s Satellite House in a nuisance abatement lawsuit, and the city brought in massive machinery that devoured it in what amounted to a public exorcism. A community vegetable garden went up on the lot.

DREW STREET, HOBBLED BY overbuilding, remains vulnerable. Dense apartments keep attracting new low-income renters, many of them service workers from Tlalchapa, Guerrero, whose sons, like so many dislocated young men, may be drawn to gang life. Suspicion of the police is still strong. The same tenuous peace holds in many neighborhoods across Southern California.

California’s prisons have been releasing thousands of convicts early, and many of them will no doubt cause trouble. Proposition 47, recently approved by California voters, reclassifies possession of hard drugs like cocaine and heroin as a misdemeanor, and may likewise lead to increases in crime.

Yet I can’t imagine how Southern California, where modern street-gang culture was born, would return to the wild old days. The changes on Southern California streets over the last few years are unlike anything I’ve seen in my decades of writing about gangs. For the first time, it seems possible to tame a plague that once looked uncontrollable—and in doing so allow struggling neighborhoods, and the kids who grow up in them, a fighting chance.

Monopoly Was Designed to Teach the 99% About Income Inequality

By Mary Pilon

The story you’ve heard about the creation of the famous board game is far from true

In the 1930s, at the height of the Great Depression, a down-on-his-luck family man named Charles Darrow invented a game to entertain his friends and loved ones, using an oilcloth as a playing surface. He called the game Monopoly, and when he sold it to Parker Brothers he became fantastically rich—an inspiring Horatio Alger tale of homegrown innovation if ever there was one.

Or is it? I spent five years researching the game’s history for my new book, The Monopolists: Obsession, Fury, and the Scandal Behind the World’s Favorite Board Game, and found that Monopoly’s story began decades earlier, with an all-but-forgotten woman named Lizzie Magie, an artist, writer, feminist and inventor.

Magie worked as a stenographer and typist at the Dead Letter Office in Washington, D.C., a repository for the nation’s lost mail. But she also appeared in plays, and wrote poetry and short stories. In 1893, she patented a gadget that fed different-sized papers through a typewriter and allowed more type on a single page. And in 1904, Magie received a patent for an invention she called the Landlord’s Game, a square board with nine rectangular spaces on each side, set between corners labeled “Go to Jail” and “Public Park.” Players circled the board buying up railroads, collecting money and paying rent. She made up two sets of rules, “monopolist” and “anti-monopolist,” but her stated goal was to demonstrate the evils of accruing vast sums of wealth at the expense of others. A firebrand against the railroad, steel and oil monopolists of her time, she told a reporter in 1906, “In a short time, I hope a very short time, men and women will discover that they are poor because Carnegie and Rockefeller, maybe, have more than they know what to do with.”

The Landlord’s Game was sold for a while by a New York-based publisher, but it spread freely in passed-along homemade versions: among intellectuals along the Eastern Seaboard, fraternity brothers at Williams College, Quakers living in Atlantic City, writers and radicals like Upton Sinclair.

It was a Quaker iteration that Darrow copied and sold to Parker Brothers in 1935, along with his tall tale of inspired creation, a new design by his friend F.O. Alexander, a political cartoonist, and what is surely one of U.S. history’s most-repeated spelling errors: “Marvin Gardens,” which a friend of Darrow’s had mistranscribed from “Marven Gardens,” a neighborhood in Atlantic City.

Magie, by then married to a Virginia businessman (but still apparently a committed anti-monopolist), sold her patent to Parker Brothers for $500 the same year, initially thrilled that her tool for teaching about economic inequality would finally reach the masses.

Well, she was half right.

Monopoly became a hit, selling 278,000 copies in its first year and more than 1,750,000 the next. But the game lost its connection to Magie and her critique of American greed, and instead came to mean pretty much the opposite of what she’d hoped. It has taught generations to cheer when someone goes into bankruptcy. It has become a staple of pop culture, appearing in everything from One Flew Over the Cuckoo’s Nest and “Gossip Girl” to “The Sopranos.” You can play it on your iPhone, win prizes by peeling game stickers off your McDonald’s French fries, or collect untold “Banana Bucks” in a movie tie-in version commemorating Universal’s Despicable Me 2.

As for Magie, I discovered a curious trace of her while searching through newly digitized federal records. In the 1940 census, taken eight years before she died, she listed her occupation as a “maker of games.” In the column for her income she wrote, “0.”

How Apple Pie Became ‘American’

“Pie is…the secret of our strength as a nation and the foundation of our industrial supremacy. Pie is the American synonym of prosperity. Pie is the food of the heroic. No pie-eating people can be permanently vanquished.”

The New York Times, 1902

In America, we often qualify our patriotism in strange terms.

According to Budweiser, a “true patriot” drinks beer from flag-embossed cans. WWE wrestling would have us believe that the spirit of the United States lies in roided-out biceps, scantily-clad women, and cut-off t-shirts. But above all else — ice cream, baseball, and hot dogs included — we take great pride in being “as American as apple pie.”

A brief search of The New York Times archive (which dates as far back as 1851) yields more than 1,500 results for the phrase “American as apple pie.” According to its journalists, here’s a short list of things that qualify as such: Civil disobedience, hardcore pornography, corruption, lynching people, poverty and misfortune, political espionage, dirty tricks, antifeminism, President Gerald Ford, gadflies, juicy steak, urban Jews, Social Security, bologna, reproductive choice, marijuana, infidelity, racism, clandestine activities, “human guinea pigs”, censorship, President Bush’s mom, immigrants, addictive products, self-help books, and last but not least, Richard Simmons.

But there’s just one little problem with all this: apple pie really isn’t all that “American.”

Tracing the Origins of Apple Pie

The first known written recipe for apple pie dates back to an English cookbook published in 1381 (original source here).

First things first: the apples we enjoy today are not native to American soil. As far back as 328 BCE, Alexander the Great wrote of modern-day Kazakhstan’s “dwarfed apples”, and brought them back to Macedonia for cultivation. For thousands of years prior to American colonization, apples were integrated in Asian and European cuisine.

By the late 14th century, apple tarts and pies were a common delicacy in England — albeit quite different from modern pies. Due to the scarcity and high cost of sugar (which cost two shillings per pound, or about US$50 per pound in 2014 dollars), English apple pies contained no sugar, and instead of an edible crust they were served in “coffins”, or inedible pans made from natural ingredients.

Apple pies were beloved by the English — so much so that pastoral writers and poets often alluded to them in romantic soliloquies (“thy breath is like the steame of apple-pyes,” wrote Robert Greene in 1590). In the early 1500s, Dutch bakers, who shared this passion, took the concept of the apple pie and pioneered the lattice-style crust we’re used to today; over the course of a century, the pies became ubiquitous throughout France, Italy, and Germany.

It wasn’t until the mid-1600s, through complex sea trade routes, that edible apples made their way to North America. Even then, they came in the form of trees, and required extensive pollination to bear fruit; as such, the fruit didn’t flourish until European honey bees were introduced decades later. Only one type of apple — the malus, or “crabapple” — was native to North America prior to this, and it was incredibly sour and foul-tasting.

A grossly inaccurate portrayal of Johnny Appleseed

Nonetheless, a stream of inaccurate folklore perpetuated the myth that apples were American — mainly stemming from the legend of John Chapman, or “Johnny Appleseed”. While today’s children’s books and fables depict Chapman as an apple-munching, carefree pioneer, every reliable historical source clarifies that he mainly cultivated crabapples for use in hard cider. “Really, what Johnny Appleseed was doing was bringing the gift of alcohol to the frontier,” writes Michael Pollan in The Botany of Desire. “He was our American Dionysus.”

So the question lingers: how in the world did apple pie — a dessert containing a fruit far removed from North America’s landscape — become the reigning symbol of our patriotism?

How Apple Pie Became American

While apple pie was being consumed in Europe in the 14th century, the first instance of its consumption in America wasn’t recorded until 1697, when it was brought over by Swedish, Dutch, and British immigrants. In his book America in So Many Words: Words that have Shaped America, linguist Allen Metcalf elaborates:

“Samuel Sewall, distinguished alumnus of Harvard College and citizen of Boston, went on a picnic expedition to Hog Island on October 1, 1697. There he dined on apple pie. He wrote in his diary, ‘Had first Butter, Honey, Curds and cream. For Dinner, very good Rost Lamb, Turkey, Fowls, Applepy.’ This is the first, but hardly the last, American mention of a dish whose patriotic symbolism is [abundantly] expressed.” 

Throughout the 1700s, Pennsylvania Dutch women pioneered methods of preserving apples — through the peeling, coring, and drying of the fruit — and made it possible to prepare apple pie at any time of year. In the vein of many things American, settlers then proceeded to declare the apple pie “uniquely American”, often failing to acknowledge its roots. For instance, in America’s first-known cookbook, American Cookery, published in 1796, multiple recipes for apple pies were included with no indication of their cultural origins.

Several centuries later, the apple pie had become inextricably linked with American lore.

It was a process that began at the turn of the 20th century. Shortly after an English writer suggested in 1902 that apple pie be eaten only “twice per week,” a New York Times editor retorted with what must be one of the most passionate defenses of pie ever written:

“[Eating pie twice per week] is utterly insufficient, as anyone who knows the secret of our strength as a nation and the foundation of our industrial supremacy must admit. Pie is the American synonym of prosperity, and its varying contents the calendar of changing seasons. Pie is the food of the heroic. No pie-eating people can be permanently vanquished.”

With that, the seeds of apple pie patriotism were planted.

A shop proudly advertising apple pies during World War II (c.1940s)

The precise origins of “as American as apple pie” are difficult to pinpoint, but the phrase was used as early as 1928 to describe the home-making abilities of Lou Henry Hoover (President Herbert Hoover’s wife). The next result we could dig up, a Times article in which the phrase is enlisted to describe lynchings, comes nearly a decade later. It is fair to assert that though the phrase was floating around in the early 20th century, it was seldom used.

It wasn’t until the 1940s, when the United States entered World War II, that “as American as apple pie” truly took off. When journalists at the time asked soldiers why they were willing to fight in the war, the typical response was “for mom and apple pie.” Even back then, as the phrase emerged, cultural observers were skeptical of it. “The old saying ‘As American as apple pie’ is a bit misleading,” wrote a columnist in 1946, “for the chief ingredient of our national dessert is American only by adoption.”

Regardless, news archive search results indicate a tremendous upswing in the use of the saying in the 1960s, and apple pie continued on to establish itself as the reigning symbol of American patriotism.

Can We Call Apple Pie American?

Despite the efforts of Johnny Appleseed, the United States produces only about 6% of the world’s apples today. China, on the other hand, cultivates more than 35 million tons, or about 50%, of the world’s apples. But when it comes to integrating these fruits into pies, it is difficult to argue against America’s pride for the pastry.

According to the American Pie Council, Americans consume $700 million worth of retail pies each year — and that doesn’t include those that are home-baked, or sold by restaurants and independent bakers. Of those who responded to surveys, 19% — some 36 million Americans — cited apple as their favorite flavor. That’s a lot of apple pie.

Though we’ve made the case here that apple pie isn’t so American after all, one could argue that just because something originated somewhere else doesn’t mean that it shouldn’t become a source of national pride elsewhere. America took the apple pie to heights it had never seen before — elevating it into a treasured part of its lore and history. And though it wouldn’t be fair to call apple pie “American” without acknowledging its past, the baked good seems to be just as at home here as anywhere else in the world.

God is on the ropes: The brilliant new science that has creationists and the Christian right terrified

by Paul Rosenberg

A young MIT professor is finishing Darwin’s task — and threatening to undo everything the wacky right holds dear

The Christian right’s obsessive hatred of Darwin is a wonder to behold, but it could someday be rivaled by the hatred of someone you’ve probably never even heard of. Darwin earned their hatred because he explained the evolution of life in a way that doesn’t require the hand of God. Darwin didn’t exclude God, of course, though many creationists seem incapable of grasping this point. But he didn’t require God, either, and that was enough to drive some people mad.

Darwin also didn’t have anything to say about how life got started in the first place — which still leaves a mighty big role for God to play, for those who are so inclined. But that could be about to change, and things could get a whole lot worse for creationists because of Jeremy England, a young MIT professor who’s proposed a theory, based in thermodynamics, showing that the emergence of life was not accidental, but necessary. “[U]nder certain conditions, matter inexorably acquires the key physical attribute associated with life,” he was quoted as saying in an article in Quanta magazine early in 2014 that has since been republished by Scientific American and, more recently, by Business Insider. In essence, he’s saying, life itself evolved out of simpler non-living systems.

The notion of an evolutionary process broader than life itself is not entirely new. Indeed, there’s evidence, recounted by Eric Havelock in “The Liberal Temper in Greek Politics,” that it was held by the pre-Socratic natural philosophers, who also first gave us the concept of the atom, among many other things. But unlike them or other earlier precursors, England has a specific, unifying, testable evolutionary mechanism in mind.

Quanta fleshed things out a bit more like this:

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.


It doesn’t mean we should expect life everywhere in the universe — lack of a decent atmosphere or being too far from the sun still makes most of our solar system inhospitable for life with or without England’s perspective. But it does mean that “under certain conditions” where life is possible — as it is here on Earth, obviously — it is also quite probable, if not, ultimately, inevitable. Indeed, life on Earth could well have developed multiple times independently of each other, or all at once, or both. The first truly living organism could have had hundreds, perhaps thousands of siblings, all born not from a single physical parent, but from a physical system, literally pregnant with the possibility of producing life. And similar multiple births of life could have happened repeatedly at different points in time.

That also means that Earth-like planets circling other suns would have a much higher likelihood of carrying life as well. We’re fortunate to have substantial oceans as well as an atmosphere — the heat baths referred to above — but England’s theory suggests we could get life with just one of them — and even with much smaller versions, given enough time. Giordano Bruno, who was burnt at the stake for heresy in 1600, was perhaps the first to take Copernicanism to its logical extension, speculating that stars were other suns, circled by other worlds, populated by beings like ourselves. His extreme minority view in his own time now looks better than ever, thanks to England.

If England’s theory works out, it will obviously be an epochal scientific advance. But on a lighter note, it will also be a fitting rebuke to pseudo-scientific creationists, who have long mistakenly claimed that thermodynamics disproves evolution (here, for example), the exact opposite of what England’s work is designed to show — that thermodynamics drives evolution, starting even before life itself first appears, with a physics-based logic that applies equally to living and non-living matter.

Most important in this regard is the Second Law of Thermodynamics, which states that in any closed system, the total entropy (roughly speaking, a measure of disorder) tends to increase. The increase in disorder is the opposite of increasing order due to evolution, the creationists reason, ergo — a contradiction! Overlooking the crucial word “closed,” of course. There are various equivalent ways of stating the law, one of which is that heat cannot pass from a cooler to a warmer body without extra work being done. Ginsberg’s theorem (as in poet Allen Ginsberg) puts it like this: “You can’t win. You can’t break even. You can’t even get out of the game.” Although creationists have long mistakenly believed that evolution is a violation of the Second Law, actual scientists have not. For example, physicist Stephen G. Brush, writing for the American Physical Society in 2000, in “Creationism Versus Physical Science,” noted: “As Ludwig Boltzmann noted more than a century ago, thermodynamics correctly interpreted does not just allow Darwinian evolution, it favors it.”

A simple explanation of this comes from a document in the thermodynamics FAQ subsection of TalkOrigins Archive (the first and foremost online repository of reliable information on the creation/evolution controversy), which in part explains:

Creationists thus misinterpret the 2nd law to say that things invariably progress from order to disorder.

However, they neglect the fact that life is not a closed system. The sun provides more than enough energy to drive things. If a mature tomato plant can have more usable energy than the seed it grew from, why should anyone expect that the next generation of tomatoes can’t have more usable energy still?

That passage goes right to the heart of the matter. Evolution is no more a violation of the Second Law than life itself is.

The driving flow of energy — whether from the sun or some other source — can give rise to what are known as dissipative structures, which are self-organized by the process of dissipating the energy that flows through them. Russian-born Belgian physical chemist Ilya Prigogine won the 1977 Nobel Prize in Chemistry for his work developing the concept. All living things are dissipative structures, as are many non-living things — cyclones, hurricanes and tornadoes, for example. Without explicitly using the term “dissipative structures,” the passage above went on to invoke them thus:

Snowflakes, sand dunes, tornadoes, stalactites, graded river beds, and lightning are just a few examples of order coming from disorder in nature; none require an intelligent program to achieve that order. In any nontrivial system with lots of energy flowing through it, you are almost certain to find order arising somewhere in the system. If order from disorder is supposed to violate the 2nd law of thermodynamics, why is it ubiquitous in nature?

In a very real sense, Prigogine’s work laid the foundations for what England is doing today, which is why it might be an overstatement to credit England with originating this theory, as several commentators at Quanta pointed out, noting other progenitors as well. But England already appears to have assembled a collection of analytical tools, along with a sophisticated multidisciplinary theoretical approach, that promises not simply to propound a theory but to generate a whole new research agenda giving detailed meaning to that theoretical conjecture. And that research agenda is already starting to produce results. (See his research group home page for more.) It’s the development of this sort of detailed body of specific, mutually interrelated results that will distinguish England’s articulation of his theory from earlier formulations that have not yet been translated into successful theory-testing research agendas.

Above all, as described on that home page, England is knitting together an understanding of life and of various stages of life-like processes by combining the perspectives of biology and physics:

Living things are good at collecting information about their surroundings, and at putting that information to use through the ways they interact with their environment so as to survive and replicate themselves. Thus, talking about biology inevitably leads to talking about decision, purpose, and function.

At the same time, living things are also made of atoms that, in and of themselves, have no particular function. Rather, molecules and the atoms from which they are built exhibit well-defined physical properties having to do with how they bounce off of, stick to, and combine with each other across space and over time.

Making sense of life at the molecular level is all about building a bridge between these two different ways of looking at the world.

If that sounds intriguing, you might enjoy this hour-long presentation of his work (with splashes of local Swedish color) — especially (but not only) if you’re a science nerd.

Whether or not England’s theory proves out in the end, he’s already doing quite a lot to build that bridge between worldviews and inspire others to make similar efforts. Science is not just about making new discoveries, but about seeing the world in new ways — which then makes new discoveries almost inevitable. In that, England has succeeded. As the Quanta article explained:

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

Creationists often cast themselves as humble servants of God, and paint scientists as arrogant, know-it-all rebels against him. But, unsurprisingly, they’ve got it all backwards, once again. England’s work reminds us that it’s scientists’ willingness to admit our own ignorance and confront it head on — rather than papering over it — that unlocks the great storehouse of wonders we live in and gives us our most challenging, satisfying quests.

The 7 Most Startling Psychological Experiments of All Time

The 7 Most Startling Psychological Experiments of All Time
by Johanne Schwensen

Human behavior is often mysterious, but thanks to decades of experimental psychology, we know much more about it than we used to. These 7 experiments shine a light on the darkest crannies of the mind to help you better deal with people both in and out of work.


Back in my university days, experimental psychology was my favourite course. Why? Because I got to learn about the most unbelievable psychological experiments ever conducted: the experiments that made history by changing the way we understand emotions, attention, memory, and the concept of evil.

Some of these experiments are memorable due to their brilliant setups, others for their complete lack of ethics. Whatever the reason for their notoriety, the tie that binds them is that they can teach us something basic and vital about human beings – ourselves and those around us, from coworkers to bosses to romantic partners.

Why we do what we do both in our personal lives and the professional sphere can be thoroughly explained by science – and here are 7 of the most startling (and revealing) scientific experiments psychology’s ever given us.


1. “Love on a Suspension Bridge”: are you sure that’s what you feel?

In their famous 1974 study, “Love on a Suspension Bridge”, Dutton & Aron set out to find how easily we confuse our emotions. They had an attractive female interviewer stop male passersby in the middle of either a dangerous-looking suspension bridge or a stable bridge and ask them a series of questions. The interviewer asked the men to write a short story about a picture of a woman. She also gave them her phone number in case they “wanted to talk further about the purpose of the study.”

Capilano Suspension Bridge, where Dutton & Aron conducted the experiment.

50 percent of the “suspension-bridge group” called to “talk about the study,” compared to only 12.5 percent of the other group. Interestingly, the suspension-bridge group’s stories also contained more sexual innuendo. Researchers concluded that participants confused their feelings of anxiety about being on a dangerous bridge with romantic feelings for the researcher. In other words, this experiment showed how easily we confuse our emotions and misattribute our arousal.

The takeaway here is that what you think is bothering you might not be the actual root of the problem. Before you jump to any conclusions – marriage proposals or business ones – take a critical look at what outside factors might be influencing that decision. Your frustration with your colleague might have more to do with the fact that you skipped lunch and are feeling ravenous.

Learn more in The Art Of Choosing.

2. The Invisible Gorilla Experiment: focusing might be holding you back


In 1999 psychologists Daniel Simons and Christopher Chabris asked subjects to watch a short video of six people – three in white shirts and three in black – pass around a basketball. While they watched, they had to keep count of the number of passes made only by the people in white shirts.

Now, here’s where it gets weird: about halfway into the minute-long video, a person in a gorilla costume strolls across the room, stops to pound his chest, and then exits. And what’s even more curious: half of the subjects didn’t even notice it, and answered “nothing” when asked whether they had seen anything out of the ordinary.

This experiment goes to show that our attention, no matter how observant we consider ourselves, is selective and narrow. We miss a lot of what goes on around us when we’re engaged in a specific task. It also underscores just how beneficial it can be to take a break. If you’ve been trying to crack a seemingly unsolvable problem or inject a much-needed dose of inspiration into a presentation, get up from your project, take a walk and get your mind off what you’ve been working on. Doing this breaks the single-minded focus that keeps you from seeing the obvious solutions you’ve missed or the anomalies that might be holding you back.

Read more about this experiment in The Art of Thinking Clearly.


3. Memories of a colonoscopy: the grand finale bias

In addition to believing you’re observant, you probably also assume you’re a rational being who sees and remembers the world as it really is. But sadly, a wealth of studies show that both our thinking and our memory are brimming with bugs and biases. One of the more discomfiting experiments in this vein was conducted by Nobel Prize-winning psychologist Daniel Kahneman.

Back when colonoscopies were a very painful procedure, Kahneman found that people’s memories of how unpleasant the procedure was had little to do with how long it lasted. By asking patients “How much does it hurt now?” on a scale from 0 to 10 every 60 seconds throughout, and then later asking them what they remembered of the colonoscopy, he found that the patients’ memories actually depended on a) the most intense point and b) the experience’s grand finale.

The patients’ remembered pain was an average of the worst moment of the colonoscopy and how badly it hurt when the procedure ended, meaning that those in less pain toward the end rated the experience as less painful even than those who had a quicker procedure. Kahneman concluded that memory is dominated by duration neglect, wherein we ignore the total duration of an event in favor of particular moments from it, and the peak-end rule, whereby we overemphasize what occurs at the peak and at the end of an event.
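To make the arithmetic concrete, here is a toy sketch of the peak-end calculation (my own illustration with invented numbers, not Kahneman's data): remembered pain is modeled as the average of the worst moment and the final moment, while total pain simply sums every minute-by-minute reading.

```python
# Toy illustration of the peak-end rule (invented numbers, not Kahneman's data).
# Each list holds a pain rating (0-10) reported every 60 seconds during a procedure.

def remembered_pain(readings):
    """Peak-end rule: memory ~ average of the worst moment and the final moment."""
    return (max(readings) + readings[-1]) / 2

def experienced_pain(readings):
    """A duration-sensitive measure: total pain summed over the whole procedure."""
    return sum(readings)

short_but_harsh_ending = [4, 6, 8, 7]           # shorter procedure, ends near its worst
long_but_gentle_ending = [4, 6, 8, 7, 5, 3, 1]  # longer procedure, tapers off gently

for name, readings in [("short, harsh ending", short_but_harsh_ending),
                       ("long, gentle ending", long_but_gentle_ending)]:
    print(f"{name}: total pain {experienced_pain(readings)}, "
          f"remembered pain {remembered_pain(readings):.1f}")

# The longer procedure involves more total pain (34 vs 25) yet is remembered
# as less painful (4.5 vs 7.5), just as the colonoscopy patients reported.
```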

You might be able to see this at work in your own life when you consider work relationships. Imagine that the last time you worked on a branding presentation it was with Sarah the designer, Paul the project manager, and Talia, a senior strategist. If you had one mid-term fight and the end of the project went poorly, you’re more likely to rate the whole experience as nightmarish, even if the parts in between went great. It’s worth taking your own duration neglect and the peak-end rule into consideration the next time you’re dreading a new assignment. Do so and you’ll get a much more realistic bead on how a similar future encounter is likely to go.

Read more about this experiment in Thinking, Fast and Slow.


4. The Milgram Experiment: you’re more lemming than lion

You’re strong, you’re independent, and you’d never cave under social pressure. Right? Well…

In 1961 Yale University psychologist Stanley Milgram started a series of social psychology experiments that showed how easy it is for most of us to become an instrument of authority. The so-called Milgram experiment involved having a subject in one room administer an electric shock to a person (actually an actor) in a separate room whenever that person answered certain, increasingly difficult questions incorrectly. With each incorrect answer, the voltage of the “shock” also increased. At some point, most subjects asked to stop the procedure because the actor’s screams became increasingly distressing, but when the “scientist” urged them to continue, 65% of them complied, administering what they thought were deadly shocks.

Thus, it seems that although we value our individuality, authority figures have a lot of influence over us, and we’ll conform readily when given orders. Understanding this experiment helps us be more realistic about the sovereignty of our will. It can also make us more vigilant about how we respond to requests our superiors make of us. The next time your boss asks you to do something you don’t entirely agree with, think twice before simply going along with it. Speaking up might not save another human from real or imaginary torture, but your awareness of the very human tendency to bow to authority could help you make smarter choices.

Read more about this experiment in Influence and You Are Not So Smart.


5. The Marshmallow Experiment: resist temptation now, reap rewards later

In 1971, Mischel and Ebbesen conducted the Marshmallow Test in order to study the effect of deferred gratification. The test centered on children and their ability to wait for something they wanted: treats!

In this brilliantly simple test, children chose their favorite treat. Then, a researcher told the children they could either have the treat now, or have two after the researcher had left and come back. The child was then left alone with the treat on a tray in front of them.

Researchers secretly watched the children to see how they would handle the situation. Naturally, some children ate the treat right away. Others waited a bit, then ate it. Some children, however, managed to resist the treat, and this, it turns out, was a wise move: the willpower that the more patient children exercised correlated strongly with success in their adult lives. The strong-willed children scored better on everything from the ability to focus and SAT scores to planning ahead and maintaining personal relationships.

What do these findings mean for grown-ups? Simply that exercising your willpower and self-control in one small area has a spillover effect on other areas of your life. Resolve not to check your email ‘til lunch and you’re working out your willpower for later, when you’ll need to plug away at learning to code. In a nutshell: willpower is a muscle that you can exercise – even if you didn’t have much as a kid – and doing small exercises in will-building throughout the day can help you strengthen it.

Read more about this experiment in The Marshmallow Test.


6. The Stanford prison experiment: you are what you act like

Smile and you’ll be happy. Fake it ‘til you make it. Research suggests that how you behave does indeed shape your emotions and personality, meaning that role-playing certain characteristics can actually help you develop them. Psychologist Philip Zimbardo’s infamous “Stanford prison experiment” from 1971 illustrated this concept rather dramatically.

Zimbardo had volunteers role-play a prison situation in which half were prisoners and half were guards. Those tasked with playing guards became abusive and violent almost immediately. Although the volunteers at first were merely pretending to be abusive guards, their feelings and behaviors quickly adapted to the new situation. The result? Zimbardo had to stop the experiment after just six days, even though it was planned to run for two weeks.

The experiment gained so much attention that the BBC staged a partial re-enactment and broadcast the events in a documentary series called The Experiment. This, too, demonstrated how easily people internalize the behavior of the roles they play, whether submissive or aggressive, showing just how malleable we can be.

Though Zimbardo’s experiment showed how distressingly prone people can be to acting cruelly and violently, there are far more positive ways to construe these findings. For example, if you want to be more confident, simply acting confident can help make it so. Equally, if you’re in a leadership position and want to groom one of your shier employees to take your place when you make a lateral move to a different department, giving him more leadership responsibilities can start him on the right path.

Read more about this experiment in The As If Principle.


7. The Jelly Experiment: go easy on the options

In today’s world we face limitless choices about both products and lifestyles. And there is a good reason why it’s so difficult to manage this cornucopia of options: a large enough selection leads to an inability to come to a decision, as the jelly study demonstrated:

Researchers set up a stand with jelly samples for people to try and then buy at a discount. The experiment was conducted over two days, with 24 varieties of jelly on offer the first day and only six on the second. The stand sold ten times more jelly on day two, indicating that too much choice had undermined customers’ ability to make a decision, so many of them opted out of buying at all.

In the wild, this has plenty of important implications for you. Consider writing an ad for a new product. Of course you’ll be thinking about USPs and comparing your product to your competitors’. Even if what you can provide is leagues beyond the competition, consider limiting yourself to the two or three absolute strongest selling points for your own product. It will help focus your offering and, in turn, focus your customers.

Read more about this experiment in The Paradox of Choice.

Check out all of the books mentioned here in full, or read their 12-minute summaries on Blinkist to figure out where to start.

The History Behind ‘American Horror Story: Freak Show’

The History Behind ‘American Horror Story: Freak Show’

Like Ryan Murphy and Brad Falchuk’s previous American Horror Story incarnations, Freak Show (2014) is keenly aware of its precursors. Wednesday’s premiere made numerous allusions to the predecessors that inform both the show and what makes freak shows and carnivals so terrifying, calling on established tropes while subverting others. The season premiere of Freak Show is full of meta-references to both the history of sideshows and the media’s representation of freaks, small homages and nods that could deepen your interest in or appreciation of season four.

Still from Tod Browning's Freaks (1932).

Freak Show is set in Jupiter, Florida, 1952, a time in American history when freak shows had fallen out of favor with the general public and existed mainly on the fringes, in isolated, marginalized communities like Coney Island. The freak show’s golden era lasted from roughly 1870 to 1920; dime museums, circuses, fairs and carnivals each featured their own collection of oddities and were the primary source of popular entertainment in the United States, particularly amongst rural populations (Bogdan 1990). P.T. Barnum was amongst the first showmen to capitalize on the interest in unusual, aberrant or deformed bodies, collecting an assortment of freak performers as a part of his traveling circus. These sideshows and carnivals were actually precursors to the modern museum.


During the late 1800s and early 1900s, freak shows were seen as culturally edifying, and it was not uncommon for visitors to collect freak photography (Bogdan 1990; Fordham 2007). Many circuses and sideshows employed “freak finders,” people who would scour the country for individuals who were either born different or could be constructed into a freak. Clyde Ingalls, manager of the Ringling Brothers, Barnum and Bailey Sideshow in the 1930s and one of the progenitors of the freak show, once said, “Aside from such unusual attractions as the famous three-legged man, and the Siamese twin combinations, freaks are what you make them. Take any peculiar looking person, whose familiarity to those around him makes for acceptance, play up that peculiarity and add a good spiel and you have a great attraction” (Bogdan 1990:95), illustrating the notion that freaks were “made” rather than born, a performance and aesthetic fabrication as well as a social status. While the term freak is inherently problematic and morally complicated, I am using it to refer to the categorization of an individual who deviated from established cultural “norms”. Freaks could be different due to physical traits as well as personal characteristics.

As the 20th century progressed, however, the rhetoric and popularity underlying the freak show as a medium of entertainment began to crumble. Biomedicine assumed an increasingly dominant role in cultural conceptions of the body, and deformity came to be understood as the purview of science and medicine, rather than something that should be paraded around for people’s entertainment. Rachel Adams writes, “As physical disability became the province of medical pathology, bodies once described as wonders of nature were reconceived in terms of disease […] Freak shows were sleazy arenas of exploitation and bad taste, relegated to small towns and bad neighborhoods where they would be patronized by audiences only slightly less marginal than the carnies themselves” (2001:57).

The Mütter Museum, established in Philadelphia, PA in 1858, contains the skeleton of conjoined twins Chang and Eng, as well as other medical anomalies, collected "to help the public understand the mysteries and beauty of the human body while appreciating the history of diagnosis and treatment of disease".

Brigham A. Fordham adds, “Early in the twentieth century, a number of states and municipalities began to view freak shows as a threat to the morals of society and passed laws prohibiting or regulating freak shows. Fascination with the unusual body became more tainted with pity and disgust, causing the freak show to lose social status and popularity in the American psyche. By the 1940s, the heyday of the freak show had passed” (2007:3). The time period in which Freak Show is set is therefore telling—freak shows had come to be seen as morally bankrupt, eliciting fear and shame in spectators rather than drawing the huge audiences they used to drum up at the beginning of the century.

Situating the show in Florida is also significant, as Gibsonton, Florida is a well-established safe haven for freaks and carnies. Chris Balogh writes, “Gibsonton has long been a winter home to all the freak-show acts and show people. It was chosen for its proximity to the headquarters of Ringling Bros. in Tampa” (2013). Over the years, Gibsonton has been home to famous freaks such as Al “the Giant” Tomiani, Jeanie “the Half-Woman” and Grady Stiles “The Lobster Boy.” Elsa Mars’s desperation to increase the popularity of her freak show and the general disgust with which many of the characters on the show are met can therefore be understood within the historical milieu in which the show is set.


Many of the characters featured on Freak Show are recognizable “types” within the freak show circuit. Elsa’s “Cabinet of Curiosities” is doubtless a reference to the 1920 silent film The Cabinet of Dr. Caligari, set within a carnival in a remote German village, where a somnambulist is sent on a murderous quest. Cabinets of curiosity, or wunderkammer, contained exotic objects and strange artifacts from around the world, displayed at traveling circuses and sideshows. These wunderkammer can be seen as precursors to modern-day museums.

Daisy and Violet Hilton, circa the 1920s.

The freaks Elsa includes in her show have historical as well as cinematic and literary precedents. Many of the characters included in Freak Show are reimaginings of the famous carnies included in Tod Browning’s cult classic Freaks (1932). Conjoined twins were common attractions at freak shows, and Sarah Paulson’s Dot and Bette Tattler could have been based on Daisy and Violet Hilton, conjoined twins made famous on the vaudeville circuit throughout the 1930s. Talented musicians, the two women were seen as charming entertainers and were among the cast of real-life freaks included in Freaks (1932). Bearded women, like Kathy Bates’s character Ethel Darling, were also fairly common in freak shows. Jane Barnell, otherwise known as Lady Olga, was a bearded woman who toured with the Ringling circus for many years and starred in Tod Browning’s Freaks. Ethel Darling’s son Jimmy Darling, otherwise known as “Lobster Boy,” could also be based on Grady Stiles Jr., who also went by Lobster Boy due to his ectrodactyly. Stiles’s family had a history of ectrodactyly, and Stiles Jr. performed in sideshows for many years before moving to Gibsonton.

Grady Stiles Jr.

Mat Fraser, who plays “Paul the Illustrated Seal,” represents both the natural and fabricated forms of enfreakment within the sideshow circuit. Fraser possesses phocomelia in both arms, which led to his stage name “Seal Boy.” In behind-the-scenes interviews for the show, Fraser reveals that his condition was caused by his mother’s use of thalidomide during her pregnancy (Duca 2014). Thalidomide was a drug used to treat morning sickness in pregnant women in the 1950s and 1960s, and ultimately led to severe physical disabilities and sterility in countless children (Winerip 2013). Similarly, in Katherine Dunn’s cult classic Geek Love (1989), Al and Lil Binewski experiment with eugenic testing and materials known to induce deformity during pregnancy to knowingly reproduce “freak” children, the oldest of whom is Arturo, a boy with flippers for hands and feet. Fraser’s tattoos would also have been considered freakish during the golden age of freak shows. Tattooed men and women were members of the self-made freak collective, especially considering that, “naturalists and early anthropologists saw the practice of tattooing as the ultimate sign of primitiveness, revealing a lack of sensitivity to pain and unabashed paganism” (Bogdan 1990:241).

Koo Koo the Bird Girl.

A reincarnation of Koo Koo the Bird Girl, otherwise known as Minnie Woolsey, can be seen amongst Freak Show’s cohort, another reference to Tod Browning’s Freaks. Naomi Grossman reprises her Asylum (2012-2013) role as Pepper, a so-called pinhead with microcephaly, for Freak Show. Individuals with microcephaly were often included in freak shows as exotified “missing links.” Tom and Hettie, siblings born in Ohio with microcephaly, were billed as Hoomio and Iola, “The Wild Children of Australia,” in P.T. Barnum’s circus (Bogdan 1990). Schlitzie, a sideshow performer in several circuses and an actor in Freaks, was perhaps one of the most famous “pinheads” and doubtless an influence on the character of Pepper.

The most terrifying character in the premiere of Freak Show was John Carroll Lynch’s Twisty the Clown. As Murphy and Falchuk are well aware, clowns are prominent nightmarish figures in the American cultural imagination, from Stephen King’s It (1986) to John Wayne Gacy, the serial killer who performed as Pogo the Clown. Prior to his role on Freak Show, Lynch also played the villain on another freak show series, HBO’s Carnivale, which ran from 2003 to 2005. A cult classic in its own right, Carnivale depicts the fantastical world of freak shows during the Dust Bowl Depression era. There are numerous other examples of carnivals used as terrifying, nightmarish spaces that seem to embody the phantasmagoric horror of Poe, such as Ray Bradbury’s Something Wicked This Way Comes (1962), in which the circus space transcends the boundaries between human and monster, reality and fantasy. The carnivalesque “grotesque” can be understood as a liminal space that pushes the borders of normalcy to explore cultural conceptions of deviance. As a cultural space that exists on the margins of human society, the carnival is often seen as a terrifying interstice where established codes and truths become mutable, where magic and terror can become intertwined, presenting alternatives to social order and normalized versions of identity. Carnivals are seen as spaces where dreams and reality intermingle, subverting established ideologies and perceptions of the world, where the freakish body can ultimately be seen as transformative and destabilizing (Chemers 2003). Mikhail Bakhtin’s theories of the carnivalesque similarly portray the carnival as a liberatory, chaotic space, one that opens up new possibilities and beginnings, with destructive as well as regenerative qualities (1968).


As Freak Show continues to explore the lives of Elsa’s Cabinet of Curiosities, the identity of the various characters, and their place within 1950s American culture, we can think about the cultural and historical influences that inform the show. Although the history of freak shows is full of stigma and exploitation, many performers have also appropriated the label of freak and used freak shows as a platform for empowerment. The creators of American Horror Story want to reclaim certain horror tropes and clichés while reinventing the genre. They ask us to consider what we find most terrifying, question cultural perceptions that have become normalized, and interrogate what it means to be monstrous.

America Is Built on Torture, Remember?

America Is Built on Torture, Remember?

The people arguing that torture contradicts our country’s historical virtues are dead wrong.

The release of the Senate Intelligence Committee report has sparked a great deal of outrage—and justifiably so. The details are grim and sickening: The report says that the CIA tortured innocent people, threatened to murder and rape the mothers of detainees, and used rectal feeding, essentially anal rape, as a punishment. The report paints a picture of heedless brutality, cruelty, and sadism.

Given the details from Abu Ghraib, and the long-known, supposedly sanctioned techniques like waterboarding, these revelations aren’t exactly surprising. But they still have the power to shock. Andrew Sullivan, who has been a bitter and committed critic of American torture, summed up the reaction of many when he suggested that readers “reflect on a president [George W. Bush] who cannot admit to being the first in that office to authorize such an assault on core American values and decency.” To numerous critics on the left and some on the right as well, the torture seems like a violation of the basic American commitment to freedom, justice, and human rights. It is a betrayal of our ideals as a country.

But is it really? American history, after all, is not an unbroken tale of values and decency. In fact, according to Edward Baptist’s The Half Has Never Been Told: Slavery and the Making of American Capitalism, American decency has always been more a theory than a practice and America’s most important value—the value that turned this country from a marginal economic unknown to a world-straddling imperial power—was torture.

HISTORICALLY, SLAVERY HAS OFTEN been presented as an aberration. Perfected capitalism, the argument goes, came out of Northern industrialization. The South was a stunted or (in neo-Confederate accounts) quaint backwater. Slavery, in this accounting, was a failed institution that would have died out of its own accord even without the Civil War; it was a mistake rather than a central part of the American narrative.

Baptist convincingly rejects this. Slavery was not quaint, nor stunted, nor backwards looking. It was the engine of American success. Cotton was the most important global crop in the 1700s and 1800s, and, Baptist writes, “The return from the cotton monopoly powered the modernization of the rest of the American economy.” American industrialization came about not despite slavery, or in the North next door to slavery, but because slavery fueled it.

Again, historical accounts of the cotton boom, and of industrialization, have tended to focus on technological advances—especially the cotton gin. But the gin, as Baptist argues, just made it possible to remove cotton seeds faster. This was important in freeing up a production bottleneck, but it doesn’t explain how America managed to produce more and more cotton to run through those gins.

What enabled the growth in cotton production was an innovation in labor management. That innovation was, in a word, torture. In the Southeast, during early slavery in the Americas, labor tended to be organized on a job or quota basis; individual men and women had to finish a certain amount of work, and then they might have some time to themselves—thus incentivizing speedy completion of tasks.

Planters in the expanding Southwest, though, came up with a more efficient, more brutal system. People were given quotas they had to meet—and then if they met those quotas, they were given higher quotas. There was no positive incentive. Instead, there was the whip for failure, or, if not the whip, then “carpenters’ tools, chains, cotton presses, hackles, handsaws, hoe handles, irons for branding livestock, nails, pokers, smoothing irons, singletrees, steelyards, tongs.” Virtually “every product sold in New Orleans stores converted into an instrument of torture,” Baptist writes. Antebellum whites, with a quintessentially American genius for sadism, pioneered just about every method of what we now consider “modern” torture: “sexual humiliation, mutilation, electric shocks, solitary confinement in ‘stress positions’, burning, even waterboarding.”

To avoid this dreadful orgy of violence and cruelty, men and women had to work harder, think quicker, turn all their creativity to the task of picking faster, faster, and faster again. Baptist found that the most efficient pickers were beaten most because, he says, they were the most innovative, the ones most likely to figure out even more ways to get cotton into their sacks. Through the use of systematic terror, whites forced black people to become the most efficient producers of cotton in the world: The United States jumped from producing 180 million pounds of cotton in 1821, to 354 million pounds in 1831, to 644 million pounds in 1841, to 1,390 million pounds in 1860. “By 1820,” Baptist writes, “the ability of enslaved people in southwestern frontier fields to produce more cotton of a higher quality for less drove most other producing regions out of the world market.”

Torture, Baptist shows, was a success—and was terrible in no small part because it was a success.

OUR CURRENT DEBATE ON torture tends to assume that if the CIA’s methods of waterboarding, rape, humiliation, and murder resulted in useful intelligence, then those methods are somehow justified. In fact, though, as Baptist shows, torture is not justified by success, but institutionalized by it. When torture achieves its aims, the result is more, and more brutal, torture. Slavery became more and more vicious as planters realized that the whip could extort more and more gains from its victims. Even death could be turned to profit; those who were murdered were often presented as object lessons to terrorize those who remained. The success of torture turned America into a charnel house of pillage, rape, and atrocity for decade after brutal decade.

Sullivan’s claim that Bush’s use of torture violated American tradition or values is unsustainable. America’s success was created out of torture; torture is why we are an economic power. From the whip to the waterboard, America has used violence to ensure its safety, its fortune, and its power. “Violence,” as H. Rap Brown said, “is as American as apple pie.”

The argument that America doesn’t torture isn’t just an interpretation of history, of course. It’s also a moral argument; Sullivan, and those like him, point to American values and decency in an attempt to create a country that does not torture. Thinking of Bush’s torture as exceptional is a way to make it unacceptable. Sullivan argues this is not who we are. He may be historically wrong, but shouldn’t we ignore that in the interest of advancing moral right?

The problem is that the argument for American virtue is not, ultimately, a rejection of the logic of torture. It’s an endorsement of it. Torture in the war on terror is justified on the grounds that those people, over there, are irredeemably evil and dangerous. We are good and decent and moral; therefore, anything we do in response to evil is also good and decent and moral—or at worst a tough choice by good men and women manning the ramparts. The videos of ISIS beheadings show once and for all who is in the right. America would never do anything like that, so whatever America does in response is justified. Decency and morality are defined not to exclude torture, but to excuse it.

Reading Baptist, though, it’s clear that America has not, in fact, been morally superior to ISIS, or to anyone, up to and including the Nazis. Yes, we don’t have slavery anymore—but the nation’s current wealth, its current power, and its current place in the world were created out of a nightmare system of cruelty and greed. Nor did that system end, full stop, in 1865. Instead, it ground on, through Jim Crow and lynchings and on to our present prison system, where torture of various kinds is hardly unknown.

When America tortures, then, it is not doing so despite its decency, or to protect its decency. It is doing so as part of a lengthy tradition of almost indescribable violence. We have no virtue that excuses our brutality. We need to avoid torture not because it is incompatible with American ideals, but because it is all too compatible. In forgetfulness or in repose, America’s hands, if not checked, still feel for the whip.

The smart mouse with the half-human brain

The smart mouse with the half-human brain

by Andy Coghlan

What would Stuart Little make of it? Mice have been created whose brains are half human. As a result, the animals are smarter than their siblings.

The idea is not to mimic fiction, but to advance our understanding of human brain diseases by studying them in whole mouse brains rather than in dishes.

The altered mice still have mouse neurons – the “thinking” cells that make up around half of all their brain cells. But practically all the glial cells in their brains, the ones that support the neurons, are human.

“It’s still a mouse brain, not a human brain,” says Steve Goldman of the University of Rochester Medical Center in New York. “But all the non-neuronal cells are human.”

Rapid takeover

Goldman’s team extracted immature glial cells from donated human fetuses. They injected them into mouse pups where they developed into astrocytes, a star-shaped type of glial cell.

Within a year, the mouse glial cells had been completely usurped by the human interlopers. The 300,000 human cells each mouse received multiplied until they numbered 12 million, displacing the native cells.

“We could see the human cells taking over the whole space,” says Goldman. “It seemed like the mouse counterparts were fleeing to the margins.”

Astrocytes are vital for conscious thought, because they help to strengthen the connections between neurons, called synapses. Their tendrils are involved in coordinating the transmission of electrical signals across synapses.

Human astrocytes are 10 to 20 times the size of mouse astrocytes and carry 100 times as many tendrils. This means they can coordinate all the neural signals in an area far more adeptly than mouse astrocytes can. “It’s like ramping up the power of your computer,” says Goldman.

Intelligence leap

A battery of standard tests for mouse memory and cognition showed that the mice with human astrocytes are much smarter than their mousy peers.

In one test that measures ability to remember a sound associated with a mild electric shock, for example, the humanized mice froze for four times as long as other mice when they heard the sound, suggesting their memory was about four times better. “These were whopping effects,” says Goldman. “We can say they were statistically and significantly smarter than control mice.”

Goldman first reported last year that mice with human glial cells are smarter. But the human cells his team injected then were mature so they simply integrated into the mouse brain tissue and stayed put.

This time, he injected the precursors of these cells, glial progenitor cells, which were able to divide and multiply. That, he says, explains how they were able to take over the mouse brains so completely, stopping only when they reached the physical limits of the space.

Species cross

“It would be interesting to find out whether the human astrocytes function the same way in the mice as they do in humans,” says Fred Gage, a stem cell researcher at the Salk Institute in La Jolla, California. “It would show whether the host modifies the fate of cells, or whether the cells retain the same features in mice as they do in humans,” he says.

“That the cells work at all in a different species is amazing, and poses the question of which properties are being driven by the cell itself and which by the new environment,” says Wolfgang Enard of Ludwig-Maximilians University Munich in Germany, who has shown that mice are better at learning if they have the human Foxp2 gene, which has been linked with human language development.

In a parallel experiment, Goldman injected immature human glial cells into mouse pups that were poor at making myelin, the fatty material that insulates nerve fibres. Once inside the mouse brain, many of the human glial cells matured into oligodendrocytes, brain cells that specialise in making the insulating material, suggesting that the cells somehow detected and compensated for the defect.

This could be useful for treating diseases in which the myelin sheath is damaged, such as multiple sclerosis, says Goldman. He has already applied for permission to treat MS patients with the glial progenitor cells and hopes to start a trial in 12 to 15 months.

Still a mouse

To explore further how the human astrocytes affect intelligence, memory and learning, Goldman is already grafting the cells into rats, which are more intelligent than mice. “We’ve done the first grafts, and are mapping distributions of the cells,” he says.

Although this may sound like the work of science fiction – think Deep Blue Sea, where researchers searching for an Alzheimer’s cure accidentally create super-smart sharks, or Algernon, the lab mouse who has surgery to enhance his intelligence, or even the pigoons, Margaret Atwood’s pigs with human stem cells – and human thoughts – Goldman is quick to dismiss any idea that the added cells somehow make the mice more human.

“This does not provide the animals with additional capabilities that could in any way be ascribed or perceived as specifically human,” he says. “Rather, the human cells are simply improving the efficiency of the mouse’s own neural networks. It’s still a mouse.”

However, the team decided not to try putting human cells into monkeys. “We briefly considered it but decided not to because of all the potential ethical issues,” Goldman says.

Enard agrees that it could be difficult to decide which animals to put human brain cells into. “If you make animals more human-like, where do you stop?” he says.


5 Emerging Technologies That Could Destroy The World

5 Emerging Technologies That Could Destroy The World

By Glyn Taylor

As the rate of our technological advancement accelerates, we are seeing the rise of revolutionary new technologies. Even from an optimistic perspective, many potential threats can be foreseen.

In his book ‘Our Final Century: Will the Human Race Survive the Twenty-first Century?‘, Martin Rees concludes that humanity has just a 50/50 chance of surviving the 21st century. We refrain from applying such odds, as the century is fraught with unpredictability and even incomprehensibility. There are certain future developments, though, that we know will happen (should technological advance continue uninterrupted).

1 & 2 – Nanotechnology & 3D Printers

Nanotechnology describes a wide variety of technologies and materials that share one thing in common: they are incredibly small. Typical nanostructures are about the width of a strand of DNA (2 nanometers), roughly 50 thousand times thinner than a strand of hair. We are already seeing the emergence of nanotechnology; you can read about that in our article, ‘Phase One of the Nanotechnology Revolution has Begun‘. Nanotechnology will really start to make an impact on our world in the next couple of decades; you can read about that in our article, ‘Superhumans Created by Nanotechnology within 30 years‘.
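As a quick sanity check of that scale comparison (my own back-of-envelope arithmetic, assuming a human hair is roughly 100 micrometers wide):

```python
# Rough check of the "50 thousand times smaller than a hair" comparison.
# Assumption (mine, not the article's): a human hair is about 100 micrometers wide.

dna_width_nm = 2              # width of a DNA strand, as stated above
hair_width_nm = 100 * 1000    # 100 micrometers expressed in nanometers

print(f"A hair is about {hair_width_nm / dna_width_nm:,.0f} times wider than a DNA strand")
# -> about 50,000, matching the figure quoted in the article
```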

Why we need nanotechnology


Just as the industrial revolution ushered in the world we know today, nanotechnology will soon change our world beyond comprehension. It is predicted to cure all current types of illness, even aging. It will lead to massive improvements in battery and solar power, ending our dependence on the Earth’s gas, coal, and oil resources. Nano-fabricators will allow us, in our homes, to 3D-print almost anything by building it from scratch. Quantum computing will create computers that are billions of times more powerful than the ones we have today. Many more innovative examples exist; nearly everything you know will be dramatically enhanced by nanotechnology.

But how it could destroy us

Yes, but the risk of misuse of these breakthroughs rises along with the benefits. There are many potential threats that could arise through nanotechnology misuse or accident, some of which pose an existential risk (a risk that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential).

The Blood Brain Barrier

One of the earliest and simplest threats could arise from the ability of ‘passive’ nanostructures to pass through the blood-brain barrier. This barrier is a tightly knit layer of cells that affords the brain the highest possible protection from the simple chemicals and microorganisms that could harm it. Neuroscientists are purposefully engineering nanoparticles that can cross the blood-brain barrier to deliver medicines in a targeted and controlled way directly to diseased parts of the brain. This ability could also be exploited by the malevolent to create new forms of biological and chemical weaponry. These new biological and chemical WMDs would use simple chemicals that are otherwise non-toxic (harming the brain but nothing else once delivered across the barrier), and would consequently be very hard for authorities to detect. At this simple stage, though, such a weapon is unlikely to pose an existential risk, as it would not be contagious. Further advances, however, will see the rise of ‘active’ nanostructures, which pose a far greater threat. These are essentially nano-sized robots that can be programmed to perform specific tasks, such as attacking certain materials: metals, water, internal organs, or specific DNA sequences.

Apocalyptic Scenario 1: Human-Killing Nanobots


Active nanostructures (nanobots) could be programmed to specifically target and kill humans. The smallest insect is about 200 microns across; this provides a plausible size estimate for a nanotech-built antipersonnel weapon capable of seeking out unprotected humans and injecting them with toxin. The human lethal dose of botulinum toxin is about 100 nanograms, or about 1/100 the volume of such a weapon. As many as 50 billion toxin-carrying devices—theoretically enough to kill every human on earth—could be packed into a single suitcase.
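Those figures can be sanity-checked with a little arithmetic. The sketch below is my own rough calculation, not part of the original source; it assumes the device is roughly a 200-micron cube and that the toxin is about as dense as water.

```python
# Back-of-envelope check of the "1/100 the volume of the weapon" figure quoted above.
# Assumptions (mine, not the article's): the device is roughly a 200-micron cube and
# botulinum toxin has roughly the density of water (1 g/cm^3 = 1000 kg/m^3).

device_side_m = 200e-6                    # 200 microns, the size of the smallest insect
device_volume_m3 = device_side_m ** 3     # ~8e-12 m^3

toxin_mass_kg = 100e-9 / 1000             # 100 nanograms expressed in kilograms
toxin_volume_m3 = toxin_mass_kg / 1000.0  # divide by water-like density -> ~1e-13 m^3

print(f"device volume: {device_volume_m3:.1e} m^3")
print(f"toxin volume:  {toxin_volume_m3:.1e} m^3")
print(f"toxin is roughly 1/{device_volume_m3 / toxin_volume_m3:.0f} of the device volume")
# -> roughly 1/80, in the same ballpark as the "about 1/100" quoted above
```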

Anti-technology terrorism could also increase significantly in the future, perhaps in response to the imminent development of an Artificial General Intelligence, which the anti-technologists may believe could turn against humanity. They may see it as necessary to release ‘human-killing nanobots’ to prevent an AGI from rising. They could create the nanobots using advanced molecular 3D printers, which by the 2040s will be owned by many production companies, universities and research centers. Eventually, such printers will be available in all homes.

Apocalyptic Scenario 2: Gray Goo

Another nanobot threat could arise from the Gray Goo Problem. Gray Goo is easily defined as ‘runaway nanobots’: a swarm of rapidly self-replicating nanobots that, in a ravenous quest for fuel, would consume the entire biosphere until nothing remained but an immense, sludge-like robotic mass. Such a swarm would be incredibly difficult to build, though, so an accidental release is very unlikely. As time passes it will become easier to build, and if regulation is not put in place to allow it to be detected and disabled, a terrorist organisation could release it.

3 & 4 – Artificial General Intelligence & Big Data

AGI is defined as a ‘conscious’ artificial construct that would be an intellectually independent thinker, just as humans are. Ray Kurzweil (Director of Engineering at Google) and many others predict that the first AGI will be operational before 2030.

Big Data is what we are creating through our use of social networks and smartphone apps. The pool of data will become so immense that it can be assembled and utilised in new apps with the capability of understanding us and our world. Google could use the data not only to respond accurately to your search requests, but to preempt what those requests will be. Big Data will drive forward a world where everything is connected in what is called the ‘Internet of Things‘. Our world will become ‘smart’, and that will give an AGI a far better understanding of it, and potentially control over it.

Why we need Artificial General Intelligence

When we develop an AGI that is significantly more intelligent than human beings, our rate of evolution will rapidly accelerate. The world that would be created as a consequence is mostly incomprehensible to us now. The level of the resulting technological advance and incomprehensibility has been referred to as the ‘Technological Singularity‘ (an event horizon beyond which nothing can be predicted). What we hope, though, is that it will define the next stage of our evolution: we will transcend our biology; our Earth-bound reality will be only the beginning; we can explore the universe, explore our existence, explore our creator; we can become the creator; become gods.

But how it could destroy us

Optimistically, our future is a wondrous one. The reality, though, is fraught with security concerns. The main concern with AGI arises from what we have all seen in the movies – our machines turning against us. The University of Cambridge has launched a Centre for the Study of Existential Risk, one of whose primary tasks is studying AGI risk.

The exact development path of AGI is as yet largely unknown. It could involve the integration of an already adult human mind (as will be seen in the 2014 Johnny Depp movie Transcendence); it could involve activating an artificial mind that learns from scratch, just as a human child does; or it could be a supercomputer tweaked with added hardware that allows it to become conscious.

Apocalyptic Scenario 3 – The Terminators


One potential threat is an AGI with a limited capacity to think intellectually (Weak AI). The analogous threat has already been seen in humans: it translates as small-mindedness, or selfishness. An example of a small-minded group would be Al-Qaeda; its members have shown themselves incapable of considering other possibilities; they believe their religion is all that matters. As a consequence, it is not possible to reason with them, no matter how intelligent they are; they believe they are morally superior.

If an Artificial Intelligence were ‘small-minded’, it would oppose all views not in line with its own, even if other opinions were rationally more likely to be correct. It would see all other ideologies as a threat. If humanity as a whole were in disagreement with it, the possibility arises of a war in which we would be stripped of our technological advantage, or made extinct. Big Data could allow this rogue Weak AI to take control of all we know by unleashing a ‘superbug’, seizing our cars, home appliances, power stations, communications, weapons and so on; plunging humanity back into the dark ages, or killing us off altogether.
5 – Biotechnology


The term biotechnology is used to describe “any technological application that uses biological systems, living organisms or derivatives thereof, to make or modify products or processes for specific use” (definition from the UN Convention on Biological Diversity). Depending on the tools and applications, the term often overlaps with or encompasses the fields of biomedical engineering, tissue engineering, biopharmaceutical engineering, genetic engineering, chemical engineering, bioprocess engineering, bioinformatics (a new branch of information technology), and biorobotics. Biotechnology is also researched and developed at the nanoscale, and so may also be considered a nanotechnology.

Why we need biotechnology

As is obvious from the number of fields biotechnology touches, it is set to make a huge impact on the world. Most exciting is the impact it will have on health care. Gene therapy is currently being used to pursue cures for diseases such as cystic fibrosis, AIDS and cancer. And to go beyond exciting, into what most consider not even possible, it is believed that developments in telomerase gene therapy could lead to effectively indefinite lifespans. Visit the website of the SENS Foundation to see how humanity could bioengineer its immortality. You can read more about immortality on our website.

But how it could destroy us

With the ability to reengineer our own biology comes the ability to easily destroy it. The Cambridge Project for Existential Risk notes that “because the seriousness of these risks is difficult to assess, that in itself seems a cause for concern”. The reality is that virtually any imaginable disease could potentially be created.

Bio Weapons

The most dangerous examples are engineered diseases that are contagious, airborne, long-lived and able to resist antibiotic attack. These new diseases could target anything their creator wishes, for example: the reproductive organs, destroying humanity’s ability to reproduce; organisms with short telomere lengths, killing only people above a certain age; the eyes, blinding everybody; or only certain ethnic groups, or a certain gender. They could be used to change our genetics, perhaps even turning science fiction into reality with the creation of what could be considered zombies.

Current biological weapons are simple organisms produced through natural growth, not genetically modified through gene therapy. The coming phase of bioweaponry will feature organisms that have had their genes manipulated, giving them new pathogenic characteristics (increased survivability, infectivity, virulence, stealthy dormancy, drug resistance, and so on). Bioengineering of this type is referred to as ‘black biology’, and its products as ‘Chimeras’.

Apocalyptic Scenario 4: Jihad Chimera

Some scientists predict that within 20 years, biotech research will have advanced far enough to allow biologists to switch their talents to black biology and, with relative ease, create advanced Chimera pathogens that are resistant to biological defences (antibiotics and known antidotes). The reasons a biologist might want to do this are many. In this scenario, we will feature Islamic fundamentalists in a hypothetical future where their ideology has almost been eradicated.

The year is 2035. The majority of the world has been democratised, and all but one country refuse to tolerate Islamist elements in their governments; people want to be led by democratic governments that do not hold religious or ethnic bias. Iran is that one country, and has kept its religious rule; it has become ‘the last stand’ for violent radical Islamists, who have flocked there for protection because of the lingering majority public support for Islamist rule. The country had been shielded from international sanctions by Russia and China, on the basis of ‘not interfering in other countries’ affairs’.

However, it is no longer in Chinese and Russian interests to continue defending Islamist Iran. Iran is left alone, and living conditions in the country rapidly deteriorate. Iranians revolt against their leadership in desperation. Radical Islam now faces its final demise, as does the long-ruling extremist regime.

In anticipation of this, the regime had been running a black biology program in its planning for a final Jihad. A new Chimera pathogen has been developed (we will call it the Jihad Chimera).

In 2025, gene therapy had led to a complete cure for cancer. Everybody in the world had been vaccinated. The vaccine involved the addition of certain genes into the human genome.

The Jihad Chimera works by seeking out the cancer-inhibiting gene. It sits on the gene, deactivating it, and acts as a carcinogen, making it certain that infected people will, within a few years, develop cancer. Once cancer cells develop, the Jihad Chimera activates, releasing toxins that accelerate the rate of cancer growth, making the cancer untreatable again and far more aggressive than traditional cancers.

The Jihad Chimera is released by the Iranians throughout the world in 2035. It is even more infectious than the common cold, and by 2036 all of humanity is silently infected. By 2037 the cancer pandemic begins. Attempts at developing cures are made, but by 2040 everybody has terminal cancer.


What do you think?

However unlikely the above apocalyptic scenarios are, technological threats will keep increasing. It is up to us now to avoid these scenarios and all other potential threats that could arise from our rapidly accelerating technological advances. The answer is not to ban technological advance (as some have proposed); the answer is strict regulation and surveillance. The regulation does not have to slow down progress, nor does the surveillance have to encroach on our liberties. What both need to do, though, is keep dangerous tech out of the hands of the malevolent. Our defense will be powered by our advancing tech. Our threats will be powered by… What do you think?