Supernormal Stimuli: Your Brain On Porn, Junk Food, and the Internet

by Gregory Ciotti

Given the rapid pace of technology, one has to wonder whether our brains (and bodies) have been able to keep up with all the new “stimulation” that is now available.

Some research suggests that a few of the things we enjoy today would be classified as supernormal stimuli, a term evolutionary biologists use to describe any stimulus, even an artificial one, that elicits a stronger response than the stimulus for which the response evolved. In other words, are sources of “super” stimulation like junk food and porn more likely to hook us into bad habits?

It is certainly a muddy topic, but it’s a question I believe deserves investigation.

After all, we’ve become increasingly surrounded by stimulation that wasn’t available even a few years ago, so are my mind and body really ready for Flavor Blasted Goldfish™ and never-ending social media updates?

Before we get into the research, let’s summarize the concept a bit more clearly: what exactly is a supernormal stimulus?

The brilliant comic below will explain the basics, and will take you less than two minutes to read.

Be Aware: Supernormal Stimuli


Comic: by the insanely talented Stuart McMillen, published with permission. More about Stuart and his work at the bottom of the post.

When “Super” Stimulation Goes Wrong

Nikolaas Tinbergen, a Nobel Prize-winning ethologist, coined the term supernormal stimuli. As noted, Tinbergen found in his experiments that he could create “artificial” stimuli that were stronger than the original instinct, including the following examples:

  • He constructed plaster eggs to see which a bird preferred to sit on, finding that they would select those that were larger, had more defined markings, or more saturated color—a dayglo-bright one with black polka dots would be selected over the bird’s own pale, dappled eggs.
  • He found that territorial male stickleback fish would attack a wooden fish model more vigorously than a real male if its underside was redder.
  • He constructed cardboard dummy butterflies with more defined markings that male butterflies would try to mate with in preference to real females.

In a very quick span of time, Tinbergen was able to influence the behavior of these animals with a new “super” stimulus that they found themselves attracted to, and which they preferred over the real thing.

Instinct took over, and the animals’ behavior became a detriment to their survival because they simply couldn’t say no to the fake stimulus.

Much of Tinbergen’s work is beautifully captured by Harvard psychologist Deirdre Barrett in the book Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose. One has to wonder if the leap from these findings to human behavior is near or far.

Dr. Barrett seems to think that the link is closer than we believe, arguing that supernormal stimuli govern the behavior of humans as powerfully as that of animals.

The hypothesis is that, just like Tinbergen’s quick introduction of abnormal stimulation to animals, rapidly advancing technology may have created a similar situation for humans. Can we really be “prepared” for some of our modern, highly stimulating experiences, given how little time we’ve had to adapt?

It’s very hard to say—you’ll find excellent arguments from both camps.

Here are a few common examples that are often brought into question:

(Note: please read the full article. I’m not saying that you should never engage with the following, or that the examples below are conclusive, or that they are the “norm,” not at all in fact! They are merely brought up out of curiosity.)

Junk food

1.) The highly addictive nature of junk food is one of our generation’s great concerns—food is being engineered specifically to be more appealing than its natural counterparts. Is it any wonder then that when fast food is more thoroughly introduced to other countries, people start consuming it more often?

2.) It could be argued that for a large span of time humans had a relatively stable palate. Now a new food “concoction” comes out every week. How might this be affecting us? Some studies have suggested that foods like processed grain came about far too quickly and are doing quite a number on our minds and bodies.

3.) Food is one of the toughest things to struggle with because it’s an absolute necessity. The problem with junk food is that it is a “super stimulating” version of a natural reward we are supposed to pursue. Food addiction is the real deal, and a tough habit to break because the triggers are ever present.

TV & video games

1.) A quick peek in my home office would show a still functioning Super Nintendo hooked up with Chrono Trigger ready to go. I don’t think that video games cause excessively violent behavior (research agrees), but I do have to admit that it seems video games may be addictive for some people, and in particular, for certain personalities.

2.) Heavy television viewing may lead some users to exhibit the signs of a behavioral addiction: they often watch TV to change their mood, but the relief they get is only temporary, which often brings them back for more.

3.) You’re likely not surprised to hear that computer games have been linked to escapism, but what you may not know is that some studies have found withdrawal-like effects in a very small subset of subjects; they became moody, agitated, and even showed physical symptoms of withdrawal.

Pornography

1.) Probably the most controversial of all modern stimuli, pornography has been described as insidious in nature because it might skew the otherwise normal activity of sex. Porn has been linked to changing sexual tastes, and some argue that porn can become a “never-ending” supply of dopamine (though there are few conclusive studies done on porn and the mind).

2.) There’s a passage from a Kurt Vonnegut novel where a man shows another man a photograph of a woman in a bikini and asks, “Like that Harry? That girl there.” The man’s response is, “That’s not a girl. That’s a piece of paper.” Those who warn of porn’s addictive nature always emphasize that it is not a sexual addiction but a technological one. Could porn impact the way you view the real thing?

3.) It’s been suggested that pornography messes up the “reward circuitry” in human sexuality: why bother trying to pursue and impress a potential mate if you can just go home and look at porn? This has been argued to be the beginning of porn addiction, as novelty is always a click away, and novelty is closely tied to the dopamine-driven seeking described below.

As psychologist Susan Weinschenk explained in a 2009 article, the neurotransmitter dopamine does not cause people to experience pleasure, but rather causes a seeking behavior. “Dopamine causes us to want, desire, seek out, and search,” she wrote.

It is the opioid system that causes one to feel pleasure. Yet, “the dopamine system is stronger than the opioid system,” she explained. “We seek more than we are satisfied.”

The Internet

1.) Unsurprisingly, psychologists are now giving serious consideration to the web, recognizing that it may be a very addictive outlet. It offers unfettered freedom to engage in nearly anything, and some countries, like Japan and South Korea, have had serious problems with reclusive, socially inept individuals who have a very unhealthy internet obsession; one story I read detailed a man who hadn’t left his apartment in six months.

2.) Social media has been shown to make many people depressed: they see the highlight reel of others and may feel worse about their own lives. These pruned and often misleading looks into others’ lives were never available before the web. Even so, people can’t stop checking them, thinking that they might be missing out on something.

3.) Internet overuse, for some people, may be hurting their ability to focus. The quick bursts of entertainment that the internet provides, and the fact that information is always a click away, may (through overuse) cause a decrease in conceptual and critical thinking. Some have argued that the internet can become a ‘chronic distraction’ that slowly eats away at your patience and your ability to think and work on things for extended periods of time.

 

What Should You Do?

This can seem like a lot to take in at once.

Before you panic, freak out, throw away all of your Oreos, and cancel your internet subscription, please listen: everything in moderation, including your reaction to the information in this article.

There is a lot of research that counters what we’ve looked at above. Explore books like The 10,000 Year Explosion for more from that perspective. In addition, consider that resources are all in how you use them.

Take the Internet: sure, there are signs that in some ways the Internet might become a distraction, but think about its contributions. The web is the best source in the world for information and knowledge, so how it affects you depends on how you make use of it.

We are all perfectly capable of using and engaging with supernormal stimuli—the only reason I chose to highlight the extreme examples above was to show how things can go wrong with overuse, or misuse.

That’s right folks, you can put away your torches and pitchforks! I’m not the enemy of junk food, the Internet, and everything awesome. My one and only goal for this article was simply exploration of the topic.

In fact, the comic above had similar intentions. The artist, Stuart McMillen, articulately describes why you shouldn’t be afraid of information like this. In many ways, it should be comforting:

In both cases, the main change is awareness. Awareness that the reason we are drawn to sickly desserts is because they are sweeter than any naturally-occurring fruit.

Awareness that watching television activates the primitive ‘orienting response’, keeping our eyes drawn to the moving pictures as if it were predator or prey. Awareness that liking ‘cute’ characters comes from a biological urge to protect and nurture our young.

I have not removed supernormal stimuli from my life, nor do I intend to do so fully. The key is spotting the stimuli as they appear, and engaging the mind to regulate or override temptation.

I echo Deirdre Barrett’s conclusion that sometimes it can feel more rewarding to say no to the supernormal, than to cave into impulse. Only awareness will help stop the supernormal from becoming what is ‘normal’ in our lives.

(Psst… you should subscribe to Stuart’s awesome newsletter to hear about a brand new comic he has coming out in 2014. Also, be sure to stop by his website and check out his other comics. He also has prints available for sale; I’ve purchased one myself and they are great. Well worth the very small price.)

You Decide What’s Normal

The “solution,” so it seems to me, is to simply avoid habituation.

The real enemy here is complacency—or allowing yourself to become a victim of your habits, instead of the person in the driver’s seat.

C.S. Lewis has some insightful thoughts on this:

Only those who try to resist temptation know how strong it is.

After all, you find out the strength of the German army by fighting it, not by giving in. You find out the strength of the wind by trying to walk against it, not by lying down.

A man who gives in to temptation after five minutes simply does not know what it would have been like an hour later.

It’s my personal opinion that mini-sabbaticals are a great way to test small dependencies on anything. The ability to go without the things we choose to do is important because it puts you back in control.

Giving something up for just a small period of time can help you understand its place in your life, especially when it’s an optional activity. If you try to stay away from something for just a few days, and you find yourself becoming anxious and agitated, that could be your body telling you something important. If you can give it up “cold turkey” with no problem, that’s important information too!

So no, don’t panic and freak out. Just recognize that your brain can get hooked by the many sources of “super” stimulation we have today, and it’s your job to make sure you are always in control.

Those who do not move do not notice their chains.

—Rosa Luxemburg

Now if you’ll excuse me, I need to get back to wasting time on the Internet.

 

 

 

The Rise and Fall of Circus Freakshows

by Zachary Crockett

“When you’re born, you get a ticket to the freak show.
When you’re born in America, you get a front row seat.” 

George Carlin

In 19th century America, gawking at people who were born with deformities was not only socially acceptable — it was considered family entertainment.

P.T. Barnum made millions by capitalizing on this. His “freakshows” brought together an amalgam of people considered to be curiosities — bearded ladies, tattooed men, the severely disfigured, and the abnormally short and tall — many of whom were unwillingly forced into the industry as young children.

Barnum hyperbolized (or altogether falsified) the origins of these performers, making them out to be beasts, rare “specimens,” and cretins. When he met criticism for “perpetuating hoaxes”, he countered that he was only on a mission to sprinkle society with a little magic: “I don’t believe in duping the public,” he wrote to a publisher in 1860. “But I believe in first attracting and then pleasing them.”

He accomplished both of these things through exploitation — yet many of his performers were paid handsome sums, some earning as much as today’s sports stars. The rise and fall of the “freakshow” business is a fascinating economic story, but also a morality tale.

P.T. Barnum and the Rise of the Freakshow

The “freak show,” or “sideshow,” rose to prominence in 16th century England. For centuries, cultures around the world had interpreted severe physical deformities as bad omens or evidence that evil spirits were present; by the late 1500s, these stigmas had translated into public curiosity.

Businessmen scouted people with abnormalities, scooped them up, and shuttled them throughout Europe, charging small fees for viewings. One of the earliest recorded “freaks” of this era was Lazarus Colloredo, an “otherwise strapping” Italian whose brother, Joannes, protruded, upside down, from his chest.

The conjoined twins “both fascinated and horrified the general public,” and the duo even made an appearance before King Charles I in the early 1640s. Ostracized by society, people like Lazarus eagerly capitalized on their unique conditions to make a little cash, even if it meant being made into a public spectacle. But until the 19th century, freak shows catered to relatively small crowds and didn’t yield particularly healthy profits for showmen or performers.

Meanwhile, in Connecticut, P.T. Barnum had already established himself as a brilliant marketer. By 19, as a storekeeper and early lottery-ticket seller, he’d already married deception and showmanship, and was making nearly $500 per week in profit ($10,700 in current dollars).

But when the U.S. government passed an anti-lottery law, Barnum found himself out of work and moved to New York City. In 1835, inspired by stories he’d heard from England, he purchased a blind, paralyzed slave woman, fabricated a sensational story (that she was 160 years old and had been George Washington’s nurse), and charged viewers to see her in person.

A year later, the woman passed away and a death report confirmed that she was only 80 years old — but Barnum’s viewers didn’t seem to care: they had been captivated by his storytelling and his deceptions had become “irrefutable truths.” Barnum had purchased the slave for $1,000, and made nearly that amount every week from his “investment.”

Barnum also mastered the art of colorful trickery. His first major hoax, in 1842, was the “Feejee mermaid” — a “creature with the head of a monkey and the tail of a fish.” The specimen — really the torso and head of a juvenile monkey sewn to the back half of a fish — was originally sold to an Englishman by Japanese sailors in 1822, for $6,000 ($103,500 today). After being displayed for some time in London, it found its way to New York, where Barnum negotiated to lease it for $12.50 per week.

Barnum embarked on a ferocious campaign to convince his crowds that the creature was real, planting fake newspaper articles and even weaseling his way into the American Museum of Natural History. He fabricated a story about the mermaid’s discovery and distributed over 10,000 pamphlets. In a matter of weeks, he had the public’s attention.

Barnum purchased the American Museum on Broadway in New York and, throughout the 1840s, introduced a “rotating roster of freaks: albinos, midgets, giants, exotic animals” and anyone else who piqued the curiosities of the public. To advertise the space, he hired the most unskilled musicians he could find and had them play on the building’s balcony, in the hopes that this “terrible noise” would attract customers.

Under his leadership, the American freakshow became a booming business — both highly profitable and degrading for its performers.

Charles Stratton: “General Tom Thumb”

Following the success of his Feejee Mermaid, Barnum set out to find extraordinary humans who he could partner with (and capitalize on). He’d heard of a distant cousin, Charles Stratton, who had an incredible abnormality, so he went to investigate.

Stratton had been born to average-sized parents, and he developed at a normal rate until he was six months old, at which point he measured 25 inches tall and weighed 15 pounds. By age five, he hadn’t grown an inch. Barnum partnered with the boy’s father, taught the child to sing, dance, and impersonate famous figures (Cupid, Napoleon Bonaparte), and, in 1844, took him on his first tour around America.

Barnum “re-branded” Stratton as “General Tom Thumb” — “The smallest person who ever walked alone” — and told onlookers that the five year old was actually eleven. After incredible success, the two embarked to Europe, where Queen Victoria became enamored with the act; Stratton was mobbed by crowds wherever he went and achieved international stardom.

By the late 1860s, Barnum had made Stratton a well-to-do man. For the better part of fifteen years, Stratton was paid upwards of $150 per week ($4,100 today) for his performances, and, upon retiring, lived in New York’s “most fashionable neighborhood,” owned a steam yacht, and wore only the finest clothes.

Barnum was equally smitten with his new partnership: the European tour paid him so handsomely that he nearly purchased William Shakespeare’s birth home. His earnings extended well into the hundreds of thousands — money he reinvested in his business, and used to purchase his first large museum. He re-named it “Barnum’s Museum,” and by 1846, it was drawing 400,000 visitors a year.

William Henry Johnson: “Zip the Pinhead”

William Henry Johnson was born to impoverished, newly-freed slaves in New Jersey, in 1842. While he possessed a very subtle physical deformity (his head was slightly microcephalic, or cone-shaped), a local showman capitalized on and exaggerated it. Johnson began performing in sideshows in the mid-1850s.

In 1860, P.T. Barnum recruited him, and transformed him into “Zip,” a “different race of human found during a gorilla trekking expedition near the Gambia River in western Africa.” His head was shaved, save for a small tuft on top, and he was dressed in a head-to-toe suit of fur. Darwin had recently published On the Origin of Species, and Zip was promoted as a “missing link,” a beacon of evolutionary proof.

Barnum displayed Zip in a cage and ordered that he only grunt; Zip was paid “one dollar a day” to keep quiet and stay in character. Barnum also had him play a violin, so badly that spectators often paid him to stop.

He quickly became a star in Barnum’s rotation of “freaks,” garnering attention from the likes of author Charles Dickens and other celebrities. For his efforts, Zip was rewarded handsomely: Barnum paid him $100 per performance (of which he often had 10 per week) and purchased him a lavish home in Connecticut.

Zip was known not just for his curious origins, but for his upbeat, positive demeanor; as one spectator wrote, “he amuses the crowd and the crowd amuses him.” His showmanship extended far beyond Barnum’s eventual death, and he performed into his late eighties. He was also a masterful marketer: during 1925’s Scopes Trial, he offered himself as living proof of evolution, generating a massive amount of publicity.

Though he played a fool for the duration of his life, and was exploited, Zip was frugal and retired a millionaire. On his deathbed he reportedly told his sister, “Well, we fooled ‘em for a long time,” implying that, for decades, he’d been conning not only his audiences but sideshow operators into believing that he was mentally incapacitated.

Chang and Eng Bunker: “The Siamese Twins”

Born in 1811 in a small Siamese fishing village, Chang and Eng were conjoined twins, connected by a four-inch ligament at the chest. In the late 1820s, a British merchant established a contract with the twins and exhibited them around Europe and America for three years. Subsequently, the two split from the showman, started their own American sideshow, and gained great fame. The term “Siamese twins” was coined by a doctor who witnessed the two perform.

By 1838, at age 29, they retired with $60,000 ($1.3 million today), and settled in North Carolina, where they bought a 100-acre farm and operated a plantation. After adopting “Bunker” as an “American” last name, they became naturalized U.S. citizens and met a pair of sisters, whom they wed. They constructed two separate houses on their property and traded off three-day time slots in which each could spend time with his wife. Combined, the two fathered 21 children.

After running out of money in 1850, they reinstated their career as sideshow performers and signed a contract to tour with P.T. Barnum. For the next twenty years, they performed intermittently; they died four hours apart in 1874, leaving a great fortune to their wives.

Captain Costentenus: “The Tattooed Man”

George Costentenus, America’s “first tattooed side act,” was so committed to his stage story that little is known today about his actual origins. Born in 1836, he claimed to be a Greek-Albanian prince raised in a Turkish harem.

His 338 tattoos covered nearly every inch of his body (save for his nose and the soles of his feet) and were incredibly ornate, depicting Burmese-specific species and symbols from Eastern mythology: snakes, elephants, storks, gazelles, dragons, plants, and flowers of all sorts intermingled on his skin.

According to Costentenus’s tale, he’d been on a military expedition in Burma when he and three others were captured by “savage natives” and offered a choice: they could either be cut into pieces from toe to head, or receive full-body tattoos and be liberated, should they survive the excruciating process. The soldiers chose the latter option; the tattooing took three months and killed Costentenus’s companions.

To convince the world that his story was true, he even went so far as to publish a 23-page book in 1881, documenting every detail of his alleged experience. It wasn’t until much later in his career that Costentenus admitted he had fabricated this tale in the hopes of attaining fame and fortune. And that he did.

In the 1870s, he partnered with P.T. Barnum and became the American Museum’s highest grossing act, taking home more than $1,000 per week (an impressive $37,000 per week today). An 1878 clipping from the New York Times elaborated on his wealth:

“He wears very handsome diamond rings and other jewelry, valued altogether at about $3,000 [$71,500 in 2014 dollars] and usually goes armed to protect himself from persons who might attempt to rob him.”

“Half the people who visited [Captain Costentenus,] this last specimen of Grecian art, looked as if they would be quite willing to go through the process of having their skins embroidered, if thereby they could insure a comfortable living without labor.”

Upon his death, Costentenus donated half of his fortune to the Greek Church, and the other half to fellow freak show performers who were less fortunate.

Fedor Jeftichew: “The Dog-Faced Boy”

Born with a hair-covered face in 1873, Fedor was destined to follow in his father’s footsteps. Adrian Jeftichew, known throughout Europe as “The Siberian Dog-Man,” was highly superstitious and believed that both he and Fedor were on the receiving end of divine punishment.

Following his father’s early death, the 16-year-old Fedor became a ward of the Russian government. By this time, he was “covered with long, silky, fur-like hair that grew thickest on his face.” While the public perceived him to be animal-like and savage, he was in fact inquisitive, soft-spoken, and shy.

Fedor was adopted by a cold-hearted showman, brought to England, and advertised as “the boy who was raised by wolves in Siberian wilderness.” The ever-enterprising P.T. Barnum saw the act, purchased the boy’s contract, and transitioned him to the United States in 1884. But, as with any of his performers, Barnum needed to embellish Fedor’s story.

The child became “Jo-Jo The Dog-Faced Boy,” and played the stereotype of the Victorian naturalist’s approximation of a prehistoric man: he’d been found in a cave deep in the forests of central Russia, feeding on berries and hunting with a rudimentary club; after enduring a bloody battle to capture the “beast,” hunters taught him to walk upright, wear clothes, and speak like a dignified human.

Barnum dressed Fedor in a Russian cavalry uniform and had him play up his savage nature, “barking, growling, and baring his teeth” at onlookers. Throughout the 1880s, Fedor was among the highest-paid performers in the business, netting $500 per week ($13,000 today). By the time of his retirement, his savings totaled nearly $300,000 ($7.6 million).

Death of the Freakshow

By the 1890s, freakshows began to wane in popularity; by 1950, they had nearly vanished.

For one, curiosity and mystery were quelled by advances in medicine: so-called “freaks” now received real, scientifically explained diagnoses. The shows lost their luster as physical and medical conditions were no longer touted as miraculous and the fanciful stories told by showmen were increasingly discredited by hard science. As spectators became more aware of the grave nature of the performers’ conditions, wonder was replaced by pity.

Movies and television, both of which rose to prominence in the early 20th century, offered other forms of entertainment and sated society’s appetite for oddities. People could see wild and astonishing things from the comfort of a theater or home (by the 1920s), and were less inclined to spend money on live shows. Media also made realities more accessible, further discrediting the stories showmen told: for instance, in a film, audience members could see that the people of Borneo weren’t actually as savage as advertised by P.T. Barnum.

But the true death knell of the freakshow was the rise of disability rights. Simply put, taking utter delight in others’ physical misfortune was finally frowned upon.

The Moral Debate

Even at their peak, these shows had been vehemently critiqued as exploitative and demeaning. In 1861, British historian Henry Mayhew wrote a study in which he dismissed them as “nothing more than human degradation”:

“Instead of being a means for illustrating a moral precept, [freakshows] turned into a platform to teach the cruelest debauchery…The men who preside over these infamous places know too well the failings of their audience.”

Undoubtedly, most early sideshow performers were taken advantage of, manipulated, and pushed into the industry unwillingly. Only one of the performers we’ve profiled above, Captain Costentenus, entered the trade of his own volition (and, fittingly, he made himself an oddity rather than being born one).

In the 1950s, Carol Grant, a 16-year-old with a deformity, was highly offended after attending a sideshow and sent a letter to North Carolina’s agricultural commissioner. “Handicapped people are seeking more in life than being stared at in a sideshow,” she wrote. The letter garnered national attention and raised a debate: should performers have the option of appearing in these shows if they choose to do so?

Harvey Boswell, a sideshow operator and paraplegic, responded to Grant:

“I’m stared at but it doesn’t bother me. Nor does it bother the freaks when they are stared at on their way to the bank to deposit the $100, $150, $200, and even $500 per week that some of the more sensational human oddities receive for their showing in the sideshows.”

Long-time showman Bobby Reynolds also pitched in his two cents:

“If you’re a mutation of sorts, the biggest thrill you get is opening the mailbox and getting a check from the government. Or you get put in an institution. People say ‘Oh, you took advantage of those people!’ We didn’t take advantage of those people. They were stars! They were somebody. They enjoyed themselves.”

Nonetheless, by the mid-20th century, remaining performers migrated into traveling carnivals or museums, making only a fraction of what they made a few decades prior. Many who’d relied on sideshows to make a handsome living died in destitution, with little to no disability support.

Modern Incarnations

While freak shows were ousted for their questionable morals, they exist today — just not in the traditional sense.

Television network TLC, for instance, has proved that curiosity still sells. Just as P.T. Barnum exploited “Fat Boy” Ulack Eckert, TLC’s “My 600 Pound Life” exploits the sensational aspects of America’s morbidly obese. The network’s “Little People, Big World” light-heartedly portrays the struggles of a dwarf couple, as Barnum did with Tom Thumb. “The Man With Half a Body,” and “I Am the Elephant Man” each prey on the same prying eyes that funded freak shows throughout the 1800s. And, like their predecessors, these modern-day “stars” are paid for their exploitation — up to $8,000 per episode.

There are also those like Eric Sprague, who’s transformed himself into “Lizardman.” With head-to-toe green scale tattoos, a split tongue, and filed teeth — all products of personal volition — he brands himself as a “professional freak.” Just in case there’s any doubt as to whether or not this is true, “FREAK” is prominently inked across his chest. Sprague is part of a body modification subculture that explicitly seeks “freak stardom.”

But for the majority of the 19th century’s “freaks,” notoriety wasn’t a choice. They grew to accept their lifestyles and appreciate wealth and fame, but paid for it in other ways. Frank Lentini, a three-legged man who was once dubbed “King of the Freaks,” confirmed this in a newspaper interview at the turn of the century.

“My limb does not bother me,” he wrote, “as much as the curious, critical gaze.”

Is This Mind-Controlled Exoskeleton Science or Spectacle?

By Greg Miller

If you tune in to the opening ceremony of the World Cup in São Paulo on June 12, you might see something truly spectacular. If things go according to plan, a paralyzed young adult will walk onto the field and kick a soccer ball, assisted by a robotic exoskeleton operated by the person’s brain.

The man behind this bold plan is Miguel Nicolelis, a Brazilian-born neuroscientist based at Duke University. Nicolelis is a leading researcher in the field of brain-machine interfaces, devices that tap electrical signals from the brain to operate computers or prosthetic limbs for people paralyzed by accidents or neurodegenerative disease. He sees the World Cup demo as a milestone for Brazilian science and a step toward making wheelchairs obsolete. He has compared it to putting a man on the moon.

Not everyone shares that view. Some researchers have raised scientific and ethical questions, including whether the technology is as advanced as Nicolelis claims, and whether the demo risks exploiting the participant or raising false hopes by promising too much, too soon.

“There’s a lot of people betting that it’s really nothing amazing, and it’s really just grandstanding,” said Daniel Ferris, a neuroscientist and biomedical engineer at the University of Michigan. Ferris says he’ll reserve judgment until he sees it.

But even for experts, the science behind the demo will be difficult to evaluate simply by watching it on TV.

For brain-machine interface researchers, the impressiveness of the demo depends largely on the degree to which the exoskeleton is controlled by the person’s brain. If the brain merely sends stop/go signals that prompt the robot to execute a programmed set of movements, that’s roughly in line with the current state of the science. Several exoskeletons that can allow a paralyzed person to walk (slowly) already are commercially available, and researchers have had some (modest) success starting and stopping them with signals from the brain. At the other extreme, if the person walks gracefully at a normal speed and can make adjustments on the fly–like if the ball moved just as they were about to kick it–that would be a phenomenal advance. 

When I visited his lab last year, Nicolelis was planning to use electrodes surgically implanted in the volunteer’s brain to record the signals of individual neurons in brain regions that control movement and use these to control the exoskeleton. However, his team now plans to use non-invasive EEG electrodes placed on the scalp instead.

There’s been a long-running debate in the field about which approach is better, and at least until recently Nicolelis came down strongly on the surgically implanted electrode side. Based on more than a decade of research in his lab, he argued that recording the activity of hundreds or even thousands of neurons individually provides richer information that can be harnessed to create more naturalistic movements. EEG, by contrast, picks up combined signals from millions of neurons across a much broader swath of the brain. (Scientists sometimes illustrate the difference with an orchestral analogy: implanted electrodes are like recording a symphony with a mic on just a few individual instruments, while EEG is more like recording the whole group with a single mic from outside the concert hall.)

Neither approach is perfect, and a major challenge facing brain-machine interface researchers is how to get enough information out of either type of signals to operate the sophisticated robotic devices engineers can now build. The neuroscience hasn’t yet caught up to the robotics.

Looming deadlines and the difficulty of organizing surgical implants forced Nicolelis’s team to switch to the less invasive EEG strategy, Alan Rudolph, vice president for research at Colorado State University and manager of the project, recently told MIT Technology Review.  “We are well aware of the limitations of EEG, but we decided to show what could be done,” he said. (Both Rudolph and Nicolelis declined to be interviewed for this article, citing time pressure, and in Nicolelis’s case an article I wrote last year that included concerns other researchers had raised about the project).

Even with EEG, there is potential for big improvements over current systems. One of the most developed exoskeletons is a Japanese device called HAL (Hybrid Assistive Limb), which is intended for people with only partial paralysis or extreme muscle weakness. “It has tiny sensors that are instrumented all over the exoskeleton, so it measures how your legs are pushing against the device,” Ferris said. Force sensors in the feet detect when a person leans forward, triggering the exoskeleton to move, sort of like a Segway. “You don’t have very much control.”

A few systems are designed to help people with complete paralysis. One of these, an exoskeleton made by Rex Bionics, has been tested with people wearing an EEG cap to control its movements. It’s remarkable, but slow (as you can see in this video). “It moves at a pace you’d get incredibly frustrated with,” Ferris said. “It takes several minutes to walk a few meters.”

If he had to guess, Ferris says no more than 30 such exoskeletons are in regular use by people with full paralysis. Regular use means they might use it for an hour or two at a time, he says. “You’re not going to see them wearing it every day, all day long.”

 

A big question is whether the planned World Cup demo will surpass what’s been done previously. The new exoskeleton was developed by roboticist Gordon Cheng of the Technical University of Munich, and a team of more than 100 scientists and engineers is working furiously to get it ready in time. They tested the first patient just two weeks ago, on April 30, according to an update (and awesome photo) on the project’s Facebook page. Hundreds, if not thousands, of people have posted encouraging messages and hopes that the research will benefit their loved ones.

“Results are much better than expected,” Nicolelis said in a recent post on the Brazilian government’s World Cup website. “We didn’t expect to be so far advanced from a clinical point of view, nor having such interesting results in neuroscience terms, the way we’ve had in the last few months.”

But there’s more than just science going on here. In Brazil, much has been made–both pro and con–of the social message sent by the demo.

“The main message is that science and technology can be agents of social transformation in the whole world, that they can be used to alleviate the suffering and the limitations of millions of people,” Nicolelis told the BBC last week. He has also said he sees it as an opportunity to give back to his native country, which he left at age 27 to study in the U.S., and to elevate the position of Brazilian science in the eyes of the world.

Preparations for the World Cup demo. Photo: Carol Delmazo /  World Cup Portal

Yet the World Cup preparations have occurred against a backdrop of growing unrest as Brazilians have been angered to see the government spend billions to build giant stadiums in cities where many people don’t have adequate sanitation, safe drinking water, and other basic services. The government contributed $15 million to the exoskeleton research project.

All of this rankles neuroscientist Edward Tehovnik of the Universidade Federal do Rio Grande do Norte in Natal, Brazil, who criticizes the demo on both scientific and social grounds. “Every day on my way to work I pass this mega stadium. It’s beautiful… like something from another planet,” he said. “And then I look over and I see garbage on the streets, I see potholes everywhere, I see houses collapsing, I see poverty.” Tehovnik says he wishes the money for the stadium and the exoskeleton demo had been used to build a hospital or a school, or something else that would benefit people in the long run. “For me, this represents an unfortunate thing, and this kid kicking the ball is an extension of all that.”

Tehovnik is one of the few scientists willing to speak critically about Nicolelis’ plan on the record. He has a tumultuous history with Nicolelis, who hired him in 2010 to work at a new neuroscience research institute Nicolelis founded in Natal, then fired him in 2012 after a falling out.

Other scientists have misgivings too, but they’re reluctant to voice them publicly. “This is a very complex topic where personal pride, jealousy, money and expected fame clash,” said one prominent neuroscientist, who agreed to comment only on the condition of anonymity. Ten scientists, including several prominent brain-machine interface researchers, declined to comment on the record or did not respond to interview requests for this article. Some of those who did respond cited fears of jeopardizing future grant applications or publications in a competitive field where scientific disagreements sometimes turn personal.

“I think there is a 50-50 distribution of the pro and contra issues,” the neuroscientist said of the World Cup demo, “but we all feel that distinguishing visionaries from charlatans can be told only two decades after such debates.”

In the meantime, here’s what to watch for if you tune in to see the demo in a few weeks. The speed and fluidity of the movements is a good gauge, Ferris says. (Watch the Rex Bionics video for a reality check–given the current state of the art, there’s virtually no chance we’ll see a paralyzed person bend it like Beckham).

Another important factor, he says, is the person’s level of disability. “Is this someone with a partial spinal cord injury who could maybe walk a little on their own, but now they’re walking with the exoskeleton around them?” That would make it very difficult to know how much the exoskeleton is contributing. It will also be nearly impossible to tell how much of the control is coming from the person’s brain, Ferris says. “You’re going to have to rely on what they claim.”

 

The dangerous impacts of social media and the rise of mental illnesses


by Mariel Norton

Less FaceTime, more face-to-face time.

Tallulah Wilson was just 15 years old when she took her own life back in October 2012. The gifted ballerina had been receiving treatment for clinical depression, but whilst creating an online fantasy of a cocaine-taking character, she began to share self-harm images on the social networking site Tumblr.

Shortly after her mother discovered Tallulah’s account and had it shut down, the teenager jumped in front of a train at St. Pancras station in London.

Back in 2002, Tim Piper killed himself at the age of 17. Following his struggle with depression, the student embarked on an online search for advice on how to commit suicide – later hanging himself in his bedroom. 

While there are several reasons for using social networking, it appears that its main function is increased contact with friends and family, along with increased engagement in social activities. However, research has shown that young adults with a strong Facebook presence were more likely to exhibit narcissistic, antisocial behavior, while excessive use of social media was found to be strongly linked to underachievement at school.

So if you take roughly 1.2 billion Facebook users and 450 million people suffering from mental disorders, what do you get? A global pandemic that’s showing no sign of slowing down anytime soon.

Cyberbullying is still on the rise

Those statistics come from the Pew Research Center and the World Health Organization respectively, and it’s frightening just how high the figures are – especially when you take into account the terrifying growth of online bullying.

Earlier this year, British charity ChildLine found cyberbullying to be on the rise, with children reporting 4,507 cases of cyberbullying in 2012-13 compared to 2,410 in 2011-12.

Why the increase? It appears that somewhere along the way, the privileges of social networking have been abused – both in terms of its meaning, as well as its victims.   

It was back around the 2005 mark that the technorati heralded the dawn of social media, reaping the benefits of real-time communication via a digital platform. As unfamiliar terminology took hold and the standard norms of interaction were transformed, choosing the appropriate profile picture suddenly became a first world problem, whilst others agonised over which hashtags best summed up their tweets.

Yet there were much more pressing issues that over time would manifest into the difficulties we’ve only just started to speak up about today. This year marked the world’s first ever #TimetoTalk Day, where for 24 hours on February 6, 2014, people were encouraged to start conversations regarding mental health in a bid to end the discrimination against mental illnesses.

While this is one instance where social media can be seen as a positive for mental health, there have been many other situations where social networking has not been such a good thing.

The effects of social networks on mental illnesses

A matter of contention prevalent within the media, social networking – Facebook in particular – has been shown in several studies to have detrimental effects on our wellbeing. Researchers from the University of Michigan assessed Facebook usage over a fortnight and found that the more people used it, the more negativity they experienced concerning their day-to-day activities – and, over time, the higher their levels of dissatisfaction with their life overall.

Meanwhile, a blog published on Everyday Mindfulness uncovered a fascinating concept known as the ‘discrepancy monitor’; “a process that continually monitors and evaluates our self and our current situation against a gold standard.”

 

In a nutshell, we evaluate our own experiences against what we believe our experiences should be. But when comparing our own circumstances against what we see on Facebook, we become our own worst enemy – as the digital persona portrayed on this social network only highlights the ‘best bits’ of a short Facebook timeline, in stark contrast to our entire life’s work.

Need any more proof of just how damaging social media can be? Look to DoSomething.org, America’s largest not-for-profit for the younger generation and social change. Its 9 Ways Technology Affects Mental Health article brings to light several ways social media afflicts mental health, including depression, isolation, insecurity and, more recently, FOMO, also known as “Fear Of Missing Out.”

Prevent social media addiction

The peer pressure to remain always connected – coupled with the 24/7 accessibility that mobile media provides – means that there’s always more than one resource available for users to get their digital fix, which unfortunately leads to the biggest demon of all: addiction.

While there’s a multitude of self-help guides preaching their own best practices for handling the negative aspects of social media, the resolution begins with learning to use social media when appropriate and remembering that health is wealth – physically, as well as mentally. 

My personal Wal-Mart nightmare: You won’t believe what life is like working there

by Pam Ramos

The president visited my store last Friday. He didn’t see how I sleep on my son’s floor and eat potato chips for lunch

When I woke up to see the news a few weeks ago, I could hardly believe it: President Obama was planning a visit to the Mountain View Wal-Mart where I work.

But the excitement quickly passed when I found out the store would be shutting down hours in advance of his visit. I wouldn’t be able to tell the president what it’s like to work at Wal-Mart and what it’s like to struggle on low wages, without the hours I need. I am living at the center of the income inequality that he speaks about so often, and I wanted to talk to him about how to change this problem.

My situation is not unlike that of many of the 825,000 Wal-Mart associates – and many other Americans – who are working hard, but just can’t keep up. Most of us aren’t even paid $25,000 a year even though we work at the largest employer in the country and one that makes $16 billion in profits.

I wanted to tell the president what it’s like working – and living – like this.

Things have always been tight. After four years working at Wal-Mart in Mountain View, I am bringing home about $400 every two weeks (I’d like to get more hours, but I’m lucky if I work 32 hours a week). That’s not enough to pay for bills, gas and food.  All I can afford to eat for lunch is a cup of coffee and a bag of potato chips. I’ve always done everything possible to stretch paychecks and scrape by. Sometimes it means not getting enough to eat.

But then I got some bad news that made stretching my budget impossible.

Two months ago, I started feeling ill. My doctor told me I needed to take a week off to have a series of medical tests. Every day for a week I went to the hospital and had to pay $30, $60 or $100 in co-pays for each appointment, test and X-ray.

These additional expenses, combined with going without a paycheck for the week I was out, pushed me over the edge. I didn’t have enough money to pay the rent.

Right now, I don’t have a place to call home.

I sleep on the floor of my son’s living room because I can’t afford my own place. All of my belongings are in my car. I don’t know where to send my mail.

I used to think, “At least I have my health and my family.” But my doctor thinks I may have colon cancer, and with all of the money I still owe the hospital, I’m not sure how to finish the tests and get treatment. Even though I do have insurance through Wal-Mart, the co-pays are more than I can afford with only $400 every two weeks.

I wanted to tell the president I am scared. I am scared for my health. I am scared for the future for my grand kids. And I am scared and sad about the direction that companies like Wal-Mart are taking our country.

I don’t wish the struggle I’m facing onto anyone. But sadly, my situation isn’t unique. I know that I am one of many living in the Wal-Mart economy who have no financial stability. We expect to work until our deaths because we don’t have any retirement savings, and we worry about the future facing our children and grandchildren.

There are so many of us who have it so hard – trying to live paycheck to paycheck. While the president is here visiting my store, I want him to look inside at what is really happening at Wal-Mart.

I want the president to help us and tell Wal-Mart to pay us enough to cover the bills and take care of our families. That doesn’t seem like too much to ask from such a profitable company, a company that sets the standard for jobs in this country. And I hope it’s not too much to ask from a president who believes that income inequality is the defining challenge of our time.

Pam Ramos has worked for four years at the Walmart in Mountain View, California. President Obama visited this Walmart on Friday May 9, 2014. Ramos is also a member of OUR Walmart, a worker organization calling on Walmart to publicly commit to paying workers $25,000 a year, providing full-time work and ending illegal retaliation.

Defending Jack: Why trading a cow for beans was not such a bad idea after all

by Dorothy Keine

We all heard the story of Jack and the giant beanstalk growing up, and at one point thought, “A cow for beans? What an idiot!” But perhaps Jack was just ahead of his time.

Our tale begins with the humble antibody…

Antibodies are the body’s way of seeking out intruders and illness in order to neutralize or destroy threats. They play a sort of good cop/bad cop role in our lives. Not only do they help the body combat disease (good cop), but they also play a significant role in auto-immune diseases (bad cop). Their presence can often be used to diagnose many diseases as well.

Antibodies can also be manufactured to identify almost anything. They are commonly used in labs to label cells and proteins, make things glow through immunofluorescence, or link up to facilitate chemical reactions.

The most stunning use of antibodies currently can be found in disease research and treatments. Antibodies are on their way out of the lab and are being used as therapeutic molecules for a whole host of diseases. Here is where Jack might have been on to something.

Plantibodies: the magical bean

Historically, antibodies have been made by injecting an antigen into an animal such as a rabbit. The rabbit would then naturally produce antibodies against that antigen. These antibodies could then be purified from the animal’s blood (this does not kill the animal) and used for disease treatment or research.

The first downside of this method is the upkeep of all the animals needed to produce large amounts of antibody. And since the animals used are closely related to humans, there is also the possibility of contamination: the animal might have an illness that the researchers are not aware of, which could then be passed on through its blood. The highest precautions are taken to prevent this, but it is still a possibility.

But now, scientists are trading in the cow for the beans.

Plantibodies are human antibodies produced by genetically engineered plants. Unless you have a very strange family history, you are not closely related to any plants. This means that plants cannot pass on their diseases to people. They are also cheap to grow and can be raised in mass quantities.

How do they make these magical beans, you might be wondering? Scientists genetically engineer plants to carry the animal gene that encodes the antibody of interest. The plant’s own cellular machinery then expresses and assembles the antibody, complete with the plant’s own version of glycosylation (the process that decorates the protein with sugar chains). The antibodies can then be purified from the plants and used just like any other antibody.

Tobacco’s new job: Saving your life.

Tobacco, understandably, has gotten a bad rap in the past few decades. Killing people is not very beneficial for anyone’s reputation. But that is all about to change. Tobacco has found a new PR manager in plantibodies.

The cost of producing antibodies in animals is high. Labs often pay thousands for small amounts. Because of this high production cost, treatable diseases like rabies are often a death sentence in developing countries. The low cost of plantibody production could change all of this.

Tobacco has been cultivated for centuries now. It is low cost, and can easily be grown in vast amounts. It can also be used to create antibodies. In fact, tobacco already has been used in labs to create antibodies (and potential treatments/cures) for Ebola, rabies, and West Nile.


Eat your antibodies

We have just scratched the surface of what plantibodies can do. Tobacco, maize, soybean, and alfalfa plants are expected to yield over 10 kg of therapeutic protein per acre, and the cost of this production is expected to be only 1/10th of the cost of production in animals.

Plantibodies can also be made by modifying seeds. Instead of introducing the antibody genes into a grown plant, they are placed inside seeds. When a seed later germinates, it has everything it needs to create plantibodies, which would allow for easy production and growth. These seeds can be stored for years and still be viable; if there is ever an epidemic, we could just break out the seeds and start growing. Seeds are also much easier to ship: they could be engineered in a lab in the US and then shipped to developing countries at little expense, enabling those countries to grow their own therapies.

As mentioned before, plants cannot contract or spread human diseases, so there is no worry about production spreading diseases like HIV, prion diseases, or any other blood-borne pathogens.

Perhaps the most interesting application of plantibodies is the possibility of eating your treatment, no shot required. If antibodies could be produced in bananas or lettuce, you could treat or even prevent West Nile at meal times. No band-aids required.

So perhaps Jack wasn’t so foolish; he just wanted to cure the world. Though perhaps he should double check his science before planting the beans next time.

So why haven’t we moved ahead?

Drug discovery takes a long time and a lot of money, and most candidate drugs fail. Even if you have a miracle drug, it takes a long time to get FDA approval.

Currently the biggest problem for plantibody production is that it falls under GMO regulations. And with so many states and groups pushing back against GMOs, plantibody production faces many hurdles. While most drugs only have to get approval from the FDA to be deemed safe, plantibodies are regulated by the FDA, EPA, and USDA.

While all of these regulations are put in place for public safety, perhaps it is time to revisit the regulation on plantibodies to make research a little easier. I personally would much rather eat a salad than get a shot any day.

Sadly, these plantibodies have not yet led to any magical giant bean stalks, but research is surely underway.

Solar-powered roads: Coming to a highway near you?

By Teo Kermeliotis

As a kid growing up in the mid-1960s, Scott Brusaw would spend hours setting up miniature speedways on the living room carpet so that he could race his favorite slot cars up and down the electric tracks.

“I thought that if they made real roads electric, then us kids could drive,” recalls Brusaw, who grew to become an electrical engineer. “That thought stuck with me my entire life.”

Fast forward to the mid-2000s: with the debate over global warming in full swing, Brusaw’s wife, Julie, asked him whether he could build the electric roads he’d concocted as a child out of solar panels. Brusaw initially laughed off the idea — but not for long.

With an airplane’s black box in mind, the couple started mulling over the possibility of creating a super-strong, solar-powered case that could house sensitive electronics. They explored the idea of embedding solar cells to collect energy inside the case, LEDs to illuminate the road lines, and heating elements to resist ice and snow — soon after, the concept of Solar Roadways was born.

The couple’s proposal calls for traditional petroleum-based asphalt highways to be replaced with a system of structurally engineered solar panels. These would act as a massive energy generator that could feed the grid during the daytime. They could also recharge electric vehicles as they drive over the panels, helping to reduce greenhouse emissions drastically.

“Our original intent was to help solve the climate crisis,” says Brusaw. “We learned that the U.S. had over 72,000 square kilometers of asphalt and concrete surfaces exposed to the sun. If we could cover them with our solar road panels, then we could produce over three times the amount of energy that we use as a nation — that’s using clean, renewable energy instead of coal.”
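As a rough sanity check on that claim, here is a minimal back-of-the-envelope sketch in Python. The 72,000 square kilometers comes from Brusaw’s figure above; the insolation, panel efficiency, system losses, and U.S. electricity consumption below are my own ballpark assumptions, not Solar Roadways numbers.

# Back-of-the-envelope check of the "three times the energy we use" claim.
# Only the surface area comes from the article; everything else is an
# assumed ballpark figure.
surface_km2 = 72_000                  # asphalt/concrete exposed to the sun (from the article)
surface_m2 = surface_km2 * 1e6

insolation_kwh_m2_day = 4.5           # assumed average US solar resource
panel_efficiency = 0.15               # assumed efficiency for road-grade panels
performance_ratio = 0.75              # assumed losses from dirt, shading, inverters

annual_output_twh = (surface_m2 * insolation_kwh_m2_day * 365
                     * panel_efficiency * performance_ratio) / 1e9

us_electricity_twh = 4_000            # assumed annual US electricity use, in TWh

print(f"Estimated annual output: {annual_output_twh:,.0f} TWh")
print(f"Roughly {annual_output_twh / us_electricity_twh:.1f}x US electricity consumption")

Under these assumptions the panels come out at roughly three times annual U.S. electricity consumption; if "energy we use as a nation" means total primary energy rather than electricity, the multiple would be smaller.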

The Idaho-based couple received their first government contract to work on the project in 2009, and have been working to perfect it ever since. Initially, they joined forces with researchers to develop a super-strong textured glass that would offer cars the traction they require. Then they fitted LED road markers, so that highway lines would not have to be painted over the cells and destroy them, and added heating elements to warm the surface and keep the system working through winter.

Now, the pair is hoping to raise enough funds on the crowdfunding site Indiegogo to gear up production, following the successful test of their latest prototype: a Solar Roadways parking lot laid next to their electronics lab.

“They [solar panels] prevented snow and ice accumulation this past winter and are producing the expected amount of power — the parking lot is equivalent to a 3600W solar array,” says Brusaw, who’s hoping to be ready for production later this year or early 2014.

“The panels have passed load testing for vehicles weighing up to 125 tons without breakage,” he adds. “Our textured surface has been traction tested and can stop a vehicle traveling at 128 kph on a wet surface within the required stopping distance.”

Brusaw says solar road panels could theoretically be laid anywhere — from motorways and parking lots to pavements and playgrounds. He believes that such a prospect could transform the existing motorway infrastructure, prevent accidents and ultimately help save the planet from an environmental disaster.

“In the U.S., roughly half of greenhouse gases are generated by burning fossil fuels to create electricity,” he says. “Another 25% comes out of our tailpipes,” adds Brusaw. “By replacing coal with solar and making electric vehicles practical — which could lead to the end of internal combustion engines — we could theoretically cut greenhouse gas emissions by up to 75%.”

Brusaw admits that “in the beginning about half of the people thought we were geniuses and the other half thought we were nuts,” before quickly adding that now “the vast majority of reactions are positive and supportive.”

“I think that many people expect their governments to solve the world’s problems, but the climate crisis is getting worse and our politicians seem baffled,” he says. “Many of our greatest technologies were created in someone’s garage. That’s where Solar Roadways was born and we think that we finally have a solution to the causes of global warming.”

Human Progress Is a Myth

By Johannes Niederhauser

Haven’t we humans come such a long way? In the past 200 years alone we’ve managed to abolish slavery (by moving it to the sweatshops of the Third World), rid our lives of industrial pollution (by moving it to the factories of the Third World) and introduced peace, human rights, and democracy to various undeveloped hinterlands through long, mindless, bloody conflicts.

We really are the sparkling glint of diamond in the otherwise shabby lump of coal that is the modern world, and anyone who hasn’t tasted the ethical sweetness of Western progress surely will soon, presumably via extended bombing campaigns. We have Fair Trade acai berries, high-speed internet, and pop-up scrunchie markets; we are still basking in the afterglow of the Enlightenment, while the rest of the world drags its feet through the Dark Ages.

The latest book from John Gray, a noted political philosopher, author, and regular contributor to the Guardian and the New Statesman, is about how all of that is bullshit. The Silence of Animals deals with the touchy subject of human progress, which, Gray asserts, is a myth. Considering that there seems to have been genuine progress in the fields of science, medicine, and technology, I was a little confused by that, so I called him up for an explanation.


John Gray’s book, The Silence of Animals

VICE: First of all, could you explain what you mean by the term progress and why you think it’s a myth? 
John Gray: I define progress in my new book as any kind of advance that’s cumulative, so that what’s achieved at one period is the basis for later achievement that then, over time, becomes more and more irreversible. In science and technology, progress isn’t a myth. However, the myth is that the progress achieved in science and technology can occur in ethics, politics, or, more simply, civilization. The myth is that the advances made in civilization can be the basis for a continuing, cumulative improvement.

Do you have any examples to back that up?
Take slavery. If you achieve the abolition of slavery, you can then go on to achieve democracy. Again, the myth is that what’s been achieved is the basis for future achievement. My observation of history is that this isn’t the case for civilization. Of course, I strongly support advances in civilization, like the emancipation of women and homosexuals and the abolition of torture, but all that can be easily swept away again.

I see. So there was the supposed abolition of torture in the US, but it came back again in Guantanamo Bay and Abu Ghraib.
Exactly. There were genuine advances in ethics and politics, but they were lost in an instant. And the ban on torture was notoriously “relaxed” by George W. Bush and his gang in the world’s greatest democracy, not by some obscure dictatorship. Back in February 2003, before the Iraq invasion, I published a spoof article called “A Modest Proposal” in the style of Jonathan Swift. Based on actual developments in liberal political theory at the time, I argued that torture would be needed as an instrument in the struggle for worldwide human rights.

What were the reactions like?
People found it ridiculously, perversely misanthropic and pessimistic. Even those who perceived it as satirical, which it was, of course.

Typical liberal humanist outrage. Some topics are just untouchable.
Less than a year later, however, pictures from the tortures in Abu Ghraib came out. It wasn’t difficult to foresee that this would happen—hence that great achievement of the necessary ban of torture is very easily forgotten. Something like torture, which is completely beyond the boundaries of civilization, can become renormalized at any time.

I sometimes think of us people in the West as masters of self-deception.
Oh, sure.

We perceive ourselves as highly ethical. Shop at the right store and you feel like you’re saving the world, but the clothes we wear are being produced by wage slaves in the Third World. We’ve only outsourced slavery, we haven’t banned it.
Yeah, we’ve changed the name, and we’ve outsourced it. And we’ve done the same thing with pollution. We’ve outsourced heavy industrial pollution to China and India so that we can be very clean. But they produce the goods that we actually use. I feel the same as you, but I wouldn’t describe it as hypocrisy because hypocrisy suggests we actually know what the truth is.


Tony Blair and George Bush, bringing progress to the wider world.

So it’s even worse—complete delusion. 
Think of Tony Blair. People regard him as a liar, but I believe that’s too much of a compliment. I think he lacks the moral development to engage in falsity. Whatever he spoke, he believed. Another example: many people on the political left think of Western intervention in Iraq, Libya, and Afghanistan as concealed resource grabbing. But many people—commentators, politicians, certainly Blair—believe that they’re actually promoting enlightenment and progressive values throughout the world.

You don’t think it’s concealed imperialism?
No, because it’s done with a genuine belief that their beliefs are the beliefs of the world. They suggest that Western countries, despite the great problems they have, are the meaning of history. That is a myth. Western countries do have many virtues—we’re having this conversation, which we couldn’t do in many other parts of the world—but they also have great difficulties. Behind every myth, there is this demand for the meaning of life.

But human beings need meaning in life.
I’m not denying that we as human beings can create meaning ourselves, but there’s no ultimate meaning inscribed in the universe or in history. My advice to people who need a meaning that’s beyond what they can create is to join a religion. On the whole, those are older and wiser myths than secular myths like progress.

Would you say the myth of progress is sort of a religion in itself?
Oh yes, it is. Our secular myths are just religious myths rebottled, but with most of the good things taken out.

So, in that sense, is contemporary atheism also a religion?
Atheists always turn red when I call atheism a religion. If atheism means what it should mean—to not have any use for the concept of God—then, in that sense, I am an atheist. But I’m not an evangelist. The fact that there were buses going around London saying “There is probably no God” is completely ridiculous. You can definitely call atheists religious when they’re being evangelists and trying to convert the world to their belief.


London’s atheist bus, photo by Jon Worth at atheistbus.org.uk.

Are they ever successful?
No. The biggest conversions taking place at the moment are Africans to Islam and many Chinese to Christianity. So atheism is a side joke of history compared with that. What we see today is rather a huge expansion of traditional religion. Atheism is a media phenomenon.

Let’s turn to another modern myth: happiness. I recently went to some talk titled “Will happiness save the world?”
That’s wonderful.

Shouldn’t we just give Prozac to everybody so that they’re all equally happy?
I don’t think that happiness—in the modern sense—is at all a sensible goal. Because when happiness is the goal, the risk is that people become more cautious and less adventurous than they would be. Happiness is a myth of satisfaction. The belief is that if you arrange your life in a certain way, then you’ll have a certain state of mind. But I think a more interesting, fulfilling way to live is to just do what interests you because you never know in advance whether you’ll be satisfied or not.

Yet so many people try to calculate their lives and perceive themselves as optimize-able machines.
Yeah, people seem to wish to be able to engineer their own mental state. And, as you say, if you start doing that, then it’s a quick step to chemical engineering. As we know in America and elsewhere, there’s an enormous amount of antidepressants used, which people didn’t do in previous times. Does that mean they were depressed all the time?

Oh yeah, definitely.
I’ve called progress the Prozac of the thinking class before. People have often said to me, “If I didn’t believe in progress, I wouldn’t be able to get up in the morning.” But as the belief in progress is only about 200 years old, I usually ask them, “What were people doing before that? Lying in bed all the time?”

That’s the heritage of the Enlightenment. Everything before that was just the Dark Ages.
Oh yes. The whole of human history up until the Enlightenment was a prelude to us. It’s completely absurd. The Enlightenment would have it that everybody who lived before it—people like Shakespeare—was an undeveloped human being. The proponents of the Enlightenment and the idea of progress like to think that they are an important chapter in this vast historical narrative. In fact, that narrative doesn’t exist. It’s a rather silly fiction.

Isn’t the belief that everything will get better and that the world is now moving toward a blessed end state kind of schizophrenic, in the sense that we’ve actually been living in a deep crisis since the 1970s?
The rapid movement in technological advancements creates a phantom of progress. Phones are getting better, smaller, and cheaper all the time. In terms of technology, there’s a continuous transformation of our actual everyday life. That gives people the sense that there is change in civilization. But, in many ways, things are getting worse. In the UK, incomes have fallen and living standards are getting worse.

And advances in technology don’t mean that things are necessarily getting better in the grand scheme of things.
Oh, absolutely. Technological progress is double-edged. The internet, for example, has more or less destroyed privacy. Anything you do leaves an electronic trace.

Some people even want their mind to be transferred into the Internet to be digitally immortal.
That’s kind of moving in a way, but also utterly absurd. Even if it were possible to upload your whole mind on to a computer, it wouldn’t be you.

There seems to be a wide misunderstanding of what it means to be yourself.
Yes. You haven’t chosen to be the self that you are. You’re irreplaceable. You’re a singularity. We are who we are because of the lives that we have. And that involves having a body, being born, and dying.

Especially dying.
Yes, especially. A lot of contemporary phenomena, like faith in progress, are really attempts to evade the reality of death. In actuality, each of our lives is singular and final; there is no second chance. This is not a rehearsal. It’s the real thing. It’s very easy to escape that reality with religion, but I don’t despise that—if that’s what you need, then that’s fine. What I certainly do despise is the emergence of religious needs in people who despise religion. But not the needs themselves.

Uploading yourself to cyberspace sounds a lot like going to heaven when you die.
I don’t live my life on the basis of religion, and I don’t subscribe to it at all. But the idea that human beings will be recalled from the dead at the end of time is more credible than virtual immortality because it’s supposed to be mysterious—it’s supposed to be a miracle and inexplicable. In that sense it’s a more credible view than this utterly bizarre nonsense about uploading our minds into cyberspace. I mean, what is the reality of cyberspace? Cyberwar. In fact, everything behind all that talk about virtual immortality is a Christian worldview.

The Geek shall inherit the Earth

By Sebastian Anthony

Modern society is massively complex. We like to pretend that our mastery of tools and technology has made life easier or better, but in actuality it has never been harder to simply live life.

Above all else, tools and technology give us choices, and more choice means more complexity. 10,000 years ago, life basically consisted of hunting, eating, and procreating. Stone arrowheads were the state of the art, tool-wise. Over time, new materials were mastered and new tools devised (iron, paper, plastics, computer chips) — and society grew increasingly complex (trade, diplomacy, religion, global media).

Tools are force multipliers — and our tools and technologies are now so advanced that the tiniest of human machinations can have worldwide repercussions. In the past, your actions very rarely affected anything or anyone beyond your immediate vicinity — today, a single photo shot by a smartphone and uploaded to Facebook can change the world. In the past, tools had very specific purposes — today, thanks to monstrously powerful general purpose hardware and operating systems, our computerized tools can perform an almost infinite number of tasks, often at the same time, and usually without us even being aware that they’re being performed.

For each will have to bear his own load

It’s cliched, but with great power comes great responsibility — but to put it bluntly, most mere mortals simply have no idea how to handle the overwhelming power of modern devices. Do you know someone who has sent an embarrassing email or picture message to the wrong person, or misunderstood the privacy settings on their Facebook or Twitter feeds? How many of your friends know what really happens when you push the power button on your PC, or press play on Spotify?

It wasn’t so long ago that most people completely understood every aspect of their tools — and this was reflected in their proficiency with those tools. Today, there probably isn’t a single person alive who could tell you exactly how to make an LCD monitor, let alone a whole computer — and likewise, there are very few people who know how to properly use a computer. A modern personal computer outputs more data and has more functionality than a 1970s supercomputer that would have been operated by a dozen engineers — and yet in today’s always-connected, ubiquitously digital world, we expect a single, relatively uneducated person to somehow use these devices effectively.

And yet somehow… miraculously… it actually works. Yes, people still screw up, crashing their cars while texting or picking up malware on their computers, but despite the occasional faux pas, we make surprisingly good use of our tools and technologies.

Partly, this is down to the near-infinite adaptability of mankind — but it’s also down to the geeks. Human civilization has always had elders who guide their spiritual children safely through life’s perils. In the olden days, these wise men and women would educate their communities in the ways of the world — how to grow crops, how to nurture children. In today’s hyper-advanced society, geeks are our sages, our shamans, our technocratic teachers.

The geek/scientist last supper

The geek is my shepherd

Now more than ever before, the only way that we will successfully navigate technological pitfalls and make it out in one piece is if we listen carefully and follow in the footsteps of the geeks, the shepherds of society. This is quite a burden for geeks, who obviously have a better grasp of the underlying science and wizardry, but they’re still being buffeted by the same startling rate of advancement and myriad ethical and moral repercussions that hyper-advanced technology is thrusting upon the rest of us.

As our shepherds, geeks must assimilate our technological advances, and then quickly provide guidance for the rest of us. You can probably remember a time when you asked a geek for advice on your next PC, whereupon he gladly imparted to you the latest hardware, software, and peripheral wisdom. Or maybe you’re the geek that people come to, seeking counsel.

Today, with the exponential effect of Moore’s law and the emergence of pervasive, ubiquitous computing, it’s a little more complicated. It’s no longer a matter of the fastest computer or the largest hard drive; we’re now talking about ecology (power usage, recycling), privacy (social sharing, behavioral targeting), and other philosophical quandaries that most geeks really aren’t ready for. Five years ago, almost every geek would have agreed on which CPU was the fastest (the Core 2 Duo) — but today, ask three geeks which mobile OS is the best, or what your Facebook privacy settings should be, and you’ll get three very different answers.

Seb, during his Messianic phase

This isn’t necessarily a bad thing. As our interactions with hyper-advanced technology shift from the hard sciences underpinning hardware (chemistry, physics) to the soft sciences that govern software (sociology, psychology, law), it’s understandable that absolute answers are harder to come by. It isn’t vital that geeks always give the right answer, anyway: the main thing is that they know enough to give useful advice.

Ultimately, for those of us who are non-geeks, the real takeaway here is that we’re beholden to the wish and whimsy of our geeky compatriots. This has been the case throughout history, though, with the masses following in the footsteps of just a handful of wise men. Geeks have taken over the mantle now, and for better or worse there isn’t much we can do about it. It would seem that geeks are doing a pretty good job so far, though.

If you’re a geek, however, remember that you are society’s ace in the hole; a shepherd who will gently guide us through the uncertain, ever-shifting mists of bleeding-edge tech, but also a captain who will ride out any storms that we suddenly find ourselves in. This is a lot of responsibility to bear, but like the priests, village elders, and witches that came before you, you will do the job, and you will hopefully do it to the best of your capability. Pay heed: Your actions will directly affect the adoption (or not) of technology, thus shaping the future of human civilization.

No pressure, geeks. No pressure.

 

Spider-Man, Rhino and What It Takes to Power an Exoskeleton

By E. Paul Zehr

“Get your mechanized mitts in the air!”

— Spider-Man to Rhino in The Amazing Spider-Man 2 (2014 Sony Pictures)

Created by Stan Lee and Steve Ditko and appearing initially in a story by Lee with art by Ditko in Amazing Fantasy #15 in August of 1962, Spider-Man has been a hugely popular and ever quirky superhero. You can’t help but love Spider-Man’s accessible character, played to great effect by Andrew Garfield in the recent Sony Pictures films The Amazing Spider-Man (2012) and The Amazing Spider-Man 2 (2014).


(Thwip! Thwip! Warning: this post may contain some small spoilers for The Amazing Spider-Man 2… please read on at your own risk…)

But this post isn’t so much about ol’ Web Head as much as it is about Spider-Man’s fantastic gallery of super-villainous bad guys. Or, I should say, one of Spidey’s more obscure enemies. I don’t mean Electro, Sandman, Vulture, Mysterio, Hobgoblin or Doctor Octopus (my favorite in this list). It’s not about any one of these “Sinister Six”. Instead I want to look into Rhino, who plays an interesting but admittedly peripheral role in The Amazing Spider-Man 2.

Rhino (aka Aleksei Sytsevich) debuted in 1966 in the pages of The Amazing Spider-Man #41 and is portrayed in The Amazing Spider-Man 2 by the excellent Paul Giamatti. Even though Rhino may not be the smartest of villains—Mike Conroy in his book “500 Comic Book Villains” used the adjective “dimmest”—there are some really compelling things about him.

The main one for me is Rhino’s exoskeleton. I’m hugely captivated by exoskeletons and their applications with real biological bodies. I wrote about exoskeletons in some of my other guest posts here at Scientific American (“Assembling An Avenger”, “Iron Man Extreme Firmware Update 3.0”, and “The Exoskeletons Are Here!”). The way Rhino is portrayed in The Amazing Spider-Man 2 galloping through New York City in broad daylight really does put a spotlight on exoskeletons.

While the surface view of Rhino’s powered-up and armored exoskeleton—like that of Iron Man—grabs all the attention, I want to get beneath the surface here and talk a bit about the power problem. How could you provide enough energy to power up the robotic exoskeleton of a raging Rhino?

When I wrote Inventing Iron Man, I spoke to my colleague Jim Kakalios, the friendly neighborhood physics professor and author of The Physics of Superheroes, about this. Jim was quick to explain that:

“Energy storage in batteries has dramatically lagged behind information storage. If batteries had followed Moore’s Law, which describes the increase in density of transistors on integrated circuits, with a doubling in capacity every two years, then a battery that would discharge in one hour in 1970 would last for over a century today. Ultimately, if we don’t want to wear licensed nuclear power packs on our backs, we are limited to chemical processes to run our suit of high tech armor, and in that case we must either sacrifice weight or lifetime.”

I’ve tried to represent this issue in the figure to the left, where the red line for CPU power can be seen increasing at a rate far beyond that of battery energy density, drawn in blue (data from Starner and Paradiso 2004).
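To put a number on the “over a century” figure in Kakalios’s quote, here is a minimal sketch using only his doubling-every-two-years rule and his hypothetical one-hour battery of 1970; the 2014 end year is my assumption for when this post appeared.

# If battery capacity had doubled every two years, Moore's-Law style,
# how long would a 1970 one-hour battery run today?
baseline_year = 1970
baseline_runtime_hours = 1            # Kakalios's hypothetical 1970 battery
current_year = 2014                   # assumed date of this post

doublings = (current_year - baseline_year) / 2
runtime_hours = baseline_runtime_hours * 2 ** doublings
runtime_years = runtime_hours / (24 * 365)

print(f"{doublings:.0f} doublings -> about {runtime_years:,.0f} years of runtime")
# ~4.2 million hours, i.e. several centuries -- comfortably "over a century"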


Fully developed armored exoskeletons like Rhino’s need a lot of power. To give you an example of how much energy would actually be needed to power something as simple as stepping movement in a robotic, exoskeletal body, consider the work of Masato Hirose and Kenichi Ogawa at Honda Research and Development. In 2007 they estimated that the Honda P2 humanoid robot (two prototypes before the well-known ASIMO robot) consumed nearly 3,000 watts during walking, giving it an operational time of only 15 minutes. Sustained for a full hour, that rate of consumption works out to about 2,600 kcal, roughly the daily energy expenditure of an average North American male!
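For scale, here is the unit conversion behind those figures as a quick sketch; the 3,000 W draw and 15-minute runtime are the reported numbers, and the rest is just arithmetic.

# Converting the Honda P2's reported power draw into food-energy terms.
power_w = 3_000                       # reported draw while walking
runtime_min = 15                      # reported operational time
joules_per_kcal = 4184

kcal_per_hour = power_w * 3600 / joules_per_kcal
kcal_per_run = power_w * runtime_min * 60 / joules_per_kcal

print(f"One hour at 3 kW: {kcal_per_hour:,.0f} kcal")    # ~2,580 kcal, about a day's food energy
print(f"One 15-minute run: {kcal_per_run:,.0f} kcal")    # ~645 kcal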

So while we wait for cold fusion (sorry, “low energy nuclear reactions”) and the invention of Tony Stark’s arc reactor, what kind of additional tweaks should be built into the exoskeleton of any self-respecting and science-grounded super villain?

The incremental answer is energy harvesting! When Rhino is rampaging around New York City making mayhem, his arms and legs are doing both positive and negative mechanical work. Basically, his positive work powers his propulsive stepping, while the negative work occurs during the recovery from each stepping motion. This is an important part of the neuromechanics of moving, but it kind of seems like wasted energy.

But what if there was some way to harvest this wasted energy more directly for other purposes? Over many years, researchers have been thinking about this very question using power from people. This includes ideas around harvesting heat energy, energy from breathing, blood pressure, inertia from moving, arm motion, and energy at heel strike while walking.


A team headed by Dr. Max Donelan at Simon Fraser University developed a device—one of TIME Magazine’s Top 50 best inventions of 2008—that’s strapped around the knee. This “bionic power” energy harvester is basically a frame and generator system mounted on a modified orthopedic knee brace (see image at right). In application, a user would wear one of the 850-gram devices on each leg. Using a clutch and generator system, the negative work during walking is captured and stored. Current devices can generate a maximum of 25 watts of electricity. At a comfortable walking speed, 12 watts can be generated, which over one hour would be enough to charge four cell phones.
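As a quick check on that four-phone figure, here is a minimal sketch; the roughly 3 Wh phone-battery capacity is my assumption for a circa-2008 handset, not a number from Donelan’s team.

# How far does one hour of harvesting at a comfortable walking pace go?
harvest_power_w = 12                  # reported output at a comfortable walking speed
hours = 1
phone_battery_wh = 3.0                # assumed capacity of a circa-2008 phone battery

energy_harvested_wh = harvest_power_w * hours
phone_charges = energy_harvested_wh / phone_battery_wh

print(f"{energy_harvested_wh} Wh harvested -> about {phone_charges:.0f} full phone charges")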

Donelan agreed that the energy harvesting capacity of his device is “still far less than Rhino would need to take on Spider-Man.” But, he added, “Rhino could use it to charge his phone so he could call in his villain posse to kick some Spidey-butt.”

In the real world, though, this is enough electrical energy to power various mobile devices or an artificial limb. Without question, Rhino’s exoskeleton, regardless of its major power source, would need to make use of this very efficient kind of power storage and reuse.

Using the alliterative naming convention of the great Stan Lee—who gave us both Peter Parker and J. Jonah Jameson—let’s end by saying every little bit of current counts and we can have no watts wasted.