What Does It Mean That 1 in 4 Adults Didn’t Read a Book Last Year?

By Casey N. Cep

Last week, the Pew Research Center released its survey on America’s reading habits. Depending on whom you asked, the survey either exposed the age of illiteracy or revealed the rise of the literati. Those tearing their tunics and pulling their hair bemoaned that 24 percent of adults did not read a single book last year; others sang praises to the heavens that 76 percent of adults still read books.

About three-quarters of American adults read at least one book, whether print, electronic, or audio, in the last year. The typical reader consumed five. E-books are on the rise; audiobooks remain popular; and even most of those who consume books in these newer formats continue to read printed books. Half of those surveyed own a tablet or e-reader, and 92 percent have a mobile phone.

The Pew survey is a fascinating look at how often and by what means we read, though it reveals little about what we actually read. According to Nielsen BookScan, the five best-selling books of 2013 were: Jeff Kinney’s Diary of a Wimpy Kid: Hard Luck (the eighth book in the series); Dan Brown’s Inferno; Bill O’Reilly and Martin Dugard’s Killing Jesus; Eben Alexander’s Proof of Heaven; and Rick Riordan’s The House of Hades. Look at Amazon sales, and the list is a little different: Tom Rath’s Strengths Finder 2.0; Sheryl Sandberg’s Lean In: Women, Work, and the Will to Lead; Kinney’s Diary of a Wimpy Kid: Hard Luck; Rush Limbaugh’s Rush Revere and the Brave Pilgrims: Time-Travel Adventures With Exceptional Americans; and Sarah Young’s Jesus Calling: Enjoying Peace in His Presence.

If we are what we read, then Americans are wimpy, religious, ambitious, self-improving, and patriotic. The specific possibility that the only book any adult read last year was one of the best-sellers on the Nielsen or Amazon list is perhaps more disheartening than the shapeless fact that three-quarters of the American population read at least one book. But reading is reading, no matter what is read, and the Pew study looks specifically at books when no doubt most of those surveyed read something in the last year, even if it wasn’t books.

The availability of well-reported stories and long-form journalism, as well as fiction and poetry, online suggests that our literary diets might have evolved, not become emaciated. A decade ago, you might have heard a story on the news or read an article in your daily newspaper that led you to buy a book on the subject. Now the same tidbit, however you find it, sends you to the Web to find context from many sources and opinions from multiple writers. The deep analysis we associate with books can be found scattered across the Internet like chapters without a binding, an endless collection of ideas that we edit ourselves.

In the past, your collective library could be seen on your shelves and in your loan history at the library. Now it’s in various browser histories, divided between devices, stretched between pages of books, audio files, emails, and PDFs. The complexity of the Pew survey shows how much more difficult it is to track “reading” in the Digital Age.

However, the most critical measure of our reading culture is not necessarily the amount read, in whatever format, but the ability to read. Last April, the United States Department of Education and the National Institute of Literacy found that 32 million Americans, about 14 percent of adults, cannot read, while almost a quarter of American adults read below a fifth-grade level. In fact, literacy rates in America haven’t risen much in two decades.

Those sobering figures complicate whatever judgments we might make about last week’s Pew study. Our reading habits reflect not only our choices, but also our abilities. More and more, they also reflect our access. Acquiring books, new or used, may seem like an inexpensive venture for most, but for others the cost is prohibitive. The free alternatives, public libraries, are chronically underfunded: local branches are closing, while opening hours and staff are increasingly limited. To assume that the 24 percent of adults who did not read a single book last year chose to do so is to ignore the likelihood that some of those surveyed cannot read or could not access a book.

Suppose, though, a sizable portion of those surveyed can read, but don’t, or have access to books, but choose not to read them. Once we’ve funded literacy programs and adequately supported public libraries, then how do we nurture a reading culture?

Some of the specifics of the Pew survey offer ideas; the detailed analysis found that: “Women are more likely than men to have read a book in the previous 12 months, and those with higher levels of income and education are more likely to have done so as well.” Education is critical to cultivating a culture of reading, not only basic literacy, but a love of reading. The so-called “language gap” begins early, some believe as early as the first 18 months of a child’s life, and it only grows. If we truly want a nation of readers, then our conversations about inequality ought to focus on education, not only income.

What makes us human? Unique brain area linked to higher cognitive powers

Oxford University researchers have identified an area of the human brain that appears unlike anything in the brains of some of our closest relatives.

The brain area pinpointed is known to be intimately involved in some of the most advanced planning and decision-making processes that we think of as being especially human.

‘We tend to think that being able to plan into the future, be flexible in our approach and learn from others are things that are particularly impressive about humans. We’ve identified an area of the brain that appears to be uniquely human and is likely to have something to do with these cognitive powers,’ says senior researcher Professor Matthew Rushworth of Oxford University’s Department of Experimental Psychology.

MRI scans of 25 adult volunteers were used to identify key components in the ventrolateral frontal cortex area of the human brain, and how these components were connected with other brain areas. The results were then compared with equivalent MRI data from 25 macaque monkeys.

This ventrolateral frontal cortex area of the brain is involved in many of the highest aspects of cognition and language, and is only present in humans and other primates. Some parts are implicated in psychiatric conditions like ADHD, drug addiction or compulsive behaviour disorders. Language is affected when other parts are damaged after stroke or neurodegenerative disease. A better understanding of the neural connections and networks involved should help the understanding of changes in the brain that go along with these conditions.

The Oxford University researchers report their findings in the science journal Neuron. They were funded by the UK Medical Research Council.

Figure caption: (A) The right vlFC ROI. Dorsally it included the inferior frontal sulcus and, more posteriorly, it included PMv; anteriorly it was bound by the paracingulate sulcus and ventrally by the lateral orbital sulcus and the border between the dorsal insula and the opercular cortex. (B) A schematic depiction of the result of the 12 cluster parcellation solution using an iterative parcellation approach. We subdivided PMv into ventral and dorsal regions (6v and 6r, purple and black). We delineated the IFJ area (blue) and areas 44d (gray) and 44v (red) in lateral pars opercularis. More anteriorly, we delineated areas 45 (orange) in the pars triangularis and adjacent operculum and IFS (green) in the inferior frontal sulcus and dorsal pars triangularis. We found area 12/47 in the pars orbitalis (light blue) and area Op (bright yellow) in the deep frontal operculum. We also identified area 46 (yellow), and lateral and medial frontal pole regions (FPl and FPm, ruby colored and pink). Credit: Neuron, Neubert et al.

Professor Rushworth explains: ‘The brain is a mosaic of interlinked areas. We wanted to look at this very important region of the frontal part of the brain and see how many tiles there are and where they are placed.

‘We also looked at the connections of each tile – how they are wired up to the rest of the brain – as it is these connections that determine the information that can reach that component part and the influence that part can have on other brain regions.’

From the MRI data, the researchers were able to divide the human ventrolateral frontal cortex into 12 areas that were consistent across all the individuals.

‘Each of these 12 areas has its own pattern of connections with the rest of the brain, a sort of “neural fingerprint”, telling us it is doing something unique,’ says Professor Rushworth.
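
The parcellation approach the article describes, grouping voxels whose connectivity “fingerprints” look alike, can be sketched in miniature. The sketch below is not the study’s actual pipeline (which used diffusion MRI tractography and a more sophisticated iterative parcellation); it simply clusters synthetic connectivity profiles with a plain k-means to show the idea. All numbers and profiles are invented.

```python
# Toy illustration of connectivity-based parcellation: each "voxel" is
# summarized by how strongly it connects to a set of target areas, and
# voxels with near-identical profiles are grouped into one parcel.
# Synthetic data and a minimal k-means; NOT the study's real method.
import random
random.seed(0)

def make_voxels(n_per_parcel, profiles, noise=0.05):
    """Generate synthetic voxels scattered around each parcel's profile."""
    voxels = []
    for profile in profiles:
        for _ in range(n_per_parcel):
            voxels.append([p + random.uniform(-noise, noise) for p in profile])
    return voxels

def initial_centers(points, k):
    """Farthest-first seeding: robust when groups are well separated."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(
            sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)))
    return centers

def kmeans(points, k, iters=20):
    """Minimal k-means: returns a cluster label for each point."""
    centers = initial_centers(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, pt in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(pt, centers[c])))
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Three hypothetical parcels, each defined by invented connection
# strengths to four target areas.
profiles = [[0.9, 0.1, 0.1, 0.1],
            [0.1, 0.9, 0.1, 0.1],
            [0.1, 0.1, 0.9, 0.9]]
voxels = make_voxels(20, profiles)
labels = kmeans(voxels, k=3)

# Voxels generated from the same profile should share one cluster label.
parcels = [set(labels[i * 20:(i + 1) * 20]) for i in range(3)]
print(all(len(p) == 1 for p in parcels))  # True
```

The real study works in the same spirit but on tractography-derived connectivity, and it validates parcels across individuals rather than against a known ground truth.
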

The researchers were then able to compare the 12 areas in the human brain region with the organisation of the monkey prefrontal cortex.

Overall, the two were very similar, with 11 of the 12 areas found in both species and connected to other brain areas in very similar ways.

However, one area of the human ventrolateral frontal cortex had no equivalent in the macaque – an area called the lateral frontal pole prefrontal cortex.

‘We have established an area in human frontal cortex which does not seem to have an equivalent in the monkey at all,’ says first author Franz-Xaver Neubert of Oxford University. ‘This area has been identified with strategic planning and decision making as well as “multi-tasking”.’

The Oxford research group also found that the auditory parts of the brain were very well connected with the human prefrontal cortex, but much less so in the macaque. The researchers suggest this may be critical for our ability to understand and generate speech.

The Ethics of War Machines

By Adrianne Jeffries


By the time the sun rose on Friday, December 20th, the Homestead-Miami Speedway had been taken over by robots. Some hung from racks, their humanoid feet dangling above the ground as roboticists wheeled them out of garages. One robot resembled a gorilla, while another looked like a spider; yet another could have been mistaken for a designer coffee table. Teams of engineers from MIT, Google, Lockheed Martin, and other institutions and companies replaced parts, ran last-minute tests, and ate junk food. Spare heads and arms were everywhere.

It was the start of the Robotics Challenge Trials, a competition put on by the Defense Advanced Research Projects Agency (DARPA), the branch of the US Department of Defense dedicated to high risk, high reward technology projects. Over a period of two days, the machines would attempt a series of eight tasks including opening doors, clearing a pile of rubble, and driving a car.

The eight robots that scored highest in the trials would go on to the finals next year, where they would compete for a $2 million grand prize. And one day, DARPA says, these robots will be defusing roadside bombs, surveilling dangerous areas, and assisting after disasters like the Fukushima nuclear meltdown.

Mark Gubrud, a former nanophysicist and frumpy professor sort, fit right in with the geeky crowd. But unlike other spectators, Gubrud wasn’t there to cheer the robots on. He was there to warn people.

“DARPA’s trying to put a face on it, saying ‘this isn’t about killer robots or killer soldiers, this is about disaster response,’ but everybody knows what the real interest is,” he says. “If you could have robots go into urban combat situations instead of humans, then your soldiers wouldn’t get killed. That’s the dream. That’s ultimately why DARPA is funding this stuff.”

As the US military pours billions of dollars into increasingly sophisticated robots, people inside and outside the Pentagon have raised concerns about the possibility that machine decision-making will replace human judgment in war.

Around a year ago, the Department of Defense released directive 3000.09: “Autonomy in Weapons Systems.” The 15-page document defines an autonomous weapon — what Gubrud would call a killer robot — as a weapon that “once activated, can select and engage targets without further intervention by a human operator.”

The directive, which expires in 2022, establishes guidelines for how the military will pursue such weapons. A robot must always follow a human operator’s intent, for example, while simultaneously guarding against any failure that could cause an operator to lose control. Such systems may only be used after passing a series of internal reviews.


The guidelines are sketchy, however, relying on phrases like “appropriate levels of human judgment over the use of force.” That leaves room for systems that can be given an initial command by a human, then dispatched to select and strike their targets. DARPA is working on a $157 million long-range anti-ship missile system, for example, that is about as autonomous as an attack dog that’s been given a scent: it gets its target from a human, then seeks out and engages the enemy on its own.

Some experts say it could take anywhere from five to thirty years to develop autonomous weapons systems, but others would argue that these weapons already exist. They don’t necessarily look like androids with guns, though. The recently tested X-47B is one of the most advanced unmanned drones in the US military. It takes off, flies, and lands on a carrier with minimal input from its remote pilot. The Harpy drone, built by Israel and sold to other nations, autonomously flies to a patrol area, circles until it detects an enemy radar signal, and then fires at the source. Meanwhile, defense systems like the US Phalanx and the Israeli Iron Dome automatically shoot down incoming missiles, which leaves no time for human intervention.

“A human has veto power, but it’s a veto power that you have about a half-second to exercise,” says Peter Singer, a fellow at the Brookings Institution and author of Wired for War: The Robotics Revolution and Conflict in the 21st Century. “You’re mid-curse word.”

Gubrud, an accomplished academic, first proposed a ban on autonomous weapons back in 1988. He’s typically polite, but talk of robotics brings out his combative side: the DARPA challenge organizers assigned him an escort after he accosted director Arati Prabhakar and tried to get her to admit that the agency is developing autonomous weapons.

He may have been the lone voice of dissent among the hundreds of robot-watchers at DARPA’s event, but Gubrud has some muscle behind him: the International Committee for Robot Arms Control (ICRAC), an organization founded in 2009 by experts in robotics, ethics, international relations, and human rights law. If robotics research continues unchecked, ICRAC warns, the future will be a dystopian one in which militaries arm robots with nuclear weapons, countries start unmanned wars in space, and dictators use killer robots to mercilessly control their own people.

Concern about robot war fighters goes beyond a “cultural disinclination to turn attack decisions over to software algorithms,” as the autonomy hawk Barry D. Watts put it. Robots, at least right now, have trouble discriminating between civilians and the terrorists and insurgents who live among them. Furthermore, a robot’s actions are a sum of its programmer, operator, manufacturer, and other factors, making it difficult to assign responsibility if something does go wrong. And finally, replacing soldiers with robots would convert the cost of war from human lives to dollars, which could lead to more conflicts.

ICRAC and more than 50 organizations including Human Rights Watch, Nobel Women’s Initiative, and Code Pink have formed a coalition calling itself the Campaign to Stop Killer Robots. Their request is simple: an international ban on autonomous weapons systems that will head off the robotics arms race before it really gets started.


There has actually been some progress on this front. A United Nations report in May 2013 called for a temporary ban on autonomous lethal systems until nations set down rules for their use. “There is widespread concern that allowing lethal autonomous robots to kill people may denigrate the value of life itself,” the report says. “Tireless war machines, ready for deployment at the push of a button, pose the danger of permanent (if low-level) armed conflict.”

The UN Convention on Certain Conventional Weapons will convene a meeting of experts this spring, the first step toward an international arms agreement. “We need to have a clear view of what the consequences of those weapons could be,” says Jean-Hugues Simon-Michel, the French ambassador to the UN Conference on Disarmament and its chairman, who persuaded the other nations to take up the issue. “And of course when there is a particular concern with regard to a category of weapons, it’s always easier to find a solution before those weapons exist.”

Watching the robots stumble around the simulated disaster areas at the DARPA trials would have been reassuring to anyone worried about killer robots. Today’s robots are miracles of science compared to those from 20 years ago, but they are still seriously impaired by lousy perception, energy inefficiency, and rudimentary intelligence. The machines move agonizingly slowly and wear safety harnesses in case they fall, which happens often.

The capabilities being developed for the challenge, however, are laying the groundwork for killer robots should we ever decide to build them. “We’re part of the Defense Department,” DARPA’s director, Arati Prabhakar, acknowledges. “Why do we make these investments? We make them because we think that they’re going to be important for national security.” One recent report from the US Air Force notes that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems and processes.”


By some logic, that might be a good thing. Robot shooters are inherently more accurate than humans, and they’re unaffected by fear, fatigue, or hatred. Machines can take on more risk in order to verify a target, loitering in an area or approaching closer to confirm there are no civilians in the way.

“If we can protect innocent civilian life, I do not want to shut the door on the use of this technology,” says Ron Arkin, PhD, a roboticist and ethicist at the Georgia Institute of Technology who has collaborated extensively with Pentagon agencies on various robotics systems.

Arkin proposes that an “ethical governor,” a set of rules that approximates an artificial conscience, could be programmed into the machines in order to ensure compliance with international humanitarian law. Autonomy in these systems, he points out, isn’t akin to free will — it’s more like automation. During the trials, DARPA deliberately sabotaged the communications links between robots and their operators in order to give an advantage to the bots that could “think” on their own. But at least for now, that means being able to process the command “take a step” versus “lift the right foot 2 inches, move it forward 6 inches, and set it down.”

“When you speak to philosophers, they act as if these systems will have moral agency,” Arkin says. “At some level a toaster is autonomous. You can task it to toast your bread and walk away. It doesn’t keep asking you, ‘Should I stop? Should I stop?’ That’s the kind of autonomy we’re talking about.”
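
Arkin’s “ethical governor” is a full architecture, but its core idea, a final constraint-checking layer that can veto a proposed action before it executes, can be caricatured in a few lines. Everything below (the fields, the rules, the function name) is invented for illustration and is not Arkin’s actual design.

```python
# Caricature of an "ethical governor": a last-stage rule check that vetoes
# any proposed action failing a fixed set of constraints. The fields and
# rules are hypothetical placeholders, not a real weapons-control system.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_type: str            # e.g. "radar_site", "vehicle"
    civilians_nearby: bool
    positively_identified: bool
    force_proportional: bool

def ethical_governor(action: ProposedAction) -> bool:
    """Return True only if every constraint holds; otherwise veto."""
    constraints = [
        action.positively_identified,   # discrimination requirement
        not action.civilians_nearby,    # avoid collateral harm
        action.force_proportional,      # proportionality requirement
    ]
    return all(constraints)

print(ethical_governor(ProposedAction("radar_site", False, True, True)))  # True
print(ethical_governor(ProposedAction("radar_site", True, True, True)))   # False
```

The sketch also shows why critics remain unconvinced: every constraint here is a boolean the system must already have correctly estimated, and that perception problem is exactly where current robots fail.
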

“No one wants to hear that they’re building a weapon,” says Doug Stephen, a software engineer at the Institute for Human and Machine Cognition (IHMC) whose team placed second at DARPA’s event. But he admits that the same capabilities being honed for these trials — ostensibly to make robots good for disaster relief — can also translate to the battlefield. “Absolutely anything,” Stephen says, “can be weaponized.”

His team’s robot, a modification of the humanoid Atlas built by Boston Dynamics, earned the most points in the least amount of time on several challenges, including opening doors and cutting through walls. When it successfully walked over “uneven terrain” built out of cinder blocks, the crowd erupted into cheers. Stephen and his team will now advance to the final stage of the challenge next year — alongside groups from institutes including MIT and NASA — to vie for the $2 million prize.

That DARPA funding could theoretically seed the rescue-robot industry, or it could kickstart the killer robot one. For Gubrud and others, it’s all happening much too fast: the technology for killer robots, he warns, could outrun our ability to understand and agree on how best to use it. “Are we going to have robot soldiers running around in future wars, or not? Are we going to have a robot arms race which isn’t just going to be these humanoids, but robotic missiles and drones fighting each other and robotic submarines hunting other submarines?” he says. “Either we’re going to decide not to do this, and have an international agreement not to do it, or it’s going to happen.”

Stephen Hawking shakes up theory (again): Black holes are actually gray

by Alan Boyle

British physicist Stephen Hawking earned worldwide attention for his surprising claims about black holes, and he’s doing it again with a new paper claiming that “there are no black holes.”

Actually, Hawking isn’t denying the existence of the massive gravitational singularities that lurk at the center of many galaxies, including our own Milky Way. He’s just saying the classical view of a black hole as an eternal trap for everything that’s inside, even light, is wrong. In his revised view, black holes are ever so slightly gray, with a chaotic and shifting edge rather than a sharply defined event horizon.

“The absence of event horizons mean that there are no black holes — in the sense of regimes from which light can’t escape to infinity,” Hawking writes in a brief paper submitted to the arXiv.org preprint database. “There are, however, apparent horizons which persist for a period of time.”

Hawking’s paper, titled “Information Preservation and Weather Forecasting for Black Holes,” has kicked off a new round in the long-running debate over black holes and what happens to the stuff that falls into them. Theoretical physicists, including Hawking, have gone back and forth on this issue, known as the information paradox.

Back and forth over black holes
For decades, Hawking contended that the information that disappeared inside a black hole was lost forever. Then, in 2004, he reversed course and said the information would slowly be released as a mangled form of energy. That switch led him to pay off a bet he had made with another physicist about the fate of information in a black hole.

More recently, other physicists have suggested that there was a cosmic firewall dividing the inner region of a black hole’s event horizon from the outside, and that anything falling through the event horizon would be burnt to less than a crisp. But that runs counter to the relativistic view of black holes, which holds that there should be no big difference in the laws of physics at the event horizon.

To resolve the seeming paradox, Hawking says that black holes would have “apparent horizons” — chaotic, turbulent regions where matter and energy are turned into a confusing mess. “There would be no event horizons and no firewalls,” he says. Everything in a black hole would still be there, but the information would be effectively lost because it gets so scrambled up.

“It will be like weather forecasting on Earth. … One can’t predict the weather more than a few days in advance,” Hawking writes.
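
The force of the weather analogy comes from sensitive dependence on initial conditions: in a chaotic system, two nearly identical starting states decorrelate quickly, so the information is preserved in principle but unrecoverable in practice. The logistic map below is a standard toy model of chaos, used here purely as an illustration; it has nothing to do with black hole physics itself.

```python
# Demonstrate sensitive dependence on initial conditions with the logistic
# map x -> r*x*(1-x) at r=4 (a standard chaotic regime). Two trajectories
# that differ in the 9th decimal place agree at first, then diverge.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 for `steps` steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # differs only in the 9th decimal place

print(abs(a[1] - b[1]) < 1e-8)   # True: early on, the trajectories agree
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])) > 0.1)  # True: later they diverge
```

Both trajectories are fully deterministic, so nothing is destroyed; the starting state simply cannot be read back off the later state, which is the sense in which Hawking says the information is “effectively lost.”
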

Protests and jests
Hawking’s paper wasn’t peer-reviewed, but his peers are already weighing in on the accuracy of the black hole weather report.

“It is not clear what he expects the infalling observer to see,” Joseph Polchinski, a pro-firewall physicist at the University of California at Santa Barbara, told New Scientist. “It almost sounds like he is replacing the firewall with a chaos-wall, which could be the same thing.”

“The idea that there are no points from which you cannot escape a black hole is in some ways an even more radical and problematic suggestion than the existence of firewalls,” Raphael Bousso, a theoretical physicist at the University of California at Berkeley, said in Nature’s online report on Hawking’s paper. “But the fact that we’re still discussing such questions 40 years after Hawking’s first papers on black holes and information is testament to their enormous significance.”

If the “no black holes” quote is taken out of context, it makes Hawking’s claim sound kind of ridiculous — and Andy Borowitz, a humorist at The New Yorker, has turned that take into an Onion-like jab at members of Congress. “If black holes don’t exist, then other things you scientists have been trying to foist on us probably don’t either, like climate change and evolution,” Borowitz writes in one faux quote.

Fortunately, we’re getting to the point where we won’t have to take any theorist’s word for the existence of black (or gray) holes. Astronomers are preparing to watch a huge cloud of gas fall into the black hole at the center of our galaxy — and over the next decade, they’re planning to follow through on the Event Horizon Telescope, a campaign aimed at direct observation of the galactic black hole’s edge.

As for Hawking, it just so happens that this is a big month: He turned 72 years old a couple of weeks ago, and he appears to be keeping active despite his decades-long struggle with amyotrophic lateral sclerosis. And this week marks the television premiere of “Hawking,” a PBS documentary about the good doctor’s life and work. For still more about the world’s best-known physicist, check out his recently published memoir, “My Brief History.”

Noah’s Ark Blueprints Found—4,000-Year-Old Detailed Instructions

By Tara MacIsaac

‘One of the most important human documents ever discovered’

Irving Finkel, British Museum curator and author of “The Ark Before Noah,” has found a 4,000-year-old tablet that describes the materials and measurements for building Noah’s Ark.

It also describes the Ark in a way never before conceived by archaeologists—as round.

Finkel describes his discovery in a museum blog post. He was at a press conference to promote his book when Douglas Simmonds approached him with a tablet given to him by his father, who had picked up artifacts from Egypt and China after the war in the late 1940s.

The tablet “turned out to be one in a million,” said Finkel. Dating from 1750 B.C., it tells the Babylonian “Story of the Flood.” The Babylonian story, and its similarities to the story recounted in the Book of Genesis, were already known, but this tablet “has startling new contents,” Finkel said.

He lists off some of the materials a god told the Babylonian Noah to use for his ark: “Quantities of palm-fibre rope, wooden ribs and bathfuls of hot bitumen to waterproof the finished vessel … The amount of rope prescribed, stretched out in a line, would reach from London to Edinburgh!”

The ark would have had a base area of about 3,600 square meters (roughly 38,750 square feet), about half the size of a soccer field, with walls 20 feet high.
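
A quick geometry check helps picture the vessel. Assuming the widely reported figure of one ikû, about 3,600 square meters, for the tablet’s base area, a circular coracle of that area would be nearly 70 meters across.

```python
# Compute the diameter of a circular vessel from its base area,
# using the ~3,600 m^2 figure reported for the tablet's round ark.
import math

area_m2 = 3600.0
diameter = 2 * math.sqrt(area_m2 / math.pi)
print(round(diameter, 1))  # 67.7 (meters)
```

That is about two-thirds the length of a modern football pitch, in every direction.
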

The aspect of the description that most stunned Finkel, however, is that the ark was round. He said: “To my knowledge, no one has ever thought of that possibility.”

Finkel told the Associated Press that the tablet is “one of the most important human documents ever discovered.”

Something Doesn’t Smell Right

By Peter Reuell

Researchers find receptors that help determine how likes, dislikes from sniffing are encoded in the brain

For most animals, the scent of rotting meat is powerfully repulsive. But for others, such as carrion-feeding vultures and insects, it’s a scent that can be just as powerfully attractive.

The question of why some animals are repelled and others attracted to a particular scent, scientists say, gets at one of the most basic and poorly understood mysteries in neuroscience: How does the brain encode likes and dislikes?

Harvard scientists say they’re closer to unraveling that question with the discovery of the first receptors in any species evolved to detect cadaverine and putrescine, two of the chemical byproducts responsible for the distinctive — and to most creatures repulsive — smell of rotting flesh. The study is described in a paper published in the Proceedings of the National Academy of Sciences.

“This is the first time we’ve identified a receptor for these chemicals,” said Associate Professor of Cell Biology Stephen Liberles, a senior author of the paper. “The larger question we’re interested in is: What does it mean that something is an aversive or attractive odor? How are likes and dislikes encoded in the brain? Understanding the receptors that respond to those cues could give us a powerful inroad to understanding that.”

Researchers have long understood that olfaction involves receptors, which detect odors and in turn activate brain neurons. Liberles, together with Nobel laureate Linda Buck, recently discovered a second family of receptors, dubbed trace amine-associated receptors, or TAARs.

Though fewer in number than other odorant receptors — mice, for example, have 15 TAARs versus more than 1,000 odorant receptors, while humans have about 350 odorant receptors and just six TAARs — Liberles said the functions of the TAARs remained largely unknown.

“We knew they were olfactory receptors, but we didn’t know what ligands might activate them,” Liberles said of the TAARs. “We know in the taste system there are different families of receptors for bitter and sweet, so we thought the TAARs might be doing something specific in olfaction.”

To understand how the TAARs function, researchers sought to identify scents that would activate them, hoping they might offer clues into why a second olfactory system evolved. In recent years, scientists working in Liberles’ lab identified odors that activated six TAARs in mice and seven in rats, nearly all of which were highly aversive.

To check TAARs in fish, Liberles’ team worked with colleagues in Germany to implant olfactory receptors in cell cultures and test them against hundreds of possible odorants, hoping to identify which ones activated the receptor.

What the researchers discovered, Liberles said, was that one particular receptor appeared to act as a sensor for diamines — a class of chemicals that include cadaverine and putrescine — nearly all of which are notoriously foul-smelling. Later tests using live zebrafish showed that when researchers marked part of a fish tank with the scent of rotting fish, the fish were highly likely to avoid the area.

“What’s also interesting is that this odor — like the predator odor we identified in mice — was aversive the very first time the animal encountered it,” Liberles said. “That suggests the aversion is innate — it’s not learned — and that it involves genetic circuits that are genetically predetermined, that exist, dormant, in the animal waiting for it to encounter the odor.

“You might like the smell of baking cookies, but it’s only because you’ve learned to associate it with their taste, or the sugar rush you get from eating them,” he continued. “But this aversion is there from birth. That suggests there is some developmental mechanism underlying these circuits. The question is, what is that?”

Though researchers have thus far only shown that the TAARs are activated by amines, Liberles said it’s unlikely that is their only role in olfaction.

“We’ve been hunting for a unified theme for what the TAARs might be doing,” he said. “One model is that they’re amine receptors, and another is that they’re all encoding for aversion. I don’t think either is quite correct. I think they may have started as amine receptors, but they have since evolved to do other things.”

Understanding how odorants like cadaverine and putrescine work in the olfactory system could also shed light on why some scents — such as rotting meat — repel some creatures, but attract others.

“Species-specific behavioral responses suggest that somehow the neural circuits are changing from species to species,” Liberles said. “For instance, tests in our lab have shown that trimethylamine is attractive to mice, but highly aversive to rats. Something similar might be happening with cadaverine.

“How does that happen? It’s not known,” he continued. “We don’t understand, as a field, how aversive and attractive odors are differentially processed … but identifying the receptor gives us a handle on the neural circuits that are involved. Now that we have the receptor, we can ask basic questions about aversion and attraction circuitry in general. From there, we can begin to understand how attractive and aversive stimuli are differentially encoded, and cadaverine is about as aversive as you can get.”

Phones become smarter, people get dumber

by Arianna Huffington

What leading executives need more than anything today is wisdom. And one of the things that makes it harder and harder to connect with our wisdom is our increasing dependence on technology. Our hyper-connectedness is the snake lurking in our digital Garden of Eden.

“People have a pathological relationship with their devices,” said Kelly McGonigal, a psychologist who studies the science of self-control at Stanford’s School of Medicine. “People feel not just addicted, but trapped.” We are finding it harder and harder to unplug and renew ourselves.

Professor Mark Williams summed up the damage we’re doing to ourselves: “What we know from the neuroscience — from looking at the brain scans of people that are always rushing around, who never taste their food, who are always going from one task to another without actually realising what they’re doing — is that the emotional part of the brain that drives people is on high alert all the time.

“So, when people think: ‘I’m rushing around to get things done,’ it’s almost like, biologically, they’re rushing around just as if they were escaping from a predator. That’s the part of the brain that’s active. But nobody can run fast enough to escape their own worries.” Mindfulness, on the other hand, “cultivates our ability to do things knowing that we’re doing them”. In other words, we become aware that we’re aware. It’s an incredibly important tool — and one that we can’t farm out to technology.

There are some who believe the increasing power of big data (using powerful computers to sift through and find patterns in massive amounts of information) will at some point rival human consciousness. But there’s also growing scepticism about how effective big data is at solving problems.

As Nassim Taleb, author of The Black Swan, writes: “Big data may mean more information, but it also means more false information.” And even when the information is not false, the problem is “that the needle comes in an increasingly larger haystack”.

Empty information

The quest for knowledge may be pursued at higher speeds with smarter tools today, but wisdom is found no more readily than it was three thousand years ago in the court of King Solomon. In fact, ours is a generation bloated with information and starved for wisdom.

Stand-up comedian Louis C.K. has put a brilliant comedic mirror in front of us and our screen addictions. In one of his routines, he captures the absurdity of children’s events where none of the parents is actually able to watch the soccer game or school play because they’re straining to record it on video with their devices, blocking “their vision of their actual child”. So hell-bent are we on recording our children’s milestones that we miss them altogether. “The resolution on the kid is unbelievable if you just look,” he joked. “It’s totally HD.”

File it under: Be careful what you wish for. Big data, unfettered information, the ability to be in constant contact, and our growing reliance on technology are all conspiring to create a noisy traffic jam between us and our place of insight and peace. Call it an iParadox: our smartphones are actually blocking our path to wisdom.