
Thursday, September 15, 2011

NIST Demonstrates First Quantum 'Entanglement' of Ions Using Microwaves

Laura Ost @ NIST.GOV

Physicists at the National Institute of Standards and Technology (NIST) have, for the first time, linked the quantum properties of two separated ions (electrically charged atoms) by manipulating them with microwaves instead of the usual laser beams. The feat raises the possibility of replacing today's complex, room-sized quantum computing "laser parks" with miniaturized, commercial microwave technology similar to that used in smart phones.

gold ion trap
Gold ion trap on aluminum nitride backing. In NIST microwave quantum computing experiments, two ions hover above the middle of the square gold trap, which measures 7.4 millimeters on a side. Scientists manipulate and entangle the ions using microwaves fed into wires on the trap from the three thick electrodes at the lower right.
Credit: Y. Colombe/NIST

Microwaves have been used in past experiments to manipulate single ions, but the NIST group is the first to position microwave sources close enough to the ions—just 30 micrometers away—and create the conditions enabling entanglement, a quantum phenomenon expected to be crucial for transporting information and correcting errors in quantum computers.

Described in the August 11, 2011, issue of Nature,* the experiments integrate wiring for microwave sources directly on a chip-sized ion trap and use a desktop-scale table of lasers, mirrors and lenses that is only about one-tenth of the size previously required. Low-power ultraviolet lasers still are needed to cool the ions and observe experimental results but might eventually be made as small as those in portable DVD players. Compared to complex, expensive laser sources, microwave components could be expanded and upgraded more easily to build practical systems of thousands of ions for quantum computing and simulations.

"It's conceivable a modest-sized quantum computer could eventually look like a smart phone combined with a laser pointer-like device, while sophisticated machines might have an overall footprint comparable to a regular desktop PC," says NIST physicist Dietrich Leibfried, a co-author of the new paper.

Quantum computers would harness the unusual rules of quantum physics to solve certain problems—such as breaking today's most widely used data encryption codes—that are currently intractable even with supercomputers. A nearer-term goal is to design quantum simulations of important scientific problems, to explore quantum mysteries such as high-temperature superconductivity, the disappearance of electrical resistance in certain materials when sufficiently chilled.

Ions are a leading candidate for use as quantum bits (qubits) to hold information in a quantum computer. Although other promising candidates for qubits—notably superconducting circuits, or "artificial atoms"—are manipulated on chips with microwaves, ion qubits are at a more advanced stage experimentally in that more ions can be controlled with better accuracy and less loss of information.

The use of microwaves reduces errors introduced by instabilities in laser beam pointing and power as well as laser-induced spontaneous emissions by the ions. However, microwave operations need to be improved to enable practical quantum computations or simulations. The NIST researchers achieved entanglement 76 percent of the time, well above the minimum threshold of 50 percent defining the onset of quantum properties but not yet competitive with the best laser-controlled operations at 99.3 percent.

The research was supported by the Intelligence Advanced Research Projects Activity, Office of Naval Research, Defense Advanced Research Projects Agency, National Security Agency and Sandia National Laboratories.

For more details, see the NIST  news announcement "NIST Physicists 'Entangle' Two Atoms Using Microwaves for the First Time" at www.nist.gov/pml/div688/microwave-quantum-081011.cfm.

 

Interstellar Travel Not Possible Before 2200AD, Suggests Study

A new estimate of the amount of energy needed to visit the stars suggests we won't have enough for at least another two centuries

By KFC  @ [Source]

How soon could humanity launch a mission to the stars? That's the question considered today by Marc Millis, former head of NASA's Breakthrough Propulsion Physics Project and founder of the Tau Zero Foundation which supports the science of interstellar travel.

This is a question of increasing importance given the rate at which astronomers are finding new planets around other stars. Many believe that it's only a matter of time before we find an Earth analogue. And when we do find a place with the potential to host life like ours, there is likely to be significant debate about the possibility of a visit.

The big problem, of course, is distance. In the past, scientists have studied various factors that limit our ability to traverse the required light-years. One is the speed necessary to travel that far; another is the cost of such a trip.

By looking at the rate at which our top speed and financial clout are increasing, and then extrapolating into the future, it's possible to predict when such missions might be possible. The depressing answer in every study so far is that interstellar travel is centuries away.

Today, Millis takes a different approach. He looks at the energy budget of interstellar missions. By looking at the rate at which humanity is increasing the energy it has available and extrapolating into the future, Millis is able to estimate when we will have enough to get to the stars.

To make his extrapolation, Millis looked at the amount of energy the US has used to launch the shuttle over the last thirty years or so, as a fraction of the total energy available to the country. He assumes that a similar fraction will be available for interstellar flight in future. He then calculates how much energy two different types of mission will consume.

The first mission is a human colony of 500 people on a one-way journey into the void. He assumes that such a mission requires 50 tonnes per human occupant and that each person will use about 1,000 W, equal to the average amount used by people in the US in 2007.

From this, he estimates that the ship would need some 10^18 Joules for rocket propulsion. That compares to a shuttle launch energy of about 10^13 Joules.

The second mission is an unmanned probe designed to reach Alpha Centauri, just over 4 light years away, in 71 years. Such a ship would be some three orders of magnitude less massive than a colony ship so it's easy to imagine that it would require less energy.

But Millis places another constraint on this mission. Not only must it accelerate towards its destination, it must decelerate when it gets there (although why this isn't a requirement for a colony ship isn't clear).

That changes the numbers significantly. Millis estimates that the probe would require some 10^19 Joules.

The final step is to determine when humanity will have this kind of energy available for these kinds of missions. By extrapolation, Millis calculates that the required energy will not be available until at least the year 2196. "This study found that the first interstellar mission does not appear possible for another 2 centuries," he says.

That's necessarily a crude calculation but a sobering one nonetheless. It implies that while we will soon be able to gaze with wonder upon other Earths, it will not be possible to visit them within the lifetime of anybody alive today.

In other words, for the foreseeable future, we're trapped.
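To make the arithmetic concrete, here is a minimal sketch, in Python, of the kind of extrapolation described above. It is not Millis's actual model: the starting budget (one shuttle launch's worth of energy) and the assumed annual growth rates are illustrative only. What it shows is how sensitive the arrival date is to the assumed growth rate, which is why estimates of this kind range from decades to centuries.

    import math

    SHUTTLE_LAUNCH_J = 1e13   # energy of one shuttle launch (figure quoted above)
    COLONY_SHIP_J = 1e18      # colony-ship estimate quoted above
    PROBE_J = 1e19            # Alpha Centauri probe estimate quoted above

    def years_until(required_j, available_j=SHUTTLE_LAUNCH_J, annual_growth=0.05):
        # Solve available_j * (1 + g)**t >= required_j for t.
        return math.log(required_j / available_j) / math.log(1 + annual_growth)

    for growth in (0.02, 0.05, 0.07):  # assumed growth rates, purely illustrative
        print(f"growth {growth:.0%}: colony ship in ~{years_until(COLONY_SHIP_J, annual_growth=growth):.0f} years, "
              f"probe in ~{years_until(PROBE_J, annual_growth=growth):.0f} years")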

Ref: arxiv.org/abs/1101.1066: Energy, Incessant Obsolescence And The First Interstellar Missions

 

 

"Alone Together": An MIT Professor's New Book Urges Us to Unplug

BY DAVID ZAX @ fastcompany.com

 

In her new book, an MIT professor shares her ambivalence about the overuse of technology, which, she writes, "proposes itself as the architect of our intimacies."

Sherry Turkle has been an ethnographer of our technological world for three decades, hosted all the while at one of its epicenters: MIT. A professor of the social studies of science and technology there, she also heads up its Initiative on Technology and Self. Her new book, Alone Together, completes a trilogy of investigations into the ways humans interact with technology. It can be, at times, a grim read. Fast Company spoke recently with Turkle about connecting, solitude, and how that compulsion to always have your BlackBerry on might actually be hurting your company's bottom line.

 

The title of your book, Alone Together, is chilling.

If you get into these email, Facebook thumbs-up/thumbs-down settings, a paradoxical thing happens: even though you're alone, you get into this situation where you're continually looking for your next message, and to have a sense of approval and validation. You're alone but looking for approval as though you were together--the little red light going off on the BlackBerry to see if you have somebody's validation. I make a statement in the book, that if you don't learn how to be alone, you'll always be lonely, that loneliness is failed solitude. We're raising a generation that has grown up with constant connection, and only knows how to be lonely when not connected. This capacity for generative solitude is very important for the creative process, but if you grow up thinking it's your right and due to be tweeted and retweeted, to have thumbs up on Facebook...we're losing a capacity for autonomy both intellectual and emotional.

You only mention Twitter a few times in the book. What are your thoughts on Twitter?

I think it's an interesting notion that sharing becomes part of actually having the thought. It's not "I think therefore I am," it's, "I share therefore I am." Sharing as you're thinking opens you up to whether the group likes what you're thinking as becoming a very big factor in whether or not you think you're thinking well. Is Twitter fun, is it interesting to hear the aperçus of people? Of course! I certainly don't have an anti-Twitter position. It's just not everything.

You write in your book that we today seem to view authenticity with the same skittishness that the Victorians viewed sex.

For some purposes, simulation is just as good as the real. Kids call it being "alive enough." Making an airline reservation? Simulation is as good as the real. Playing chess? Maybe, maybe not. It can beat you, but do you care? Many people are building robot companions; David Levy argues that robots will be intimate companions. Where we are now, I call it the "robotic moment," not because we have robots, but because we're being philosophically prepared to have them. I'm very haunted by these children who talk about simulation as "alive enough." We're encouraged to live more and more of our lives in simulation.

 

 

Read more here.

 

 

Wednesday, September 14, 2011

Why Do Marriages Fall Apart? A Visual Representation of US Marriage Stats [INFOGRAPHIC]

American marriages have never been more precarious. But why do marriages fall apart, and how are families changing as a result?

The following infographic, by Tiffany Farrant and PromotionalCodes.org.uk, casts a piercing eye on the institution. Based on the annual report by The National Marriage Project, it paints a picture of marriage becoming a less and less relevant factor in the way Americans live and raise children. The short version: Marriage is simply shrinking as a cultural value; where 66% of women over 15 were married in 1960, the figure has shrunk every decade since. Now, it's just 51%:

 

When Marriage Disappears (infographic by Promotional Codes)

How YouTube's Global Platform Is Redefining the Entertainment Business

BY: DANIELLE SACKS @ fastcompany.com

Maximum cool: YouTube CEO Salar Kamangar (front) and his team: Margaret Stewart (user experience), Shishir Mehrotra (monetization), Hunter Walk (product), and Robert Kyncl (TV and film) | Photographs by Robyn Twomey

 

 

YouTube CEO Salar Kamangar and his team have transformed Google's Folly into a mind-blowing -- and lucrative -- global platform that is redefining the entertainment business.

YouTube says 94 of the top 100 brand advertisers have now run campaigns on the platform, and what's attracting them is the increasing body of research that shows that advertising on YouTube works. According to an effectiveness study by the U.K. firm Decipher Media Research, promoted videos -- video ads that appear prominently on YouTube's search-results page, competing with the content that users have searched for -- triple unaided brand awareness.

These results have yielded two insights -- that ads should be content and that any ad a user chooses is quite resonant -- and those have helped inform Mehrotra's latest initiative, which seeks to overhaul the way ads are consumed and sold on the site. TrueView, as it's known, gives viewers the option to skip an ad entirely -- but charges advertisers a premium if their content is chosen and watched the whole way through. (Another TrueView option, akin to part of Hulu's ad program, lets users choose one of a slate of ads to watch.) Nissan, Sony Pictures, and Ultimate Fighting Championship have been early adopters.

"We [the industry] want the new 30-second spot," says Publicis's Scheppach, who runs a group that's pioneering new ad models for emerging media. Based on her research, there's "300% to 400% improvement of advertising value if you pick the ad," she says.

Ultimately, Google sees this idea of "cost-per-view" advertising spreading even to its display-ad business (already driving $2.5 billion in annual revenue, of which YouTube has been called a significant but unspecified part). During Advertising Week in New York, where TrueView debuted, Google predicted that by 2015, 50% of display ads will include video, while 75% will have a social component. Most important, the company anticipates that these innovations could help make display advertising a $50 billion industry.

Read more here.


 

 

 

The China Paradox

How should Americans understand a country that presents itself as simultaneously weak and strong?

BY CHRISTINA LARSON @ foreignpolicy.com

Until recently, the Chinese paradox that most puzzled Western audiences was how to understand a country that is both communist and hyper-capitalist. But that is hardly the only, or even the most striking, paradox of the modern Middle Kingdom. China is fast on its way to becoming a global superpower, even as it grapples with such enormous domestic challenges as supplying enough energy to keep its cities lit, absorbing millions of rural migrants into cities each year, reining in choking pollution, creating a social safety net, and attempting to lift millions out of poverty. Although China holds $1 trillion in U.S. debt, its per capita GDP is still roughly one-tenth that of the United States. Beijing is subsidizing China's fast-growing clean-tech export industry, even as the skies above the country's largest cities remain a hazy gray. Such seeming contradictions are dazzlingly confusing to outsiders -- and sometimes to China's own leaders.

Yet, this recent show of confidence is making some in Beijing nervous. Although from a distance China's Communist government may appear a decision-making monolith, in fact a variety of voices are now arguing about the country's future direction -- and what face to show foreigners -- as Council on Foreign Relations scholar Elizabeth Economy documented in her recent Foreign Policy article, "The End of the Peaceful Rise?" While all good mandarins take pride in their country's growing economic and geopolitical clout, some critics within China worry that inflated pride comes before a fall. Ye Hailin, a research fellow with the Chinese Academy of Social Sciences, for instance, recently pointed out what he sees as flaws in current domestic sensibilities: "Three decades of reform have led to a rapid increase of wealth in China, and this in turn has also made the Chinese people arrogant. ...The Chinese people are no longer tolerant of criticisms."

 

But refusing to accept criticism is not necessarily the same thing as thinking of oneself as a superpower. At least China's citizenry, for all their surging patriotism, aren't yet buying that line. One interesting paradox about how Chinese and American people see China was evident in two recent polls. Americans tend to exaggerate China's economic strength (and presumed threat to U.S. stature), while Chinese tend to downplay news of their rising power. In a recent Pew Research Center poll, nearly half of Americans -- 47 percent -- named China as the world's top economic power (though, in fact, China's economy is about one-third the size of that of the United States). That's up significantly from early 2008, when 30 percent of Americans made the same claim. Meanwhile, when asked whether China was a "superpower," only 12 percent of Chinese people agreed in a recent poll by state-run Global Times newspaper. It's a good reminder that not only is China home to vast wealth and poverty, but also home to a range of views, ever-evolving.

Read more here.

 

Tuesday, September 13, 2011

New combination of nanoparticles and graphene results in a more durable catalytic material for fuel cells

Mary Beckman, PNNL [Source]

Bracing catalyst in material makes fuel cell component work better and last longer

Triple Junction

A nanoparticle of indium tin oxide (green and red) braces platinum nanoparticles (blue) on the surface of graphene (black honeycomb) to make a hardier, more chemically active fuel cell material.

A new combination of nanoparticles and graphene results in a more durable catalytic material for fuel cells, according to work published today online in the Journal of the American Chemical Society. The catalytic material is not only hardier but more chemically active as well. The researchers are confident the results will help improve fuel cell design.

"Fuel cells are an important area of energy technology, but cost and durability are big challenges," said chemist Jun Liu. "The unique structure of this material provides much needed stability, good electrical conductivity and other desired properties."

Liu and his colleagues at the Department of Energy's Pacific Northwest National Laboratory, Princeton University in Princeton, N.J., and Washington State University in Pullman, Wash., combined graphene, a one-atom-thick honeycomb of carbon with handy electrical and structural properties, with metal oxide nanoparticles to stabilize a fuel cell catalyst and make it better available to do its job.

"This material has great potential to make fuel cells cheaper and last longer," said catalytic chemist Yong Wang, who has a joint appointment with PNNL and WSU. "The work may also provide lessons for improving the performance of other carbon-based catalysts for a broad range of industrial applications."

Muscle Metal Oxide

Fuel cells work by chemically breaking down oxygen and hydrogen gases to create an electrical current, producing water and heat in the process. The centerpiece of the fuel cell is the chemical catalyst — usually a metal such as platinum — sitting on a support that is often made of carbon. A good supporting material spreads the platinum evenly over its surface to maximize the surface area with which it can attack gas molecules. It is also electrically conductive.

Fuel cell developers most commonly use black carbon — think pencil lead — but platinum atoms tend to clump on such carbon. In addition, water can degrade the carbon away. Another support option is metal oxides — think rust — but what metal oxides make up for in stability and catalyst dispersion, they lose in conductivity and ease of synthesis. Other researchers have begun to explore metal oxides in conjunction with carbon materials to get the best of both worlds.

As a carbon support, Liu and his colleagues thought graphene intriguing. The honeycomb lattice of graphene is porous, electrically conductive and affords a lot of room for platinum atoms to work. First, the team crystallized nanoparticles of the metal oxide known as indium tin oxide — or ITO — directly onto specially treated graphene. Then they added platinum nanoparticles to the graphene-ITO and tested the materials.

Platinum Paperweight

The team viewed the materials under high-resolution microscopes at EMSL, DOE's Environmental Molecular Sciences Laboratory on the PNNL campus. The images showed that without ITO, platinum atoms clumped up on the graphene surface. But with ITO, the platinum spread out nicely. Those images also showed catalytic platinum wedged between the nanoparticles and the graphene surface, with the nanoparticles partially sitting on the platinum like a paperweight.

To see how stable this arrangement was, the team performed theoretical calculations of molecular interactions between the graphene, platinum and ITO. This number-crunching on EMSL's Chinook supercomputer showed that the threesome was more stable than the metal oxide alone on graphene or the catalyst alone on graphene.

But stability makes no difference if the catalyst doesn't work. In tests for how well the materials break down oxygen as they would in a fuel cell, the triple-threat packed about 40% more of a wallop than the catalyst alone on graphene or the catalyst alone on other carbon-based supports such as activated carbon.

Last, the team tested how well the new material stands up to repeated usage by artificially aging it. After aging, the tripartite material proved to be three times as durable as the lone catalyst on graphene and twice as durable as on commonly used activated carbon. Corrosion tests revealed that the triple threat was more resistant than the other materials tested as well.

The team is now incorporating the platinum-ITO-graphene material into experimental fuel cells to determine how well it works under real world conditions and how long it lasts.


Reference: Rong Kou, Yuyan Shao, Donghai Mei, Zimin Nie, Donghai Wang, Chongmin Wang, Vilayanur V. Viswanathan, Sehkyu Park, Ilhan A. Aksay, Yuehe Lin, Yong Wang, Jun Liu, "Stabilization of Electrocatalytic Metal Nanoparticles at Metal-Metal Oxide-Graphene Triple Junction Points," February 8, 2011, J. Am. Chem. Soc., DOI: 10.1021/ja107719u (http://pubs.acs.org/doi/full/10.1021/ja107719u).

This work was supported by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy.

After the unthinkable: How 9/11 changed fiction

IN THE days and weeks after 9/11 a number of writers asked what the future of fiction could be after such a rupture. Their comments echoed the philosopher Theodor Adorno’s famous dictum: “Writing poetry after Auschwitz is barbaric.”
 
Ten years on it is abundantly clear that fiction does, of course, have a future. Some novelists have tackled the events of that September day head on; others have used the episode as a spur to look at the Western world shaken out of its complacency. The quality of the output, as in all areas of fiction, is highly variable.
 
Jay McInerney’s “The Good Life” was a rather crass before-and-after view of a couple forced to re-examine their relationship following the events of 9/11; Jonathan Safran Foer’s “Extremely Loud and Incredibly Close” had a number of touching moments but was ultimately too long to carry itself. Don DeLillo’s “Falling Man” was a strange sort of novel which lacked the density of his other work, but it did capture some of the most chilling elements of the events: “By the time the second plane appears,” Keith comments as he and Lianne watch the endlessly cycling video of the attacks, “we're all a little older and wiser.”
 
There are three important reasons why it is hard to write a good 9/11 novel. The first is that the attack on the World Trade Centre was such a huge and overpowering event that it often overshadows and dominates the fictional elements of a novel: literary novelists normally shy away from choosing such a big and unbelievable event as the backdrop to a story. Mr McInerney’s book is the poorer, I think, because his characters seem so paper-thin beside the burning towers and anguished souls the television footage depicted. For this reason non-fiction has often been the better medium to convey the most moving and poignant record of the day.

The second is that all fiction of every genre hinges around some kind of crisis, internal or external, that a book has to see its way through. This can take many forms. But 9/11 is in a sense a bigger crisis than many novels can contain or capture: it’s a situation where truth is both bigger and stranger than fiction.
 
That is probably why many authors have taken 9/11 as a jumping-off point to look at a group or type of person they had not thought to before. Martin Amis wrote a short story in the voice of one of the 9/11 hijackers. John Updike’s “Terrorist” traced the world of a would-be suicide bomber, for example. The setting for that book, like Updike’s other work, was suburban middle-America, and many of the characters were also recognisable from earlier books, but his central figure, a teenager who becomes radicalised, sits uneasily in this context—uneasy both for the character and sadly the novel too.

Read more...

Monday, September 12, 2011

Race, Religion, and Diversity in London After 9/11

by @ newyorker.com

“Suddenly summoned to witness something great and horrendous, we keep fighting not to reduce it to our own smallness,” wrote John Updike ten years ago in these pages. He watched the towers fall with “the false intimacy of television,” from a tenth-floor apartment in Brooklyn Heights. Over in North West London, we were certainly very small and distant, but we still felt that false intimacy. We are a mixed community, including many Muslims, from Bangladesh, Pakistan, India, the United Arab Emirates, Africa. I grew up with girls who wore the head scarf, a fact that seemed no more remarkable to me at the time than Jewish boys wearing yarmulkes or Hindu kids with bindis on their foreheads. Different world. What enabled it? It helped that so many of the class disparities between us had been partially obscured. United in the same primary schools, we were neither mesmerized by, nor especially frightened of, our differences. Later, that sense of equality became difficult to maintain. Teen-agers are preoccupied with status and justice—they notice difference. Why do some have so much while others have nothing? Natural superiority? Hard work? Historical luck? Or exploitation? For some, the basic political insights of adolescence arrived with an extra jolt: your people over here were hurting your people over there; your home was attacking your home. Then came the cataclysm. The end of the world for nearly three thousand innocent people. The beginning of a different sort of world for the rest of us. From the epicenter in Manhattan, shock waves rippled across Europe. In North West London, a small but significant change: the stereotype of the Muslim boy was transformed. From quiet, sexless, studious child—sitting in the back of class and destined for an engineering degree—to Public Enemy No. 1.

Zadie Smith was only 25 when her first novel, White Teeth, was published. It seems that this is sort of the Number One Fact about Zadie Smith, as it were, especially for Writing About Zadie Smith. At the New Yorker festival this past fall, Smith was questioned about what it was like to become such a remarkable success at such an incredibly young age. Her response was classically Zadie: It wasn’t that unusual at all, she said, and then she proceeded to list a host of names who had achieved similarly early and lasting success — among them John Updike, Martin Amis, and the rest of the foundation of modern Western literature. And she’s right — for a talent of her caliber, she is just right on track.

Sunday, September 11, 2011

The Worst Mistake America Made After 9/11: How focusing too much on the war on terror undermined our economy and global power.

 

On Sept. 11, 2001, the post-Cold War era that began so euphorically on Nov. 9, 1989, abruptly ended. The long decade that stretched from the fall of the Berlin Wall to the fall of the World Trade Center was marked by military spending cuts, domestic political scandals, and a general sense that American foreign policy was adrift. President George H.W. Bush had talked of the "New World Order" but had no policy to fit the clever phrase. President Bill Clinton had a clutch of policies but never found a neat way to describe them.

In the wake of al-Qaida's attack on New York and Washington, an organizing principle suddenly presented itself. Like the Cold War, the new "war on terror," as it instantly became known, clearly defined America's friends, enemies, and priorities. Like the Cold War, the war on terror appealed both to American idealism and to American realism. We were fighting genuine bad guys, but the destruction of al-Qaida also lay clearly within the sphere of our national interests. The speed with which we all adopted this new paradigm was impressive, if somewhat alarming. At the time, I marveled at the neatness and cleanliness of this New New World Order and observed "how like an academic article everything suddenly appears to be."

 

In our single-minded focus on Islamic fanaticism, we missed, for example, the transformation of China from a commercial power into an ambitious political power. We failed to appreciate the significance of economic growth in China's neighborhood, too. When President George W. Bush traveled in Asia in the wake of 9/11, he spoke to his Malaysian and Indonesian interlocutors about their resident terrorist cells. His Chinese colleagues, meanwhile, talked business and trade.

We also missed, at least initially, the transformation of Russia from a weak and struggling partner into a sometimes hostile opponent. Through the lens of the war on terror, Vladimir Putin, president of Russia in 2001, looked like an ally. He, too, was fighting terrorists, in Chechnya. Though his was quite a different war against quite different terrorists (and not only against terrorists), for a brief period he nevertheless convinced his American counterparts that his struggle and their struggle were more or less the same thing.

 

Thanks to the war on terror, we missed what might have been a historic chance to make a deal on immigration with Mexico. Because all of Latin America was irrelevant to the war on terror, we lost interest in, and influence on, that region, too. The same goes for Africa, with the exception of those countries with al-Qaida cells. In the Arab world, we aligned ourselves closely with authoritarian regimes because we believed they would help us fight Islamic terrorism, despite the fact that their authoritarianism was an inspiration to fanatical Islamists. If we are now treated with suspicion in places like Egypt and Tunisia, that is part of the explanation.

Finally, we stopped investing in our own infrastructure—think what $3 trillion could have done for roads, research, education, or even private investment, if a part of that sum had just been left in taxpayers' pockets—and we missed the chance to rethink our national energy policy. After 9/11, the president could have gone to the nation, declared an emergency, explained that wars would have to be fought and would have to be paid for—perhaps, appropriately, through a gasoline tax. He would have had enormous support. It's hard to remember now, but I could just about fill the tank of my car for $20 back in 2001. At the time, I'd have been happy to make it $21 if it helped the marines in Afghanistan. Instead, the president cut taxes and increased defense spending. We are only now paying the price.

 

Continue reading here.

 

The power of lonely: What we do better without other people around

(Tim Gabor for The Boston Globe)

By Leon Neyfakh

You hear it all the time: We humans are social animals. We need to spend time together to be happy and functional, and we extract a vast array of benefits from maintaining intimate relationships and associating with groups. Collaborating on projects at work makes us smarter and more creative. Hanging out with friends makes us more emotionally mature and better able to deal with grief and stress.

Spending time alone, by contrast, can look a little suspect. In a world gone wild for wikis and interdisciplinary collaboration, those who prefer solitude and private noodling are seen as eccentric at best and defective at worst, and are often presumed to be suffering from social anxiety, boredom, and alienation.

But an emerging body of research is suggesting that spending time alone, if done right, can be good for us — that certain tasks and thought processes are best carried out without anyone else around, and that even the most socially motivated among us should regularly be taking time to ourselves if we want to have fully developed personalities, and be capable of focus and creative thinking. There is even research to suggest that blocking off enough alone time is an important component of a well-functioning social life — that if we want to get the most out of the time we spend with people, we should make sure we’re spending enough of it away from them. Just as regular exercise and healthy eating make our minds and bodies work better, solitude experts say, so can being alone.

 

“There’s so much cultural anxiety about isolation in our country that we often fail to appreciate the benefits of solitude,” said Eric Klinenberg, a sociologist at New York University whose book “Alone in America,” in which he argues for a reevaluation of solitude, will be published next year. “There is something very liberating for people about being on their own. They’re able to establish some control over the way they spend their time. They’re able to decompress at the end of a busy day in a city...and experience a feeling of freedom.”

 

Solitude has long been linked with creativity, spirituality, and intellectual might. The leaders of the world’s great religions — Jesus, Buddha, Mohammed, Moses — all had crucial revelations during periods of solitude. The poet James Russell Lowell identified solitude as “needful to the imagination;” in the 1988 book “Solitude: A Return to the Self,” the British psychiatrist Anthony Storr invoked Beethoven, Kafka, and Newton as examples of solitary genius.

 

But what actually happens to people’s minds when they are alone? As much as it’s been exalted, our understanding of how solitude actually works has remained rather abstract, and modern psychology — where you might expect the answers to lie — has tended to treat aloneness more as a problem than a solution. That was what Christopher Long found back in 1999, when as a graduate student at the University of Massachusetts Amherst he started working on a project to precisely define solitude and isolate ways in which it could be experienced constructively. The project’s funding came from, of all places, the US Forest Service, an agency with a deep interest in figuring out once and for all what is meant by “solitude” and how the concept could be used to promote America’s wilderness preserves.

 

“Aloneness doesn’t have to be bad,” Long said by phone recently from Ouachita Baptist University, where he is an assistant professor. “There’s all this research on solitary confinement and sensory deprivation and astronauts and people in Antarctica — and we wanted to say, look, it’s not just about loneliness!”

 

Continue reading here.

 

The True Cost of 9/11: Trillions and trillions wasted on wars, a fiscal catastrophe, a weaker America.

The September 11, 2001, terror attacks by Al Qaeda were meant to harm the United States, and they did, but in ways that Osama bin Laden probably never imagined. President George W. Bush’s response to the attacks compromised America’s basic principles, undermined its economy, and weakened its security.

The attack on Afghanistan that followed the 9/11 attacks was understandable, but the subsequent invasion of Iraq was entirely unconnected to Al Qaeda – as much as Bush tried to establish a link. That war of choice quickly became very expensive – orders of magnitude beyond the $60 billion claimed at the beginning – as colossal incompetence met dishonest misrepresentation.

Indeed, when Linda Bilmes and I calculated America’s war costs three years ago, the conservative tally was $3-5 trillion. Since then, the costs have mounted further. With almost 50% of returning troops eligible to receive some level of disability payment, and more than 600,000 treated so far in veterans’ medical facilities, we now estimate that future disability payments and health-care costs will total $600-900 billion. But the social costs, reflected in veteran suicides (which have topped 18 per day in recent years) and family breakups, are incalculable.

Today, America is focused on unemployment and the deficit. Both threats to America’s future can, in no small measure, be traced to the wars in Afghanistan and Iraq. Increased defense spending, together with the Bush tax cuts, is a key reason why America went from a fiscal surplus of 2% of GDP when Bush was elected to its parlous deficit and debt position today. Direct government spending on those wars so far amounts to roughly $2 trillion – $17,000 for every US household – with bills yet to be received increasing this amount by more than 50%.

Moreover, as Bilmes and I argued in our book The Three Trillion Dollar War, the wars contributed to America’s macroeconomic weaknesses, which exacerbated its deficits and debt burden. Then, as now, disruption in the Middle East led to higher oil prices, forcing Americans to spend money on oil imports that they otherwise could have spent buying goods produced in the US.

But then the US Federal Reserve hid these weaknesses by engineering a housing bubble that led to a consumption boom. It will take years to overcome the excessive indebtedness and real-estate overhang that resulted.

Friday, September 9, 2011

High-performance capacitor could lead to better rechargeable batteries

By Lisa Zyga @ physorg.com 

Abstract


Zeolite-templated carbon is a promising candidate as an electrode material for constructing an electric double layer capacitor with both high-power and high-energy densities, due to its three-dimensionally arrayed and mutually connected 1.2-nm nanopores. This carbon exhibits both very high gravimetric (140–190 F g⁻¹) and volumetric (75–83 F cm⁻³) capacitances in an organic electrolyte solution. Moreover, such a high capacitance can be well retained even at a very high current of up to 20 A g⁻¹. This extraordinarily high performance is attributed to the unique pore structure.

The unique 3D array of nanopores in zeolite-templated carbon enables it to be used as an electrode for high-performance supercapacitors that have a high capacitance and quick charge time. Image credit: Hiroyuki Itoi, et al. ©2011 American Chemical Society.

In order to develop next-generation electric vehicles, solar energy systems, and other clean energy technologies, researchers need an efficient way to store the energy. One of the key energy storage devices for these applications and others is a supercapacitor, also called an electric double-layer capacitor. In a recent study, scientists have investigated the possibility of using a material called zeolite-templated carbon for the electrode in this type of capacitor, and found that the material’s unique pore structure greatly improves the capacitor's overall performance.

To store energy, the electric double-layer capacitor is charged by ions that migrate from a bulk solution to an electrode, where they are adsorbed. Before reaching the electrode’s surface, the ions have to travel through narrow nanopores as quickly and efficiently as possible. Basically, the quicker the ions can travel down these paths, the quicker the capacitor can be charged, resulting in a high rate performance. Also, the greater the adsorbed ion density in the electrode, the greater the charge that the capacitor can store, resulting in a high volumetric capacitance.

Recently, scientists have been testing materials with pores of various sizes and structures to try to achieve both quick ion transport and high adsorption ion density. But the two requirements are somewhat contradictory, since ions can travel more quickly through larger nanopores, but large nanopores make the electrode density low and thus decrease the adsorbed ion density.

The zeolite-templated carbon consists of nanopores that are 1.2 nm in diameter (smaller than most electrode materials) and that have a very ordered structure (whereas other pores can be disordered and random). The nanopores’ small size makes the adsorbed ion density high, while the ordered structure – described as a diamond-like framework – allows the ions to quickly pass through the nanopores. In a previous study, the researchers showed that zeolite-templated carbon with nanopores smaller than 1.2 nm cannot enable fast ion transport, suggesting that this size may provide the optimal balance between high rate performance and high volumetric capacitance.

In tests, the zeolite-templated carbon’s properties exceeded those of other materials, demonstrating its potential to be used as an electrode for high-performance electric double-layer capacitors.
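As a rough illustration of what those capacitance numbers mean, the short Python sketch below converts the gravimetric capacitance reported in the abstract into a specific energy using the standard relation E = 1/2 C V^2. The 2.5 V operating window is an assumption (typical for organic electrolytes, not stated in the article), and the result applies to the electrode material alone, not to a packaged capacitor.

    def specific_energy_wh_per_kg(capacitance_f_per_g, voltage_v=2.5):
        # E = 1/2 * C * V^2, converted from joules per gram to watt-hours per kilogram.
        joules_per_gram = 0.5 * capacitance_f_per_g * voltage_v ** 2
        return joules_per_gram * 1000.0 / 3600.0

    # Gravimetric capacitance range reported for zeolite-templated carbon (F/g).
    for c in (140, 190):
        print(f"{c} F/g -> ~{specific_energy_wh_per_kg(c):.0f} Wh/kg (electrode material only)")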

More information: Hiroyuki Itoi, et al. “Three-Dimensionally Arrayed and Mutually Connected 1.2-nm Nanopores for High-Performance Electric Double Layer Capacitor.” Journal of the American Chemical Society. DOI:10.1021/ja108315p

 

The problem with American remakes of British shows

Matt Zoller Seitz @ Salon.com says that the planned NBC version of the brilliant "Prime Suspect" shows why network TV shouldn't mess with great imports.

Prime Suspect, Jane Tennison (Helen Mirren)

 


Pictured: Maria Bello as Det. Jane Timoney

The idea of remaking the story of Detective Chief Inspector Jane Tennison (Helen Mirren) for American network TV seems wrongheaded. The problem is the venue. The U.S. broadcast TV model -- with its 42-minutes-a-week, 22-weeks-a-year format, frequent commercial interruptions, and still-oppressive content restrictions -- is the enemy of every fine quality that the original "Prime Suspect" possessed.

 

More important, American TV is averse to letting race, class, politics and other touchy elements drive stories because it might make viewers and sponsors skittish. That's why the American crime show's favorite bad guy is the serial killer, a mythologically exaggerated monster whose existence lets filmmakers titillate and terrify while declining to engage with society at large.

Jane Tennison never dealt with effete, wisecracking, Hannibal Lecter-type bogeymen. She lived in reality. Over 15 years, "Prime Suspect" dealt frankly with sex, sexism, race, class and the intrusion of politics into police work. It did so subtly, prizing plausibility and never delivering a jolt without reason. And it treated time as an ally instead of an enemy. One of the pleasures of "Prime Suspect" was the opportunity to re-engage with it after a long break and discover that Tennison had risen in rank or settled into a new job or a new relationship. The gaps between installments enhanced the sense that you were seeing excerpts from a life in progress.

 

You can't do any of that on NBC. You can't re-create or even approximate "Prime Suspect" in a commercial broadcast network series that airs 22 episodes a year. The material can't breathe in the same way. And forget about being unflinching. What passes for unflinching on NBC is "Law & Order: Criminal Intent," an entertaining but mostly absurd procedural that bears about as much qualitative relation to "Prime Suspect" as "Training Day" does to "Serpico." And don't even get me started on TNT's "The Closer," a fitfully entertaining series that has wrongheadedly been described as an American answer to "Prime Suspect," presumably because its main character is a strong-willed female detective. (It's not a subtle psychological drama, it's a suck-up-to-the-star spectacle about a mercurial Southern belle following her muse and dazzling the nonbelievers. "Prime Suspect" writes in plain script, "The Closer" in big block letters.) Not many American cop shows, broadcast or cable, have engaged with reality as directly as "Prime Suspect" -- and the best of those were produced not in Hollywood, but in Baltimore:  "Homicide: Life on the Street," "The Corner," and "The Wire."

 

My friend Keith Uhlich, a Time Out New York film critic and a devotee of series TV, has a theory that broadcast network shows provide viewers with two sources of drama. One is the conflict between characters. The other is the conflict between the series and the system that produces it.

When we watched the Fox network series "The X-Files," for example, the main draw was Fox Mulder and Dana Scully vs. a vast and unknowable conspiracy. The secondary draw was series creator Chris Carter and company vs. the Fox network and the network TV assembly line generally. Every time you watched that series -- or "NYPD Blue," or "Lost," or "24," or "ER," or any other U.S. network program of note -- there was an extra-dramatic sense of anticipation. You wanted to see if the writers would manage to transcend network content restrictions, format limitations, behind-the-scenes meddling by executives and sponsors -- not to mention the pitiless pressure of having to churn out 22 episodes a year even if they didn't have enough stories to justify it -- and produce great TV. Or just good TV.

Judge for yourself:

 

 

Autism’s First Child

As new cases of autism have exploded in recent years—some form of the condition affects about one in 110 children today—efforts have multiplied to understand and accommodate the condition in childhood. But children with autism will become adults with autism, some 500,000 of them in this decade alone. What then? Meet Donald Gray Triplett, 77, of Forest, Mississippi. He was the first person ever diagnosed with autism. And his long, happy, surprising life may hold some answers.

By John Donvan and Caren Zucker

Image credit: Miller Mobley/Redux

In 1951, a Hungarian-born psychologist, mind reader, and hypnotist named Franz Polgar was booked for a single night’s performance in a town called Forest, Mississippi, at the time a community of some 3,000 people and no hotel accommodations. Perhaps because of his social position—he went by Dr. Polgar, had appeared in Life magazine, and claimed (falsely) to have been Sigmund Freud’s “medical hypnotist”—Polgar was lodged at the home of one of Forest’s wealthiest and best-educated couples, who treated the esteemed mentalist as their personal guest.

Polgar’s all-knowing, all-seeing act had been mesmerizing audiences in American towns large and small for several years. But that night it was his turn to be dazzled, when he met the couple’s older son, Donald, who was then 18. Oddly distant, uninterested in conversation, and awkward in his movements, Donald nevertheless possessed a few advanced faculties of his own, including a flawless ability to name musical notes as they were played on a piano and a genius for multiplying numbers in his head. Polgar tossed out “87 times 23,” and Donald, with his eyes closed and not a hint of hesitation, correctly answered “2,001.”

Indeed, Donald was something of a local legend. Even people in neighboring towns had heard of the Forest teenager who’d calculated the number of bricks in the facade of the high school—the very building in which Polgar would be performing—merely by glancing at it.

According to family lore, Polgar put on his show and then, after taking his final bows, approached his hosts with a proposal: that they let him bring Donald with him on the road, as part of his act.

Donald’s parents were taken aback. “My mother,” recalls Donald’s brother, Oliver, “was not at all interested.” For one, things were finally going well for Donald, after a difficult start in life. “She explained to [Polgar] that he was in school, he had to keep going to classes,” Oliver says. He couldn’t simply drop everything for a run at show business, especially not when he had college in his sights.

But there was also, whether they spoke this aloud to their guest or not, the sheer indignity of what Polgar was proposing. Donald’s being odd, his parents could not undo; his being made an oddity of, they could, and would, prevent. The offer was politely but firmly declined.

What the all-knowing mentalist didn’t know, however, was that Donald, the boy who missed the chance to share his limelight, already owned a place in history. His unusual gifts and deficits had been noted outside Mississippi, and an account of them had been published—one that was destined to be translated and reprinted all over the world, making his name far better-known, in time, than Polgar’s.

His first name, anyway.


Video: The authors reveal how they tracked down Donald and discuss the significance of his long, happy life.

 

Donald was the first child ever diagnosed with autism. Identified in the annals of autism as “Case 1 … Donald T,” he is the initial subject described in a 1943 medical article that announced the discovery of a condition unlike “anything reported so far,” the complex neurological ailment now most often called an autism spectrum disorder, or ASD. At the time, the condition was considered exceedingly rare, limited to Donald and 10 other children—Cases 2 through 11—also cited in that first article.

That was 67 years ago. Today, physicians, parents, and politicians regularly speak of an “epidemic” of autism. The rate of ASDs, which come in a range of forms and widely varying degrees of severity—hence spectrum—has been accelerating dramatically since the early 1990s, and some form of ASD is now estimated to affect one in every 110 American children. And nobody knows why.

There have always been theories about the cause of autism—many theories. In the earliest days, it was an article of faith among psychiatrists that autism was brought on by bad mothers, whose chilly behavior toward their children led the youngsters to withdraw into a safe but private world. In time, autism was recognized to have a biological basis. But this understanding, rather than producing clarity, instead unleashed a contentious debate about the exact mechanisms at work. Differing factions argue that the gluten in food causes autism; that the mercury used as a preservative in some vaccines can trigger autistic symptoms; and that the particular measles-mumps-rubella vaccine is to blame. Other schools of thought have portrayed autism as essentially an autoimmune response, or the result of a nutritional deficiency. The mainstream consensus today—that autism is a neurological condition probably resulting from one or more genetic abnormalities in combination with an environmental trigger—offers little more in the way of explanation: the number of genes and triggers that could be involved is so large that a definitive cause, much less a cure, is unlikely to be determined anytime soon. Even the notion that autism cases are on the rise is disputed to a degree, with some believing that the escalating diagnoses largely result from a greater awareness of what autism looks like.

There is no longer much dispute, however, about the broad outlines of what constitutes a case of autism. The Diagnostic and Statistical Manual of Mental Disorders—the so-called bible of psychiatry—draws a clear map of symptoms. And to a remarkable degree, these symptoms still align with those of one “Donald T,” who was first examined at Johns Hopkins University, in Baltimore, in the 1930s, the same boy who would later amaze a mentalist and become renowned for counting bricks.

In subsequent years, the scientific literature updated Donald T’s story a few times, a journal entry here or there, but about four decades ago, that narrative petered out. The later chapters in his life remained unwritten, leaving us with no detailed answer to the question Whatever happened to Donald?

There is an answer. Some of it we turned up in documents long overlooked in the archives of Johns Hopkins. But most of it we found by tracking down and spending time with Donald himself. His full name is Donald Gray Triplett. He’s 77 years old. And he’s still in Forest, Mississippi. Playing golf.

The question that haunts every parent of a child with autism is What will happen when I die? This reflects a chronological inevitability: children with autism will grow up to become adults with autism, in most cases ultimately outliving the parents who provided their primary support.

Then what?

It’s a question that has yet to grab society’s attention, as the discussion of autism to date has skewed, understandably, toward its impact on childhood. But the stark fact is that an epidemic among children today means an epidemic among adults tomorrow. The statistics are dramatic: within a decade or so, more than 500,000 children diagnosed with autism will enter adulthood. Some of them will have the less severe variants—Asperger’s syndrome or HFA, which stands for “high-functioning autism”—and may be able to live more independent and fulfilling lives. But even that subgroup will require some support, and the needs of those with lower-functioning varieties of autism will be profound and constant.

Continue Reading here.

Thursday, September 8, 2011

In Case of Tech Bubble, Do Not Break Glass

Even if bloated valuations of Facebook and Groupon point to another bubble bound to burst, the Fed shouldn't head it off but prepare for the fallout

Latest internet valuations

POSITIVE SIDE OF SPECULATION

Among those benefits are entrepreneurial risk-taking and the animal spirits of innovation. "I do not feel confident that a policy which, in the pursuit of stability of prices, output, and employment, had nipped in the bud the railway boom of the forties, or the American railway boom of 1869-71, or the German electrical boom of the [1890s], would have been on balance beneficial to the populations concerned," wrote the British economist Dennis Robertson in 1926. Arthur Rolnick, former head of research at the Federal Reserve Bank of Minneapolis and currently senior fellow at the Humphrey Institute of Public Affairs, agrees with the Robertsonian point of view: "Sure, there's speculation, but this is how we want markets to work," Rolnick says. "It's the way we innovate and bring new products to market."

Not all innovations are desirable, of course, as we've seen recently. Much of the whiz-bang financial technology that made the housing bubble possible turned out to be toxic. Still, even if economists can agree on a set of statistical guideposts for determining the madness of crowds, monetary policy is often too blunt an instrument. Sure, the Fed can always pop a bubble by sharply hiking the fed funds rate. But that would also deter angel investors, venture capitalists, and other intrepid investors from funding profitable ideas bubbling up from university labs and corporate research departments.

"The Fed raising the fed funds rate to deal with a bubble in one sector of the economy isn't very smart," says David Laidler, economist at the University of Western Ontario. "Whether the problem is in high tech or in housing, you're using an economy-wide instrument to deal with it, which isn't wise."

THE CANADIAN MODEL

The solution, many economists agree, is for the Fed to place a far greater emphasis on regulatory initiative than on monetary policy when confronting bubbles.

Take the new mortgage rules announced by Canadian finance minister Jim Flaherty on Jan. 17. For the second time in less than a year, the Canadian government acted to lean against ballooning consumer debt in a low interest rate environment to "protect the stability of the economy." In sharp contrast, when the stock market entered nosebleed territory in the late 1990s, the Fed ignored calls to raise margin requirements, a targeted regulatory move.

During the real estate bubble, federal regulators had plenty of tools at their disposal to cut short the widespread abuses. Edward Gramlich, the late Federal Reserve governor, repeatedly warned about abuses in the subprime mortgage market and urged that the Greenspan Fed unleash investigators on lenders trolling for customers in poor neighborhoods and examine in detail their relations with major financial institutions. The Greenspan Fed, enamored with the elixir of deregulation, ignored his advice. "Regulators didn't do their job," says Rolnick.

Indeed, the Securities & Exchange Commission should step up its scrutiny of private investors and company valuations in the growing trading market for private share offerings of such marquee high-tech companies as Facebook, Twitter, and LinkedIn. Even more important, the Fed needs to exercise the extra oversight powers it got in the Dodd-Frank financial services reform legislation last July. (And Congress should avoid watering down the rules and regulations.)

Here's the thing: Plenty of mobile Internet companies, social networking firms, and other information technology companies will get funded over the next few years. Many of them will fail. That's capitalism. Regulators need to concentrate on preventing major financial institutions from feeding the frenzy and putting the taxpayer at risk. That's regulatory prudence. Stay tuned.

 

Is this the start of the second dotcom bubble?

 

 

Universe packed with hidden stars

Source

"If, say, one per cent of red dwarfs have an Earth-like planet around them, this one percent now applies to a number that is three times larger than we had so far assumed."

Starry Night

This photo taken by the Hubble Space Telescope shows a cluster of diverse galaxies. Source: AP

The Universe might hold three times as many stars as was previously thought, a new cosmic census of eight galaxies beyond the Milky Way suggested.

US astronomers said they had discovered that small, dim stars known as red dwarfs are more plentiful than previously estimated.

"A best guess at the total number of stars in the universe is about 100 sextillion -- that is, a '1' followed by 23 zeros. We now believe this estimate is perhaps too low by a factor of about three," said Dr Pieter van Dokkum, of Yale University, who led the research.

Even that estimate remains uncertain, he added, because scientists do not know with confidence how many galaxies there are in the universe.

The findings, published in the journal Nature, also boost the estimated number of potential planets.

"There are possibly trillions of Earths orbiting these stars," Dr van Dokkum said.

The revised estimate of 300 sextillion stars in the universe emerged from a study of eight elliptical galaxies that are between 50 million and 300 million light years away.

In previous observations of the galaxies, it was impossible to detect red dwarf stars, as they are only about one-tenth of the mass of the sun and 1000 times fainter.

Powerful new instruments at the Keck Observatory in Hawaii allowed Dr van Dokkum's team to work out that the proportion of red dwarfs in elliptical galaxies is about 20 times greater than in the Milky Way.
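
For readers who like to see the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It simply restates the figures quoted above; the variable names are purely illustrative.

    # Rough arithmetic behind the revised star count (illustrative only).
    SEXTILLION = 10**21                          # short-scale sextillion: 1 followed by 21 zeros

    old_estimate = 100 * SEXTILLION              # "a '1' followed by 23 zeros"
    correction_factor = 3                        # red dwarfs imply roughly three times more stars
    new_estimate = old_estimate * correction_factor

    print(f"Old estimate: {old_estimate:.0e} stars")  # 1e+23
    print(f"New estimate: {new_estimate:.0e} stars")  # 3e+23, i.e. 300 sextillion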

You Snooze, You Lose: More Weekend Sleep Cuts Kids' Obesity Risk

From healthland.time.com

 

Kids sleeping late on the weekends? Let ‘em — they're not being lazy; they're cutting their risk of obesity, according to new research published online today in the journal Pediatrics.


Ideally, parents should strive for a constant bedtime and wake time. But since that's not always realistic, it's good to know that children who catch up on sleep during weekends and vacations are better able to counteract the adverse effects of insufficient sleep during the week and reduce their risk of obesity. 

Researchers at the University of Chicago analyzed the sleep patterns and BMI of 308 children between the ages of 4 and 10, dividing them into nine groups and using wrist actigraphs for one week to determine when they were asleep. The group of children with normal sleep patterns had the lowest risk of obesity and metabolic complications.

On average, the children slept eight hours each night, less than they should be getting. Kids ages 5 to 8 should sleep 9 to 10 hours, but children — like adults in our society — are largely sleep-deprived.

“We tend to disrespect sleep,” says David Gozal, the lead author and chair of the pediatrics department at the University of Chicago. “We're not aware there's a very substantial price to pay for shortening the duration of sleep and for creating very irregular sleep schedules. Together, these create a much higher risk of obesity."

The worst combination? Irregular sleep and not enough of it. Those kids with the shortest, most irregular sleep had a 4.2-fold increased risk of obesity. When this group got more sleep on the weekends, their risk decreased to 2.8-fold — better, but still nowhere near as good as the normal sleepers.

The obese kids in the study slept less and more irregularly on weekends and were less likely to compensate for not getting enough sleep on weekdays, which added up to metabolic problems. Short, irregular sleep increased the risk of inflammation and altered glucose sensitivity, and resulted in a rise in lipids.

“The point is regularity,” says Gozal. “If you are a regular catch-up sleeper on the weekends, that can have a beneficial effect if you are a short sleeper during the week. But if you have irregular, short/long sleep during the week and you continue that during the weekend, that puts you at worse risk.”

There's plenty of buzz about childhood obesity but not much chatter about the importance of sleep, says Gozal. Educating families about the significance of sleep through public health campaigns that emphasize the link could breed healthier kids.

“The best thing to globally reduce the risk for obesity is to sleep long during the week and during the weekend and have regular sleep,” says Gozal.

Read more: http://healthland.time.com/2011/01/24/you-snooze-you-lose%e2%80%a6weight-that-is-more-sleep-on-weekends-could-stave-off-childhood-obesity/#ixzz1X7p28ByS

Wednesday, September 7, 2011

Space over Time

BY MIKE ORCUTT @ technologyreview.com

 


Human exploration is the most visible use of spaceflight, but business and defense satellites fill the sky.

The retirement of the space shuttles marks the end of NASA's human spaceflight program, at least for now. But human missions funded by the U.S. government have represented only a small part of the action in space.

Of the 7,000 spacecraft that have been launched into orbit or beyond, more than half were defense satellites used for such purposes as communication, navigation, and imaging. (The Soviet Union sent up a huge number, partly because its satellites tended to be much shorter-lived than those from the United States.) In the 1970s, private companies began increasingly adding to the mix, launching satellites for telecommunications and broadcasting.

This graphic groups payloads by the nationality of the owner. A satellite, a capsule of cosmonauts, or a deep-space probe would each count as one payload. The data, which run through July 2011, were drawn from hundreds of sources, including space agency documents, academic journals, and interviews. They were compiled by Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics and author of Jonathan's Space Report, a newsletter that tracks launches.

[Source]

 

Automakers Show Interest in an Unusual Engine Design

The Scuderi engine could substantially reduce fuel consumption by storing compressed air.

 

BY KEVIN BULLIS

$50 million engine: It took Scuderi Group most of the $65 million it’s raised so far to develop just one engine, the prototype shown here. It’s a split-cycle two-cylinder engine, in which one cylinder compresses air and the other combusts a fuel-air mixture. 

Credit: Scuderi

Scuderi's Split-Cycle Engine. Credit: Scuderi Group

An engine development company called the Scuderi Group recently announced progress in its effort to build an engine that can reduce fuel consumption by 25 to 36 percent compared to a conventional design. Such an improvement would be roughly equal to a 50 percent increase in fuel economy.
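
The two figures line up because fuel economy (distance per unit of fuel) is the reciprocal of fuel consumption (fuel per unit of distance), so a roughly one-third cut in consumption works out to about a 50 percent gain in economy. A minimal sketch of that conversion, using the article's percentages; the helper function below is just for illustration.

    # Convert a cut in fuel consumption into the equivalent gain in fuel economy.
    # Economy (distance per unit fuel) is the reciprocal of consumption (fuel per distance).
    def economy_gain(consumption_reduction: float) -> float:
        """Fractional increase in fuel economy for a fractional cut in consumption."""
        return 1.0 / (1.0 - consumption_reduction) - 1.0

    for cut in (0.25, 0.33, 0.36):
        print(f"{cut:.0%} less fuel -> {economy_gain(cut):+.0%} fuel economy")
    # 25% less fuel -> +33% fuel economy
    # 33% less fuel -> +49% fuel economy
    # 36% less fuel -> +56% fuel economy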

Sal Scuderi, president of the Scuderi Group, which has raised $65 million since it was founded in 2002, says that nine major automotive companies have signed nondisclosure agreements that allow them access to detailed data about the engine. Scuderi says he is hopeful that at least one of the automakers will sign a licensing deal before the year is over. Historically, major automakers have been reluctant to license engine technology because they prefer to develop the engines themselves as the core technology of their products. But as pressure mounts to meet new fuel-economy regulations, automakers have become more interested in looking at outside technology.

A conventional engine uses a four-stroke cycle: air is pulled into the chamber, the air is compressed, fuel is added and a spark ignites the mixture, and finally the combustion gases are forced out of the cylinder. In the Scuderi engine, known as a split-cycle engine, these functions are divided between two adjacent cylinders. One cylinder draws in air and compresses it. The compressed air moves through a tube into a second cylinder, where fuel is added and combustion occurs.

Splitting these functions gives engineers flexibility in how they design and control the engine. In the case of the Scuderi engine, there are two main changes from what happens in a conventional internal-combustion engine. The first is a change to when combustion occurs as the piston moves up and down in the cylinder. The second is the addition of a compressed-air storage tank.
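
To make that division of labor concrete, here is a toy sketch, a conceptual outline rather than anything resembling real engine-control code, of how the familiar four strokes map onto the two cylinders of a split-cycle layout. The names and groupings below are illustrative assumptions, not Scuderi's own terminology.

    # Toy illustration of how a split-cycle engine divides the classic four strokes.
    # Purely conceptual; not engine-control software.
    CONVENTIONAL_CYCLE = ["intake", "compression", "power (combustion)", "exhaust"]

    SPLIT_CYCLE = {
        "compression cylinder": ["intake", "compression"],         # draws in air and squeezes it
        "crossover passage": ["transfer of compressed air"],       # the tube linking the cylinders
        "power cylinder": ["power (combustion)", "exhaust"],       # fuel added, mixture ignited
        "air storage tank": ["holds compressed air for later use"] # the Scuderi design's addition
    }

    print("Conventional engine (one cylinder does everything):")
    print("  " + " -> ".join(CONVENTIONAL_CYCLE))

    print("Split-cycle engine (functions divided across components):")
    for component, roles in SPLIT_CYCLE.items():
        print(f"  {component}: {', '.join(roles)}")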

 

Continue reading here.

 

 

 

Researchers use virtual-reality avatars to create 'out-of-body' experience

Volunteers experienced the virtual bodies as if they were their own, with possible applications in computer games or in transporting people digitally to other locations

Science correspondent | guardian.co.uk

Avatar
In the film Avatar, human minds are transferred into synthetic bodies. Photograph: Sportsphoto Ltd./Allstar

In the film Avatar, explorers on the planet Pandora transmit their minds into alternative bodies. Now scientists have come a step closer to recreating the experience in the lab.

They have successfully "projected" people into digital avatars that can move around a virtual environment. The participants experienced the digital body as if it were their own, even if the virtual humans were of the opposite sex.

The research is aimed at understanding how the brain integrates information coming from the senses in order to determine the position of the body in space. But the results could also be used in next generation computer games or for people who want to transport themselves, digitally, to other locations.

Olaf Blanke, a neurologist with the Brain Mind Institute at Ecole Polytechnique Fédérale de Lausanne in Switzerland, who led the work, used a virtual-reality (VR) setup with cameras linked to a head-mounted video display to achieve his results. He presented his work on Thursday at the annual meeting of the American Association for the Advancement of Science.

It is an extension of previous work by the same researchers that aims to recreate out-of-body experiences. These are defined as situations where a person who is awake sees their own body from somewhere outside themselves. This can occur when brain function has been damaged through a stroke, epilepsy or drug abuse. The most common cases happen in traumatic events such as car accidents or during operations.

In those experiments, carried out by Blanke and colleagues in 2007, volunteers wore goggles containing a video screen for each eye, fed by a pair of cameras behind the participant. Because the two images were combined by the brain into a single image, they saw a 3D image of their own back.

Experimenters then moved a plastic rod towards a location just below the cameras, in their field of view, while the participant's real chest was simultaneously touched in the corresponding position. The participants reported feeling that they were located where the cameras had been placed, watching a body in front of them that belonged to someone else.

Continue reading here.