21 December 2013

Science Journal Nature Names LUX as a Scientific Highlight of the Year

UCSB’s contribution is at the core: a 72,000-gallon tank of ultrapure water

Photomultiplier tubes in the Large Underground Xenon dark-matter experiment, deep in a mine in South Dakota. Credit: LUX / Carlos H. Faham

The premier European science journal Nature opened its annual year-end special “365 DAYS: the year in science” with the Large Underground Xenon (LUX) experiment, the most sensitive dark matter detector in the world. A team of scientists from UC Santa Barbara has been instrumental in the design, construction and filling of the sophisticated water tank that houses the LUX experiment.
LUX is situated 4,850 feet underground at South Dakota’s Sanford Lab where few cosmic ray particles can penetrate. The detector is further protected from background radiation emanating from the surrounding rock by immersion in the tank of ultrapure water.
Dark matter is currently known only through its gravitational pull on conventional matter seen in astronomical observations. LUX, which contains more than 300 kilograms of liquid xenon, seeks to catch dark matter particles as they pass through Earth and hit xenon nuclei. This collision would cause flashes of light detectable by LUX’s 122 photomultiplier tubes.
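For a sense of scale (a back-of-the-envelope estimate using standard elastic-scattering kinematics with illustrative numbers, not figures from the LUX collaboration), the energy such a collision could deposit in a xenon nucleus is tiny:

```latex
% Maximum recoil energy for elastic dark-matter-nucleus scattering.
% Illustrative values assumed: m_chi = 100 GeV/c^2, v = 230 km/s.
E_R^{\max} = \frac{2\,\mu^2 v^2}{m_N}, \qquad \mu = \frac{m_\chi m_N}{m_\chi + m_N}
% With m_chi = 100 GeV/c^2 and m_N(Xe) ~ 122 GeV/c^2, mu ~ 55 GeV/c^2;
% with v ~ 230 km/s ~ 7.7e-4 c this gives E_R^max ~ 30 keV, which is why
% very sensitive photomultiplier tubes and very low backgrounds are needed.
```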
According to Nature’s year in review, while LUX “did not see any particles of elusive dark matter flying through Earth…it put the tightest constraints yet on the mass of dark-matter particles and their propensity to interact with visible matter.”
“It is a tremendous honor for the LUX result to be highlighted by Nature, the venerable scientific journal with the highest impact in terms of citations, and one of the very few journals that reports cutting-edge research across a wide variety of fields,” said UCSB physicist Harry Nelson, principal investigator of the UCSB LUX team. “Progress toward directly detecting the dark matter of the universe has been exciting lately, and we hope that in the near future Nature can highlight our success in finally ‘touching’ that mysterious stuff, which makes up 85 percent of matter in our universe.”
“We were pleased and surprised to get such a strong response from our first paper, which was based on a short run of 85 days with the experiment,” said UCSB LUX team principal investigator Mike Witherell, vice chancellor for research and a professor in the Department of Physics. “In 2014 we are going to be conducting a 300-day run from which we will get even more sensitive results.”
In addition, the proposal for an experiment 20 times the size of LUX has just been submitted. This experiment, called LUX-Zeplin (LZ), would contain 7 tons of xenon inside the same 72,000-gallon tank of pure water used by LUX. The LZ collaboration has just named UCSB’s Nelson to take over as spokesperson in the summer of 2014. “I’ve been lucky to be chosen by my research colleagues on this successor experiment to LUX where we hope to achieve more than 100 times better sensitivity.”  
“Nelson will be stepping down from his current role as chair of the LUX executive board to lead the construction project for the LZ instrument, coordinating the efforts of 25 participating institutions in Europe and the U.S.,” Witherell said. “We think we have a good chance of a big discovery with LZ. We hope that these early LUX results will strengthen our case.”
In addition to principal investigators Nelson and Witherell, the UCSB LUX Team includes postdoctoral scholar María del Carmen Carmona Benítez, graduate students Curt Nehrkorn and Scott Haselschwardt and engineers Susanne Kyre and Dean White.
 The UCSB LUX group is supported by the Department of Energy Office of High Energy Physics and by UCSB.

Contact Info: 

Julie Cohen
julie.cohen@ucsb.edu
(805) 893-7220

20 December 2013

For Neuroscientist Charles Zuker, Brain Science is a Matter of Taste

Charles Zuker,
Credit: Columbia University

Charles Zuker has devoted his career to unraveling the neurobiology of the senses—especially taste, but he is quick to tell you that it’s not because of some inherent fascination with bitter, sweet, and salty truths. “The fact is that we don’t study the senses simply to understand the senses,” says Zuker, a professor of biochemistry and molecular biophysics and of neuroscience. “We study the senses as an entry point—a tractable problem—in dissecting the mysteries of the brain.” And taste, he notes, is a particularly elegant system for plumbing those mysteries.


The senses are the conduits of the external world into our brain. What we see, hear, touch, taste, and smell are processed in the brain and become the basis for an internal representation of the outside world. “The biggest challenge in the field of sensory biology is to understand how you make that representation, how you store it, how you recall it, how you modify it by experience and emotion, and how it all comes together to orchestrate and guide behavioral choices,” says Zuker, who is on the faculty of Columbia's new Mortimer B. Zuckerman Mind Brain Behavior Institute as well as the University's Kavli Institute for Brain Science.

The sense of taste offers some unique advantages for scientists trying to solve these puzzles. First, it relies on just five basic inputs (albeit in myriad combinations): sweet, sour, bitter, salty, and umami—a meat-like flavor first identified in 1908 by Japanese scientist Kikunae Ikeda. Compare this limited palette with the vast array of smells, sights, and sounds that our other senses must process. Second, each of the five basic flavors has intrinsic value. From birth, we love things that are sweet, slightly salty, and umami, and we reject things that are bitter, sour, or very salty (though later in life, we may override these aversions and acquire a taste for coffee, grapefruit, and anchovies).
Because of our inborn preferences, says Zuker, “the taste system affords a beautiful platform for understanding how the brain encodes innate attractive and aversive behaviors, how it transforms a signal into like versus dislike.” A 2013 study by Zuker and his associates found, for example, that the cellular and molecular basis for rejecting extremely salty foods involves activating the aversive pathways for sour and bitter tastes. Earlier work by Zuker showed that each of the basic flavors can be mapped to specific neurons in the brain.
Zuker and the researchers in his lab often work with genetically altered mice to explore how the brain processes taste and guides behavior. “You can manipulate neural circuits in mice to change the perception of taste such that the mice now prefer bitter to sweet or find sweet no more appealing than water,” he says. His research also looks at how taste may be modified by internal states such as hunger, satiety, emotion, and expectation. “Each of these internal states alters the way something tastes,” he observes. “When you are very hungry, something ordinary can be exquisite.” Fear and sadness can have the opposite effect. “Severely depressed patients often claim that food seems unpalatable.”
The grandchild of Eastern European Jews who fled to Chile to escape the Holocaust, Zuker took an early interest in biology. He remembers playing with microscopes as a small boy and receiving a binocular microscope as a bar mitzvah gift. “For the first time, I could look at minute things with both eyes. That opened up a whole new world."
Zuker began college at age 16 and entered graduate school at MIT at 19. By then he had fallen under the spell of molecular biology. He had a long and successful career at the University of California, San Diego; he has been a Howard Hughes Medical Institute Investigator since 1989 and a Senior Fellow at the Janelia Farm Research Campus since its inception. In 2009, he moved to Columbia in order to be part of what is now the Mortimer B. Zuckerman Mind Brain Behavior Institute, a cross-disciplinary center for brain science that will occupy the first building on Columbia’s new Manhattanville campus. “I have no doubt that after putting together such a remarkable group of people in Renzo Piano’s extraordinary Greene Science Center, something magical will happen,” says Zuker.
Like most Zuckerman Institute investigators, Zuker is focused on basic science, but he knows his work could have valuable applications. The elderly often lose interest in eating in part because their taste receptor cells have deteriorated. “This is not a trivial problem,” says Zuker, and a better understanding of the taste system could help yield solutions. It might also help patients with eating disorders, and could have applications in battling diabetes and obesity. For instance, “Can one find ways to make a little bit of sweet taste like a lot of sweet?” he asks.
But the main driver for Zuker’s work is to understand the brain, the most complex organ on Earth. “Faith, happiness, hunger, hope, creativity, ingenuity—all of that is nothing but electrical signals, the only language the brain understands. How, for example, do we transform fear into courage? Well, it happens in your brain and it’s encoded somewhere in those 100 billion neurons.”
—by Claudia Wallis

19 December 2013

Research reveals simple solution to help address deadly problem


GREENSBORO, NC – During flu and cold season, one of the most frequent hygiene hints is hand washing.
Sounds simple, right? The problem is how to get people to follow that advice.
Research at The University of North Carolina at Greensboro reveals a practical solution, which has been published online in the American Journal of Public Health. The study will be published in the February issue of AJPH.
“There are simple changes to public environments, restrooms in particular, that can have a significant and positive impact on hand-washing behavior,” said Dr. Eric W. Ford, a professor of healthcare in the Bryan School of Business and Economics at UNCG. “We found that the simple, visual cue of a paper towel being displayed increases both hand-washing rates and the use of soap.”
Here’s how the study was conducted:
Towel dispensers in public restrooms at UNCG were set to present a towel either with or without activation by users; the two modes were set to operate alternately for 10 weeks. Wireless sensors were used to record entry into bathrooms. Towel and soap consumption rates were checked weekly. There were 97,351 hand-washing opportunities across all restrooms.
The results: A visual cue can increase hand-washing compliance in public facilities. Towel use was 22.6 percent higher and soap use was 13.3 percent higher when the dispenser presented the towel without user activation than when activation was required.
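As an illustration of how such a comparison might be computed from weekly consumption logs, here is a minimal sketch (the figures and column layout are invented for illustration; this is not the study’s data or analysis code):

```python
# Hypothetical weekly logs: (dispenser_mode, towels_used, soap_doses, entries_recorded).
# All figures below are invented for illustration only.
weekly_logs = [
    ("towel_presented",     5210, 3100, 9600),
    ("activation_required", 4160, 2680, 9450),
    ("towel_presented",     5005, 3050, 9300),
    ("activation_required", 4090, 2710, 9380),
]

def per_entry_rates(mode):
    """Average towel and soap use per recorded restroom entry for one dispenser mode."""
    rows = [r for r in weekly_logs if r[0] == mode]
    entries = sum(r[3] for r in rows)
    return sum(r[1] for r in rows) / entries, sum(r[2] for r in rows) / entries

towel_auto, soap_auto = per_entry_rates("towel_presented")
towel_manual, soap_manual = per_entry_rates("activation_required")

print(f"Towel use: {100 * (towel_auto / towel_manual - 1):.1f}% higher when the towel is presented")
print(f"Soap use:  {100 * (soap_auto / soap_manual - 1):.1f}% higher when the towel is presented")
```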
“It’s so simple that it seems almost intuitive, but the most important study implication is that public facility managers can easily and inexpensively improve the public health of the communities that they serve,” said Ford.
It’s an important health issue, Ford said, noting that in the U.S., deaths from flu-related causes have ranged from 3,000 to 49,000 per year (http://www.cdc.gov/flu/about/disease/us_flu-related_deaths.htm).

The research grew out of UNCG's sustainability committee, which posed the question, “Does saving towels come at the expense of good hand hygiene?” The study was funded by the Bryan School and was conducted in the spring of 2012.

Ford was the study’s principal investigator. Brian Boyer, who was earning a master’s degree in public health during the study, managed data collection and entry from the sites. Colleagues who helped with the study design and analysis were Timothy R. Huerta of the Ohio State University College of Medicine and Nir Menachemi of the School of Public Health at the University of Alabama at Birmingham.

New actors in the Arctic ecosystem: Atlantic amphipods are now reproducing in Arctic waters


Biologists from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) have for the first time shown that amphipods from the warmer Atlantic are now reproducing in Arctic waters to the west of Spitsbergen.  This surprising discovery indicates a possible shift of the Arctic zooplankton community, scientists report in the journal Marine Ecology Progress Series.  The primary victims of this “Atlantification” are likely to be marine birds, fish and whales.  The reason is that the migrating amphipods measure around one centimetre, and so are smaller than the respective Arctic species; this makes them less nutritious prey. 

Amphipods have a preference which made it easy for AWI biologists to recognise these changes.  This is because the sea dwellers, which are classed as zooplankton, would appear to like hiding.  “Their favourite hiding places apparently include our sediment traps which have been suspended for 13 years in HAUSGARTEN, the AWI long-term observatory in the Fram Strait.  We had originally anchored our funnel-shaped traps at a depth of some 300 metres there in the West Spitsbergen Current in order to catch downward floating material such as algae or excrement from zooplankton.  However, from the start we also found several amphipods in the traps.  The sample containers are full to the brim, especially in summer months.  We therefore believe that the animals are actively swimming into the traps”, states AWI plankton specialist Dr. Eva-Maria Nöthig. 

The by-catch rapidly proved to be a valuable sample set, because over the years changes were seen not only in the number of amphipods caught, but also in the species composition.  “In the first four years our catches consisted exclusively of the Arctic and sub-Arctic species Themisto libellula and Themisto abyssorum.  We found examples of the smaller species Themisto compressa, which is native to the Atlantic Ocean, in our sediment traps for the first time in July 2004.  They had apparently come that far north during a warm phase of the West Spitsbergen Current”, the scientist reports. 

A one-off discovery?  By no means!  During subsequent years what had begun as an exception turned into a seasonally recurrent rule.  From this time scientists documented ever more examples of the Atlantic species Themisto compressa, especially in summer months.  Despite this, scientists at that time believed water in the West Spitsbergen Current, with its average temperature of 3 to 3.5 degrees Celsius, to be too cold to permit the animals from the southern part of the North Atlantic, which have a greater sensitivity to cold, to reproduce there. 

New findings contradicted this assumption: “The catches in the months of August and September 2011 contained ovigerous females and recently hatched juveniles of the Atlantic species for the first time.  Moreover in following months we were able to provide evidence of the migrating amphipod in all stages of development, despite the fact that the warm phase of the West Spitsbergen Current had already subsided”, says Eva-Maria Nöthig. 

The scientists began to calculate: the water masses of the West Spitsbergen Current running northwards require approximately 150 days to get from the North Atlantic to the Arctic Ocean.  Too long to transport females already bearing eggs from their native habitat at 60 degrees north latitude in time for their larvae to hatch near the west coast of Spitsbergen.  “In view of these facts, we believe that the Atlantic amphipods are reproducing in the waters of the eastern Fram Strait.  This means the animals reach sexual maturity here and also have their offspring here”, Eva-Maria Nöthig says. 

She and her colleagues see the findings as a sign of a shift in the ecosystem in the eastern Fram Strait.  “We know from our long-term measurements in the Fram Strait and at HAUSGARTEN as well as from scientific literature that there have always been phases in the past in which comparably warm Atlantic water has advanced far northwards.  However, we have been unable to find a single indication that conditions ever changed as fundamentally as to permit these Arctic waters to serve as a nursery ground for Atlantic amphipods”, says Eva-Maria Nöthig. 

Scientists do not yet know whether the migrants will now continue their northward spread and whether they will compete for a habitat with the two native species of amphipods.  However, whenever new actors emerge in a habitat, changes can occur in its range of species and food web.  Eva-Maria Nöthig: “The Atlantic amphipods have a body length of around one centimetre, shorter than the Arctic species Themisto libellula which is up to five centimetres long.  Predators of Arctic amphipods will need to catch around five times the number of Atlantic amphipods in order to ingest an equivalent amount of energy to that obtained previously.  The victims of these changes will probably be those species at the end of the food chain.” 

The biologists’ results are underpinned by the oceanographic long-term observations of the West Spitsbergen Current which AWI scientists are conducting at HAUSGARTEN and with the help of a mooring right across the Fram Strait.  According to this, the water temperature of the northern current at a depth of 250 metres has risen by some 0.8 degrees Celsius between 1997 and 2010. 

Notes for Editors:

The study was originally published as: Angelina Kraft, Eva-Maria Nöthig, Eduard Bauerfeind, David J. Wildish, Gerhard W. Pohle, Ulrich V. Bathmann, Agnieszka Beszczynska-Möller, Michael Klages (2013): First evidence of reproductive success in a southern invader species indicates possible community shifts among Arctic zooplankton. Marine Ecology Progress Series 493:291-296, doi:10.3354/meps10507. Online publication date: November 20, 2013.
 
Your scientific contact person at the Alfred Wegener Institute is Dr Eva-Maria Nöthig (phone +49 471 4831-1473, e-mail: Eva-Maria.Noethig(at)awi.de). 

First author Dr Angelina Kraft will be available as well for any questions via e-mail: Angelina.Kraft(at)gmx.de 

Your contact person at the Dept. of Communications and Media Relations is Sina Löschke (phone +49 471 4831-2008, e-mail: medien(at)awi.de). 

Follow the Alfred Wegener Institute on Twitter and Facebook for all current news and information on everyday stories from the life of the Institute. 

The Alfred Wegener Institute conducts research in the Arctic and Antarctic and in the high and mid-latitude oceans.  The Institute coordinates German polar research and provides important infrastructure such as the research icebreaker Polarstern and stations in the Arctic and Antarctic to the international scientific world. The Alfred Wegener Institute is one of the 18 research centres of the Helmholtz Association, the largest scientific organisation in Germany.

18 December 2013

Powerful Ancient Explosions Explain New Class of Supernovae

Study by UCSB scientist finds they likely originate from the creation of magnetars

A small portion of one of the fields from the Supernova Legacy Survey showing SNLS-06D4eu and its host galaxy (arrow). The supernova and its host galaxy are so far away that both are a tiny point of light that cannot be clearly differentiated in this image. The large, bright objects with spikes are stars in our own galaxy. Every other point of light is a distant galaxy.
Credit: UCSB
Astronomers affiliated with the Supernova Legacy Survey (SNLS) have discovered two of the brightest and most distant supernovae ever recorded, 10 billion light-years away and a hundred times more luminous than a normal supernova. Their findings appear in the Dec. 20 issue of the Astrophysical Journal.

D. Andrew Howell;
Credit: Katrina Marcinowski
These newly discovered supernovae are especially puzzling because the mechanism that powers most of them — the collapse of a giant star to a black hole or normal neutron star — cannot explain their extreme luminosity. Discovered in 2006 and 2007, the supernovae were so unusual that astronomers initially could not figure out what they were or even determine their distances from Earth. 
 “At first, we had no idea what these things were, even whether they were supernovae or whether they were in our galaxy or a distant one,” said lead author D. Andrew Howell, a staff scientist at Las Cumbres Observatory Global Telescope Network (LCOGT) and adjunct faculty at UC Santa Barbara. “I showed the observations at a conference, and everyone was baffled. Nobody guessed they were distant supernovae because it would have made the energies mind-bogglingly large. We thought it was impossible.”
One of the newly discovered supernovae, named SNLS-06D4eu, is the most distant and possibly the most luminous member of an emerging class of explosions called superluminous supernovae. These new discoveries belong to a special subclass of superluminous supernovae that have no hydrogen.
The new study finds that the supernovae are likely powered by the creation of a magnetar, an extraordinarily magnetized neutron star spinning hundreds of times per second. Magnetars have the mass of the sun packed into a star the size of a city and have magnetic fields a hundred trillion times that of the Earth. While a handful of these superluminous supernovae have been seen since they were first announced in 2009, and the creation of a magnetar had been postulated as a possible energy source, the work of Howell and his colleagues is the first to match detailed observations to models of what such an explosion might look like.
Co-author Daniel Kasen from UC Berkeley and Lawrence Berkeley National Lab created models of the supernova that explained the data as the explosion of a star only a few times the size of the sun and rich in carbon and oxygen. The star likely was initially much bigger but apparently shed its outer layers long before exploding, leaving only a smallish, naked core.
“What may have made this star special was an extremely rapid rotation,” Kasen said. “When it ultimately died, the collapsing core could have spun up a magnetar like a giant top. That enormous spin energy would then be unleashed in a magnetic fury.”
Discovered as part of the SNLS — a five-year program based on observations at the Canada-France-Hawaii Telescope, the Very Large Telescope (VLT) and the Gemini and Keck telescopes to study thousands of supernovae — the two supernovae could not initially be properly identified nor could their exact locations be determined. It took subsequent observations of the faint host galaxy with the VLT in Chile for astronomers to determine the distance and energy of the explosions. Years of subsequent theoretical work were required to figure out how such an astounding energy could be produced.
The supernovae are so far away that the ultraviolet (UV) light emitted in the explosion was stretched out by the expansion of the universe until it was redshifted (increased in wavelength) into the part of the spectrum our eyes and telescopes on Earth can see. This explains why the astronomers were initially baffled by the observations; they had never seen a supernova so far into the UV before. This gave them a rare glimpse into the inner workings of these supernovae. Superluminous supernovae are so hot that the peak of their light output is in the UV part of the spectrum. But because UV light is blocked by the Earth’s atmosphere, it had never been fully observed before.
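As a rough illustration of that stretching (the numbers here are assumed for the sake of example and are not taken from the paper), light emitted at wavelength λ_emit by an object at redshift z arrives at a longer wavelength:

```latex
% Cosmological redshift of emitted light (illustrative values assumed).
\lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{emit}}
% For a supernova at z ~ 1.5, rest-frame ultraviolet light at ~250 nm arrives at
% roughly (1 + 1.5) x 250 nm = 625 nm, i.e. stretched into the visible band.
```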
The supernovae exploded when the universe was only 4 billion years old. “This happened before the sun even existed,” Howell explained. “There was another star here that died and whose gas cloud formed the sun and Earth. Life evolved, the dinosaurs evolved and humans evolved and invented telescopes, which we were lucky to be pointing in the right place when the photons hit Earth after their 10-billion-year journey.”
Such superluminous supernovae are rare, occurring perhaps once for every 10,000 normal supernovae. They seem to explode preferentially in more primitive galaxies — those with smaller quantities of elements heavier than hydrogen or helium — which were more common in the early universe.
“These are the dinosaurs of supernovae,” Howell said. “They are all but extinct today, but they were more common in the early universe. Luckily we can use our telescopes to look back in time and study their fossil light. We hope to find many more of these kinds of supernovae with ongoing and future surveys.”

Contact Info: 

Julie Cohen
julie.cohen@ucsb.edu
(805) 893-7220

17 December 2013

MIT News: Algorithm uses subtle changes to make a face more memorable


New algorithm uses subtle changes to make a face more memorable without changing a person’s overall appearance

CAMBRIDGE, Mass. -- Do you have a forgettable face? Many of us go to great lengths to make our faces more memorable, using makeup and hairstyles to give ourselves a more distinctive look.
Now your face could be instantly transformed into a more memorable one without the need for an expensive makeover, thanks to an algorithm developed by researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
The algorithm, which makes subtle changes to various points on the face to make it more memorable without changing a person’s overall appearance, was unveiled earlier this month at the International Conference on Computer Vision in Sydney.
“We want to modify the extent to which people will actually remember a face,” says lead author Aditya Khosla, a graduate student in the Computer Vision group within CSAIL. “This is a very subtle quality, because we don’t want to take your face and replace it with the most memorable one in our database, we want your face to still look like you.”
More memorable — or less
The system could ultimately be used in a smartphone app to allow people to modify a digital image of their face before uploading it to their social networking pages. It could also be used for job applications, to create a digital version of an applicant’s face that will more readily stick in the minds of potential employers, says Khosla, who developed the algorithm with CSAIL principal research scientist Aude Oliva, the senior author of the paper; Antonio Torralba, an associate professor of electrical engineering and computer science; and graduate student Wilma Bainbridge.
Conversely, it could also be used to make faces appear less memorable, so that actors in the background of a television program or film do not distract viewers’ attention from the main actors, for example.
To develop the memorability algorithm, the team first fed the software a database of more than 2,000 images. Each of these images had been awarded a “memorability score,” based on the ability of human volunteers to remember the pictures. In this way the software was able to analyze the information to detect subtle trends in the features of these faces that made them more or less memorable to people.
The researchers then programmed the algorithm with a set of objectives — to make the face as memorable as possible, but without changing the identity of the person or altering their facial attributes, such as their age, gender, or overall attractiveness. Changing the width of a nose may make a face look much more distinctive, for example, but it could also completely alter how attractive the person is, and so would fail to meet the algorithm’s objectives.
When the system has a new face to modify, it first takes the image and generates thousands of copies, known as samples. Each of these samples contains tiny modifications to different parts of the face. The algorithm then analyzes how well each of these samples meets its objectives.
Once the algorithm finds a sample that succeeds in making the face look more memorable without significantly altering the person’s appearance, it makes yet more copies of this new image, with each containing further alterations. It then keeps repeating this process until it finds a version that best meets its objectives.
“It’s really like applying an elastic mesh onto the photograph that slightly modifies the face,” Oliva says. “So the face still looks like you, but maybe with a bit of lifting.”
The team then selected photographs of 500 people and modified them to produce both a memorable and forgettable version of each. When they tested these images on a group of volunteers, they found that the algorithm succeeded in making the faces more or less memorable, as required, in around 75 percent of cases.
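In outline, that sample-generate-and-select loop resembles a simple hill-climbing search over small face warps. The sketch below is a schematic reconstruction only, not the CSAIL implementation; the scoring, identity-checking and warping functions are hypothetical stand-ins for the models the team trained on its rated image database.

```python
def hill_climb_face(face, score_memorability, score_identity, warp,
                    n_samples=1000, n_rounds=20, identity_floor=0.9):
    """Schematic sample-and-select loop (all passed-in functions are hypothetical stand-ins).

    face               -- some representation of the input photo
    score_memorability -- higher means the face is predicted to be more memorable
    score_identity     -- similarity of a candidate to the original face and its attributes
    warp               -- applies a small random deformation and returns a new face
    """
    best = face
    for _ in range(n_rounds):
        # Generate many slightly modified copies ("samples") of the current best face.
        candidates = [warp(best, strength=0.02) for _ in range(n_samples)]
        # Discard samples that no longer look like the same person.
        admissible = [c for c in candidates
                      if score_identity(c, face) >= identity_floor]
        if not admissible:
            continue
        # Keep the most memorable admissible sample if it improves on the current best.
        champion = max(admissible, key=score_memorability)
        if score_memorability(champion) > score_memorability(best):
            best = champion
    return best
```

Running the same loop while minimizing the memorability score would, in this schematic picture, produce the “less memorable” variant described above.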
Familiarity breeds likability
Making a face appear familiar can also make it seem more likable, Oliva says. She and Bainbridge have published a complementary paper in the journal Cognitive Science and Social Psychology on the attributes that make a face memorable. The first time we see a face, we tend to “tag” it with attributes based on appearance, such as intelligence, kindness, or coldness. “If we tag a person with familiarity, because we think this is a face we have seen before, we have a tendency to like it more, and for instance to think the person is more trustworthy,” she says.
The team is now investigating the possibility of adding other attributes to their model, so that it could modify faces to be both more memorable and to appear more intelligent or trustworthy, for example. “So you could imagine having a system that would be able to change the features of your face to make you whatever you would wish for, but always in a very subtle way,” Oliva says.
The research was funded by grants from Xerox, Google, Facebook, and the Office of Naval Research.


Written by Helen Knight, MIT News correspondent

Additional background
What makes an image memorable: http://web.mit.edu/newsoffice/2011/memorable-images-0524.html
MIT researchers find memory capacity much bigger than previously thought: http://web.mit.edu/newsoffice/2008/vision-memory-0908.html



16 December 2013

MIT News: Study finds piece-by-piece approach to emissions policies can be effective

New analysis shows that policies addressing energy consumption and technology choices individually can play an important part in reducing emissions


CAMBRIDGE, Mass. — Discussions on curbing climate change tend to focus on comprehensive, emissions-focused measures: a global cap-and-trade scheme aimed at controlling carbon, or a tax on all carbon emissions. But a new study by researchers at MIT finds that a “segmental” approach — involving separate targeting of energy choices and energy consumption through regulations or incentives — can play an important role in achieving emission reductions.

The new study, by assistant professor of engineering systems Jessika Trancik, is being published this week in the journal Environmental Science and Technology. Trancik is joined on the paper by three MIT graduate students: Michael Chang and Christina Karapataki of the Engineering Systems Division and Leah Stokes of the Department of Urban Studies and Planning.

“A policy that’s focused on controlling carbon emissions is a different kind of policy than one that’s focused on the underlying demand-side and supply-side technology drivers,” Trancik says. And while those calling for sweeping, emission-focused policies have often faced uphill battles in regions, states, and nations, a wide variety of segmental policies have been adopted by such jurisdictions, making it important to understand the effectiveness of such approaches, she says.

“There are some things that these segmental policies do very well,” Trancik says — in particular dealing with the inertia associated with existing infrastructure. “It will be expensive to retire new power plants early, and so with each power plant built we are committing to emissions not just today, but in future years,” she says.

“Compliance with a carbon-focused policy can come either from changes in energy consumption levels or technological change, and a set of segmental policies can ensure that both types of change happen concurrently,” Trancik says. Comprehensive, carbon-limiting policies would not allow that kind of targeted approach, she adds.

The issue is urgent, Trancik says: The paper shows that when accounting for infrastructural inertia, the carbon intensity of new plants built over the coming decade — that is, the amount of carbon dioxide emitted per unit of electricity produced — will need to be reduced by 50 percent, as compared to today’s levels, in order to meet emissions-reduction commitments that have been made by most nations.

“Many nations are generally moving in the direction of segmental policies,” Trancik says, so it is important to understand how effective these policies can be.

The study found pluses and minuses to both segmental and carbon-focused approaches to reducing emissions, Trancik says, adding that what may ultimately be needed is a carefully planned combination of both. The ideal may be a hierarchical approach, she says, “that would involve capping carbon dioxide emissions, but then using these segmental policies to address particular areas of concern, where the market alone may not have sufficient foresight.”

These issues are at the heart of climate negotiations. Trancik notes that “understanding the various drivers of emissions, and how influencing each can affect overall emissions, is important to moving these discussions forward. A global agreement on carbon emissions would be most effective at reducing the risks of climate change, but in the meantime a segmental approach can be helpful.”

An added benefit, Trancik notes, is that discussing segmental approaches is likely to lead to a greater understanding of where emissions reductions might come from, which may eventually make it easier to reach an agreement on limiting carbon emissions directly.

Decisions made over the next decade will have long-lasting effects on overall emissions, Trancik says, so it’s important to perform such analysis now. “Can we control emissions sufficiently through these segmental policies?” she asks. “How might approaches focused on new technologies and on energy efficiency work together?”

The research was funded by the Solomon Buchsbaum Research Fund.


Written by David Chandler, MIT News Office



14 December 2013

Keeping the Lights On

UCSB mechanical engineer Igor Mezic finds a way to predict cascading power outages


Photo Credit: Noe Lecocq / Wikimedia / Creative Commons
A method of assessing the stability of large-scale power grids in real time could bring the world closer to its goal of producing and utilizing a smart grid. The algorithmic approach, developed by UC Santa Barbara professor Igor Mezic along with Yoshihiko Susuki from Kyoto University, can predict future massive instabilities in the power grid and make power outages a thing of the past.
“If we can get these instabilities under control, then people won’t have to worry about losing power,” said Mezic, who teaches in UCSB’s Department of Mechanical Engineering. “And we can put in more fluctuating sources, like solar and wind.”
While development of more energy efficient machines and devices and the emergence of alternative forms of energy give us reason to be optimistic for a greener future, the promise of sustainable, reliable energy is only as good as the infrastructure that delivers it. Conventional power grids, the system that still distributes most of our electricity today, were built for the demands of almost a century ago. As the demand for energy steadily rises, not only will the supply become inadequate under today’s technology, its distribution will become inefficient and wasteful.
Igor Mezic,
Photo Credit: Sonia Fernandez
“Each individual component does not know what the collective state of affairs is,” said Mezic. Current methods rely on a steady, abundant supply, producing enough energy to flow through the grid at all times, regardless of demand, he explained. However, should part of a grid already operating at capacity fail — say in times of disaster, attack or malfunction — widespread blackouts all over the system can occur.
“Everybody shuts down,” Mezic said. The big surges of power left unregulated by the malfunctioning component can either overload and burn out other parts of the grid, or cause them to shut down to avoid damage, he explained. The result is a massive power outage and subsequent economic and physical damage. The Northeast Blackout of 2003 was one such event, affecting several U.S. states and part of Canada, crippling transportation, communication and industry.
One alternative to solve the situation could be to build more power plants to produce the steady supply to feed the grid and have the capacity to handle unpredictable failures, fluctuations and shutdowns. It’s a solution that’s costly both for the environment and for the checkbook.
However, the method developed by Mezic and partners promises to prevent the cascade of blackouts and their subsequent effects by monitoring the entire grid for early signs of failure, in real time. Called Koopman Mode Analysis (KMA), it is a dynamical approach based on a concept related to chaos theory, and is capable of monitoring seemingly innocuous fluctuations in measured physical power flow. Using data from existing monitoring methods, like Supervisory Control and Data Acquisition (SCADA) systems and Phasor Measurement Units (PMUs), KMA can track power fluctuations against the greater landscape of the grid and predict emerging events. The result is the ability to prevent and control large-scale blackouts and the damage they can cause.
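Koopman mode analysis is closely related to dynamic mode decomposition (DMD), and the sketch below illustrates that idea on synthetic multichannel data. This is not Mezic and Susuki's implementation; it only shows how oscillatory modes and their growth rates can be extracted from a snapshot matrix of measurements such as SCADA or PMU time series.

```python
import numpy as np

# Synthetic stand-in for multichannel grid measurements:
# rows = measurement channels, columns = time snapshots.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
X = np.vstack([np.sin(2 * np.pi * 0.8 * t + phase) + 0.01 * rng.standard_normal(t.size)
               for phase in np.linspace(0, np.pi, 8)])

def koopman_modes_dmd(X, rank=6):
    """Estimate Koopman (dynamic) modes and eigenvalues from a snapshot matrix X
    via exact dynamic mode decomposition."""
    X1, X2 = X[:, :-1], X[:, 1:]                     # consecutive snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s      # reduced linear propagator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

dt = t[1] - t[0]
eigvals, modes = koopman_modes_dmd(X)
freq_hz = np.angle(eigvals) / (2 * np.pi * dt)       # oscillation frequency of each mode
growth = np.log(np.abs(eigvals)) / dt                # > 0 would flag a growing (unstable) mode
print(np.round(np.column_stack([freq_hz, growth]), 3))
```

In a grid-monitoring setting, a mode whose growth rate turns positive is the kind of early warning sign the article describes.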
Additionally, this approach can lead to wider development of, demand for and use of renewable sources of energy, said Mezic. Because systems like wind, water and sun are weather-dependent, the energy they produce tends to fluctuate naturally, and the ability to respond to fluctuations can dispel what reservations utilities may have about relying on them to a greater degree.
Mezic’s research is published in the Institute of Electrical and Electronics Engineers (IEEE) journal Transactions on Power Systems. Other collaborators in Koopman Mode Analysis research include researchers from Princeton University, Tsinghua University in China and the Royal Institute of Technology in Sweden.

CONTACT:
Sonia Fernandez
(805) 893-4765

12 December 2013

MIT News Release: Sulfurous chemical serves as clarion call for coral pathogens

Sulfurous chemical known as ‘smell of the sea’ serves as clarion call for coral pathogens



CAMBRIDGE, Mass-- Coral reefs, the most biodiverse ecosystems in the world’s oceans, provide safe harbor for fish and organisms of many sizes that make homes among the branches, nooks, and crannies of the treelike coral. But reefs — even the well-protected Great Barrier Reef off the coast of northeastern Australia — are declining because of disease and bleaching, conditions exacerbated by rising ocean temperatures.


Coral is really an ecosystem within the reef ecosystem: a colony of invertebrate polyps that excrete a calcium-carbonate skeleton. Living within the polyps are photosynthetic algae that produce nutrients the polyps use as food. When a coral is stressed by rising water temperatures, it expels the algae, causing bleaching. This further increases the coral’s stress, making it more susceptible to attack by pathogenic bacteria.

While the steep decline in the health of coral reefs has prompted additional scientific study, little is known about the ecological interactions between the pathogens and the weakened coral at the microscale. 

However, researchers at MIT have identified one mechanism by which pathogenic bacteria identify their prey: They’ve found that stressed Pocillopora damicornis coral produce up to five times more of a sulfurous compound called dimethylsulfoniopropionate (DMSP). The abundance of DMSP appears to serve as a clarion call, inciting the pathogen cells, which sense the amplified chemical and charge in for attack, changing their swimming direction and speed as they home in on the weakened coral. 

“This is the first time we’ve been able to sneak a peek at a coral pathogen’s behavior in real time, as it responds to the chemical cues leaking into the seawater from its host,” says postdoc Melissa Garren of MIT’s Department of Civil and Environmental Engineering (CEE), first author on a paper about this work that appears Dec. 12 in the International Society for Microbial Ecology Journal. Professor Roman Stocker of CEE is lead researcher on the project. 
“By doing so, we discovered that these tiny cells have an amazing array of tricks up their sleeves for finding a host, and that they can preferentially navigate toward hosts that are stressed,” Garren says.

DMSP — whose sulfur smell is familiar to anyone who’s been on a beach at low tide — is just one of the many molecules in the mucus covering the coral’s surface. The coral produces the mucus continuously as a means of cleansing and defense, and DMSP is produced both by the coral polyp and its symbiotic algae. However, no one knows exactly why the mucus contains elevated levels of DMSP during periods of stress or why the pathogens home in on that specific molecule.

Stocker had shown in earlier research unrelated to corals that ocean microbes are attracted to DMSP and will swim along chemical gradients — a behavior called chemotaxis — to reach it. However, this is the first study to show that DMSP attracts coral pathogens and that the pathogens are able to alter their swimming direction as well as their speed — a behavior called chemokinesis — to reach their targets.

In the field, Stocker and Garren collected small amounts of coral from the Great Barrier Reef and performed experiments at Heron Island Research Station, subjecting coral samples to a water temperature increase of 1.5 degrees Celsius daily over a week. Coral fragments from all donor colonies displayed the same response to the heat, exuding five times more DMSP in their mucus than the control samples, which remained at ambient temperatures.

Back in the lab, the researchers used microfluidics and videomicroscopy to test the swimming directions and speeds of the coral pathogen Vibrio coralliilyticus. When in the presence of the mucus, the bacterium increased its swimming speed by up to 50 percent. Once the cells reached the richest layer of mucus, their movement went back to normal speed, about 50 body lengths per second.
Surprisingly, unlike many other marine bacteria that use DMSP as an important source of food, this Vibrio didn’t metabolize DMSP at all, indicating that the chemical compound may serve purely as a signal to attack.

“I’m intrigued by how certain key processes keep popping up in oceanography — as if there was some universality to them,” Stocker says. “Because DMSP is involved in chemical signaling among so many other marine animals — ranging from birds to turtles to fish to seagulls, and now coral pathogens — it seems that the chemical must be the currency of signaling and sensing in the sea, though no one really knows why.”

To ensure that what they were seeing was actually chemokinesis, Kwangmin Son, a graduate student in Stocker’s lab, created a mathematical model that simulated what the scientists had observed: the swimming of bacteria toward the mucus layer. In the model, the accumulation of bacteria in the mucus layer was 50 percent higher and 50 percent faster than if the microbes had not changed their swimming speed.
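A toy version of such a model (not the MIT group's code; the geometry and all parameters below are invented for illustration) shows the qualitative effect: combining run-and-tumble chemotaxis with a speed boost while a cell senses rising concentration speeds up accumulation near the mucus layer.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-dimensional toy domain: DMSP-rich mucus layer centered at x = 0 (positions in micrometres).
def concentration(x):
    return np.exp(-(x / 50.0) ** 2)

def simulate(chemokinesis, n_cells=2000, steps=4000, dt=0.05):
    """Run-and-tumble cells with chemotaxis; optionally boost speed while the
    sensed concentration is rising (chemokinesis). All parameters are toy values."""
    x = rng.uniform(-400.0, 400.0, n_cells)
    direction = rng.choice([-1.0, 1.0], n_cells)
    base_speed = 50.0                               # ~50 body lengths per second
    prev_c = concentration(x)
    for _ in range(steps):
        c = concentration(x)
        climbing = (c - prev_c) > 0
        # Chemotaxis: tumble less often while moving up the gradient.
        tumble = rng.random(n_cells) < np.where(climbing, 0.02, 0.1)
        direction[tumble] *= -1
        # Chemokinesis: up to 50 percent faster while climbing, normal speed otherwise.
        speed = base_speed * (1 + 0.5 * climbing) if chemokinesis else base_speed
        prev_c = c
        x += direction * speed * dt
    return np.mean(np.abs(x) < 50.0)                # fraction of cells inside the mucus layer

for mode in (False, True):
    print(f"chemokinesis={mode}: {100 * simulate(mode):.1f}% of cells in the layer")
```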

“This research by Dr. Garren and her colleagues has tremendous implications for the way we study coral disease and host-pathogen interactions,” says Courtney Couch, a postdoc at the Hawaii Institute of Marine Biology who specializes in coral disease ecology. “Thanks to their innovative research, we can for the first time visualize how microbes migrate toward corals, which is a fundamental part of the infection process.” 

Co-authors on the paper, in addition to Garren, Stocker, and Son, are postdocs Roberto Rusconi and Filippo Menolascina and former MIT postdoc Orr Shapiro of Stocker’s lab; Justin Seymour and Jessica Tout of the University of Technology, Sydney; and David Bourne and Jean-Baptiste Raina of the Australian Institute of Marine Sciences. The research was funded by the National Science Foundation’s Human Frontiers in Science Program. 

“These experiments have helped us get one step closer to understanding the mechanisms for coral disease, because we have been able to directly visualize the microscopic pathogens of the corals as they swim with great vigor towards the coral surface,” Stocker says. “It goes without saying that this access to the microscale provides a whole new appreciation for the mechanisms of disease.”


Written by Denise  Brehm, MIT News Office




HALO flies to the Caribbean – cloud research for better climate models

An additional container for scientific instruments is installed underneath the fuselage and wings. Credit: DLR (CC-BY 3.0).

Clouds can both warm and cool Earth's atmosphere. In current climate models, detailed conditions for cloud cover as a climatic factor are still not clearly understood. There is a shortage of precise measurements on how the water, humidity, ice particles and aerosols that form water droplets are distributed in towering cumulus clouds. HALO, the High Altitude and Long Range Research Aircraft operated by the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR), can overcome these limitations and acquire measurements of clouds over large distances and at very high altitudes above the Atlantic. Precipitation is also measured. A total of three flights from the DLR site in Oberpfaffenhofen to Barbados are scheduled between 10 and 25 December. This will be the first HALO measurement flight focusing on cloud research. DLR staff, colleagues at the Max Planck Institute for Meteorology (MPI-M), the University of Hamburg, the Universities of Cologne, Leipzig and Heidelberg and the Jülich Research Centre (Forschungszentrum Jülich) will all be taking off from Oberpfaffenhofen en route to the Caribbean island. Scientific leadership of the measurement flights is in the hands of the MPI-M, which is sending two of its renowned cloud researchers on the mission – Bjorn Stevens and Lutz Hirsch. HALO is a collective initiative involving German environmental and climate research institutions.
Additional containers for scientific instruments can be attached under the fuselage and under HALO’s wings; Credit: DLR (CC-BY 3.0).
"With the research flights about to be conducted over the Atlantic, we are adding another chapter to the use of HALO," says head of DLR research flight operations Oliver Brieger. "During the 10-hour direct flight, HALO will once again demonstrate its capabilities as a research tool through its unrivalled range and flying altitude – this time in the area of cloud research."
Lasers measuring clouds
The current flight is part of the NARVAL (Next-generation Aircraft Remote-Sensing for Validation Studies) project, which is aimed at giving atmospheric researchers detailed information on the composition of tropical clouds. The flights across the Atlantic from Oberpfaffenhofen to Barbados will build on the static observations at the cloud observatory there. Measurements with the LIDAR (Light Detection and Ranging) measurement device developed at and operated by the DLR Institute of Atmospheric Physics will also be making important contributions here. "Besides the distribution of moisture in the atmosphere, we will be using laser measurements to obtain information on particles suspended in the air," says Markus Rapp, Head of the DLR Institute of Atmospheric Physics. "These particles, known as aerosols, have a direct effect on the formation of clouds." The data will contribute to a better understanding of cloud and precipitation processes. LIDAR and other remote sensing measurement devices, both within the aircraft and installed in the 'belly pod' underneath the aircraft fuselage, will determine vertical profiles for the temperature and humidity, as well as the distribution of cloud droplets and aerosols. What are known as 'dropsondes' will also be released during the flight. These radio probes normally ascend from the surface under a weather balloon and measure the wind, temperature and humidity on their way up through the atmosphere. In this case, they will be released and will make their way back to Earth with a parachute.
Flight in parallel with a satellite
On the flights to and from the island of Barbados, and with the MPI-M cloud observatory there, the scientists ideally want to run comparison measurements with the CloudSat satellite as well. The satellite monitors the cloud cover above the Atlantic in strips transverse to the line of flight. The short flights of HALO, in parallel to these satellite strips, will enable the satellite's measurements to be checked. This is because the aircraft will be flying at a significantly lower altitude and will have a much better view of the clouds. The first three flights between Oberpfaffenhofen and Barbados form the 'NARVAL South' part of the mission. For the second flight, a local trip is planned – the aircraft will head east from Barbados through the trade wind cloud cover. The aim is to capture data on clouds that are moving towards the Barbados measurement station and compare the measurement flight data with the data from the ground measurement station.
Second measurement campaign above the North Atlantic
The second part of the mission, 'NARVAL North', will commence in January, under the direction of the University of Hamburg. HALO will be stationed in Iceland for this, to investigate the rear edge of cloud fronts above the North Atlantic. The flights in the measurement area will be operated along and across the cloud fronts, measuring the structure of the front systems more precisely via the unique combination of active and passive remote sensing methods. There is currently a great deal of debate in the scientific community regarding precipitation levels because satellite observations and model computations give different results. "There is a shortage of measurement data, because no ships travel in these typically stormy zones," explains project leader Felix Ament from the Center for Earth System Research and Sustainability (Centrum für Erdsystemforschung und Nachhaltigkeit; CEN) at the University of Hamburg. "If the HALO mission is successful, it could provide important information that will certainly fill an obvious gap in the scientific picture."
About HALO
The HALO research aircraft is a joint initiative involving German environmental and climate research institutions. HALO is supported by grants from the Federal Ministry for Education and Research (BMBF), the German Research Foundation (DFG), the Helmholtz Association, the Max Planck Society (MPG), the Leibniz Association, the Free State of Bavaria, the Karlsruhe Institute of Technology (KIT), the German Research Centre for Geosciences (GFZ) in Potsdam, the Jülich Research Centre and the German Aerospace Center (DLR).

Contacts:

Falk Dambowsky
German Aerospace Center (DLR)
Corporate Communications
Editor Aeronautics
Tel.: +49 2203 601-3959
E-mail: Falk.Dambowsky@dlr.de

Markus Rapp
German Aerospace Center (DLR) 
Director of the DLR Institute of Atmospheric Physics
Tel.: +49 8153 28-2521
Fax: +49 8153 28-1841
E-mail: Markus.Rapp@dlr.de

Oliver Brieger
German Aerospace Center (DLR) 
DLR Flight Experiments facility
Tel.: +49 8153 28-2966
E-mail: Oliver.Brieger@dlr.de

Bjoern Stevens
Max Planck Institute for Meteorology
Tel.: +49 40 41173-422
E-mail: bjorn.stevens@mpimet.mpg.de