28 October 2013

Researcher From UCSB's Earth Research Institute Documents the Enduring Contaminant Legacy of the California Gold Rush

Full description below *, Credit: UCSB

(Santa Barbara, Calif.) –– An unintended legacy of California's gold rush, which began in 1848, endures today in the form of mercury-laden sediment. New research by Michael Singer, associate researcher at UC Santa Barbara's Earth Research Institute, shows that sediment-adsorbed mercury is being transported by major floods from the Sierra Nevada mountains to Central Valley lowlands. The findings appear today in the Proceedings of the National Academy of Sciences (PNAS).

Contamination of food webs by mercury-laden sediment, coupled with regional shifts in climate, poses a substantial risk to lowland ecosystems and to the human population, many of whom eat fish from this river system.

Full description below **, Credit: UCSB


"This new study addresses a gap in the general theory of the evolution of toxic sediment emplaced by industrial mining, which enables anticipation, prediction and management of contamination to food webs," Singer said.

His research shows that mercury stored in immense Sierran man-made sediment deposits is carried by the Yuba River and other nearby streams to the Central Valley lowlands during 10-year flood events, most recently in 1986 and 1997. His team used several independent datasets and modeling of the episodic process to demonstrate how mercury-laden sediment stored in deep river valleys more than 150 years ago travels hundreds of miles into ecologically sensitive regions.


The discovery of this process was serendipitous. Singer and a colleague were working in California's Central Valley studying how quickly floodplains filled up with sediment when they came across Burma-Shave signs that said, "SAND."
Michael Singer, associate researcher at UCSB's Earth Research Institute. Credit: UCSB
"We thought that was quite strange because the floodplains around us were so much finer –– composed of silt and clay materials," recalled Singer. "So we followed the signs and ended up in a huge sand mine. They were mining sand by the truckload for the construction industry and said they would be doing so for at least the next several decades."

It turns out that a massive flood in 1986 in the Yuba River Basin brought enough sand with it to bury a major rice field, which a savvy farmer then leased to the sand-mining operator. According to Singer, the upstream Yuba was the biggest gold-mining drainage of all the Sierra drainages used in the 19th century, so it made sense to think about possible mercury contamination because gold rush miners used mercury to separate gold.

"They didn't just pan for gold," Singer said. "That's a romantic notion of gold mining. It was actually an industrial process whereby they sprayed giant high-pressure hoses, invented in 1852, at upland hillsides to wash the sediment downstream. Sides of mountains were washed away and sent downstream, and the sediment started filling in these confined river valleys, actually spreading all the way out to San Francisco Bay. This caused problems for steamboat operations and increased flooding on lowland farms. The U.S. government ultimately got involved and stopped the mining in 1884, which basically ended the gold rush overnight."

Singer says mercury is currently a big problem in San Francisco Bay and the Delta. "People know there was gold mining in the Sierra Nevada and they know that there was mercury mining in the Coast Ranges, but they're not really sure of the modern-day impact, especially when the contaminant sources are not directly by the bay," he said. "People want to know what is causing contamination of the food webs of the Central Valley."

The PNAS paper begins to answer that by documenting flood-driven fan erosion, sediment redistribution and a process called progradation, the growth of a sedimentary deposit farther out into the valley over time, which, in this case, spread the mercury-laden sediment into parts of the basin where there is higher risk of it being taken up by food webs.

The research team compared gold rush data with modern topographic datasets, which showed that the Yuba River has progressively cut through the sediment, leaving behind massive contaminated terraces along the riverbank. Flood data and modeling indicate that these terraces are mobilized only when a flood is big enough to saturate them, causing the terraces to fail and send the mercury-laden sediment downstream.

"There is a lot of sediment left in the system that is highly contaminated and readily available to be remobilized and sent downstream just because it's sitting in unconsolidated sediments along the margins of a river that can become very big during a storm," Singer said. "That susceptibility, coupled with projections for climate change in the region indicating more massive storms in the future, means that there is a dangerous synergy."


* Yuba Fan: Red circles indicate mercury sediment sampling locations and the yellow line is the longitudinal transect along which mining sediment travels from the Sierra to San Francisco Bay-Delta.
** b. NASA 1997 flood image. Bright colors indicate high reflectance by suspended sediment. c. Modeled suspended sediment concentration for the 1997 flood peak captures the turbid signal in the actual flood image. Red circles indicate the Feather-Yuba confluence on both images.


MIT News Release: Eliminating unexplained traffic jams

If integrated into adaptive cruise-control systems, a new algorithm could mitigate the type of freeway backup that seems to occur for no reason


CAMBRIDGE, MA -- Everybody’s experienced it: a miserable backup on the freeway, which you think must be caused by an accident or construction, but which at some point thins out for no apparent reason.

Such “traffic flow instabilities” have been a subject of scientific study since the 1930s, but although there are a half-dozen different ways to mathematically model them, little has been done to prevent them.

At this month’s IEEE Conference on Intelligent Transportation Systems, Berthold Horn, a professor in MIT’s Department of Electrical Engineering and Computer Science, presented a new algorithm for alleviating traffic flow instabilities, which he believes could be implemented by a variation of the adaptive cruise-control systems that are an option on many of today’s high-end cars.

A car with adaptive cruise control uses sensors, such as radar or laser rangefinders, to monitor the speed and distance of the car in front of it. That way, the driver doesn’t have to turn the cruise control off when traffic gets backed up: The car will automatically slow when it needs to and return to its programmed speed when possible.

Counterintuitively, a car equipped with Horn’s system would also use sensor information about the distance and velocity of the car behind it. A car that stays roughly halfway between those in front of it and behind it won’t have to slow down as precipitously if the car in front of it brakes; but it will also be less likely to pass on any unavoidable disruptions to the car behind it. Since the system looks in both directions at once, Horn describes it as “bilateral control.”

Traffic flow instabilities arise, Horn explains, because variations in velocity are magnified as they pass through a lane of traffic. “Suppose that you introduce a perturbation by just braking really hard for a moment, then that will propagate upstream and increase in amplitude as it goes away from you,” Horn says. “It’s kind of a chaotic system. It has positive feedback, and some little perturbation can get it going.”
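To make the bilateral rule concrete, here is a toy simulation in Python. It is a minimal sketch of the idea only, not Horn's published controller: each car on a ring road accelerates to balance the gap to the car ahead against the gap to the car behind, and to split the difference between its neighbors' speeds. All parameters (gains, spacing, speeds, the size of the braking perturbation) are assumed purely for illustration.

```python
# Toy sketch of bilateral control on a ring road (not Horn's published model).
# Each car balances its front and rear gaps and its neighbors' relative speeds,
# so a one-car braking perturbation is damped instead of amplifying upstream.
import numpy as np

N, STEPS, DT = 50, 4000, 0.05      # number of cars, time steps, step size (assumed)
K_P, K_V = 0.5, 1.0                # spacing and velocity-matching gains (assumed)
ROAD = 1000.0                      # ring-road length in metres

x = np.linspace(0.0, ROAD, N, endpoint=False)   # evenly spaced initial positions
v = np.full(N, 20.0)                            # everyone starts at 20 m/s
v[0] -= 5.0                                     # perturbation: car 0 brakes hard once

for _ in range(STEPS):
    gap_front = (np.roll(x, -1) - x) % ROAD     # distance to the car ahead
    gap_rear = (x - np.roll(x, 1)) % ROAD       # distance to the car behind
    v_front, v_rear = np.roll(v, -1), np.roll(v, 1)
    # Bilateral rule: stay midway between neighbors, match their average speed.
    a = K_P * (gap_front - gap_rear) + K_V * ((v_front - v) - (v - v_rear))
    v = np.clip(v + a * DT, 0.0, None)
    x = (x + v * DT) % ROAD

print(f"speed spread after {STEPS * DT:.0f} s: {v.max() - v.min():.3f} m/s")
```

With the gains set to zero, nothing pulls the speeds back together; with positive gains the speed spread shrinks over time, which is the stabilizing behavior that the analysis described below formalizes.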

Doing the math

Horn hit upon the notion of bilateral control after suffering through his own share of inexplicable backups on Massachusetts’ Interstate 93. Since he’s a computer scientist, he built a computer simulation to test it out.

The simulation seemed to bear out his intuition, but to publish, he needed mathematical proof. After a few false starts, he found that bilateral control could be modeled using something called the damped-wave equation, which describes how oscillations, such as waves propagating through a heavy fluid, die out over distance. Once he had a mathematical description of his dynamic system, he used techniques standard in control theory — in particular, the Lyapunov function — to demonstrate that his algorithm could stabilize it.

Horn’s proof accounts for several variables that govern real-life traffic flow, among them drivers’ reaction times, their desired speed, and their eagerness to reach that speed — how rapidly they accelerate when they see gaps opening in front of them. Horn found that the literature on traffic flow instabilities had proposed a range of values for all those variables, and within those ranges, his algorithm works very efficiently. But in fact, for any plausible set of values, the algorithm still works: All that varies is how rapidly it can smooth out disruptions.

Horn’s algorithm works, however, only if a large percentage of cars are using it. And laser rangefinders and radar systems are relatively costly pieces of hardware, which is one of the reasons that adaptive cruise control has remained a high-end option.

Digital cameras, on the other hand, have become extremely cheap, and many cars already use them to monitor drivers’ blind spots. “There are several techniques,” Horn says. “One is using binocular stereo, where you have two cameras, and that allows you to get distance as well as relative velocity. The disadvantage of that is, well, two cameras, plus alignment. If they ever get out of alignment, you have to recalibrate them.”

Time to impact

Horn’s chief area of research is computer vision, and his group previously published work on extracting information about distance and velocity from a single camera. “We’ve developed monocular methods that allow you to very accurately get the ratio of distance to velocity,” Horn says — a ratio known in transportation studies as “time to contact,” since it captures information about the imminence of collision. “Strangely, while it’s, from a monocular camera, difficult to get distance accurately without additional information, and it’s difficult to get velocity accurately without additional information, the ratio can be had.” In ongoing work, Horn is investigating whether his algorithm can be adapted so that it uses only information about time to contact, rather than absolute information about speed and distance.


Written by Larry Hardesty, MIT News Office




24 October 2013

UCSB's NCEAS Takes First Steps in Documenting the Intangible Effects of Nature on Human Well-Being

Full description below *
(Santa Barbara, Calif.) –– Nature may turn out to be the best medicine when it comes to human well-being. Providing such necessities of life as food, water and shelter, nature not only underpins and controls the conditions in which people live, it also provides important intangible benefits. A new synthesis of multidisciplinary peer-reviewed research identifies the ways in which nature's ecosystems deliver crucial benefits –– and thus contribute culturally and psychologically to human well-being in nonmaterial ways.

The research was conducted by a working group of UC Santa Barbara's National Center for Ecological Analysis and Synthesis (NCEAS). The findings are published in the Annual Review of Environment and Resources.
The study brings together diverse research and highlights gaps in our understanding of these vital connections.
"Many of these bits of information are out there, but they're scattered across disciplines with dramatically different ways of knowing and doing research," said lead author Roly Russell, of the Sandhill Institute for Sustainability and Complexity in British Columbia. "We hoped that we could make a first attempt at bringing together these disparate pieces of research into a cohesive whole that could help demonstrate just how pervasive the intangible connections of this relationship between nature and human well-being are."

Full description below **
According to co-author Kai M.A. Chan of the University of British Columbia, such cultural ecosystem services represent psychological, philosophical, social and spiritual links between people and ecosystems, which are at the very core of human preferences and values.

For example, clinical trials have shown that patients recovering from heart attacks progress more rapidly when they can see trees outside their hospital room windows or –– even though the effect is lessened –– plasma screens with the same views of nature. These are tangible physiological effects of experiencing nature mediated through intangible connections to nature –– in this case, seeing it.

"While assessing the intangible benefits we experience from nature is difficult using traditional methods, it is possible and important," said Frank Davis, director of NCEAS. "These findings are a significant step toward developing a fuller understanding of human connectedness to nature."

Using a conceptual framework, the nine-member research team organized the literature by delineating the channels of experience through which people connect with nature –– knowing, perceiving, interacting with and living within –– and then exploring how those channels link to various components of human well-being.

Frank Davis, director of UCSB's National Center for Ecological Analysis and Synthesis. Credit: UCSB
To initiate the process of documenting these intangibles, the researchers used 10 constituents to structure and organize their synthesis: physical health, mental health, spirituality, certainty and sense of control and security, learning/capability, inspiration/fulfillment of imagination, sense of place, identity/autonomy, connectedness/belonging and subjective (overall) well-being.

Empirical data varied widely. The literature in the area of physical health indicates that experiences of nature in various forms result in positive health responses. Similarly, mental-health literature supports the role of natural settings in reducing stress and increasing patience, self-discipline, the capacity for attention, and recovery from mental fatigue, crisis or psychophysiological imbalance.

The review of literature pertaining to learning and capability also showed some empirical results that suggest interactions with nature provide a significant benefit to human cognition. Encounters with nature also contribute positively to the creation of the identity of individuals as well as those of communities and have been shown to contribute a sense of connection to something greater than oneself and to the natural world.

The literature for other constituents is less definitive. The majority of citations for certainty and sense of control and security addressed "the ways in which natural systems degrade well-being through a lack of control and security, and fear." Sense-of-place literature, meanwhile, focuses on the more intimate channels of interaction: the bulk of it favors the interacting and living-within channels over knowing and perceiving.

Peer-reviewed literature for spirituality, inspiration/fulfillment of imagination and subjective (overall) well-being remains scarce, largely because many of the richest sources of this kind of data are not typically published in traditional journals.

Through this synthesis of the available empirical literature on the contributions of ecosystems, the research team ultimately hopes to bring appropriate attention to these important intangible benefits.
The team also sought to identify what aspects of these relationships have been well studied and what aspects remain hypotheses that are poorly explored empirically.

"We concluded that though there are significant gaps in empirical research, the weight of the literature shows clearly that connections to nature contribute positively to our health and happiness," said co-author Anne Guerry, lead scientist of the Natural Capital Project and a researcher at the Stanford Woods Institute for the Environment.

Additional authors are Patricia Balvanera of the National Autonomous University of Mexico, Rachelle K. Gould of Stanford University, Xavier Basurto of Duke University, and Sarah Klain, Jordan Levine and Jordan Tam of the University of British Columbia.

This work was conducted as a part of the Cultural Ecosystem Services Working Group supported by NCEAS, a center funded by the National Science Foundation, UCSB and the state of California.





* Top image: Four channels of human interactions with ecosystems: (a) knowing – thinking about an ecosystem or just the concept of an ideal ecosystem; (b) perceiving – remote interactions with ecosystem components; (c) interacting – physical, active, direct multisensory interactions with ecosystem components; and (d) living within – everyday interactions with the ecosystem in which we live. Credit: UCSB

** Middle image: A synthesis of the overall quantity of relevant empirical literature. The size of the circle in each cell indicates the amount of research, with small circles indicating minimal research and large circles indicating plentiful research. The generalizability of the research available is represented by cell shading: Red indicates that most research focuses on very specific aspects of the channel-constituent pair and green indicates broadly applicable research. Credit: UCSB


MIT News Release: ‘Anklebot’ helps determine ankle stiffness

Data could aid in rehabilitation from strokes, other motor disorders


CAMBRIDGE, MA -- For most healthy bipeds, the act of walking is seldom given a second thought: One foot follows the other, and the rest of the body falls in line, supported by a system of muscle, tendon, and bones.

Upon closer inspection, however, locomotion is less straightforward. In particular, the ankle — the crucial juncture between the leg and the foot — is an anatomical jumble, and its role in maintaining stability and motion has not been well characterized.

“Imagine you have a collection of pebbles, and you wrap a whole bunch of elastic bands around them,” says Neville Hogan, the Sun Jae Professor of Mechanical Engineering at MIT. “That’s pretty much a description of what the ankle is. It’s nowhere near a simple joint from a kinematics standpoint.”

Now, Hogan and his colleagues in the Newman Laboratory for Biomechanics and Human Rehabilitation have measured the stiffness of the ankle in various directions using a robot called the “Anklebot.”

The robot is mounted to a knee brace and connected to a custom-designed shoe. As a person moves his ankle, the robot moves the foot along a programmed trajectory, in different directions within the ankle’s normal range of motion. Electrodes record the angular displacement and torque in specific muscles, which researchers use to calculate the ankle’s stiffness.
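At its simplest, stiffness is the ratio of torque to angular displacement. The snippet below is only an illustrative sketch of that relationship on synthetic numbers, not the study's analysis code; the stiffness value, displacement range and noise level are all assumptions.

```python
# Illustrative sketch: estimate quasi-static ankle stiffness as the slope of
# torque vs. angular displacement (synthetic data, not measurements from the study).
import numpy as np

rng = np.random.default_rng(0)
angle_rad = np.linspace(-0.2, 0.2, 200)          # imposed ankle rotation, in radians
TRUE_STIFFNESS = 30.0                            # assumed N*m/rad, for the demo only
torque_nm = TRUE_STIFFNESS * angle_rad + rng.normal(0.0, 0.5, angle_rad.size)

# Least-squares fit of torque = k * angle + b; the slope k estimates stiffness.
k, b = np.polyfit(angle_rad, torque_nm, 1)
print(f"estimated stiffness: {k:.1f} N*m/rad")
```

Repeating such a fit for rotations in different directions (toe-up/toe-down, side to side, inward) is one simple way to picture how direction-dependent stiffness values, like those reported below, can be compared.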

From their experiments with healthy volunteers, the researchers found that the ankle is strongest when moving up and down, as if pressing on a gas pedal. The joint is weaker when tilting from side to side, and weakest when turning inward.

Interestingly, their measurements indicate that the motion of the ankle from side to side is independent of the ankle’s up and down movement. The findings, Hogan notes, may help clinicians and therapists better understand the physical limitations caused by strokes and other motor disorders.

The researchers report their findings in the journal IEEE Transactions on Neural Systems and Rehabilitation Engineering. The paper’s co-authors are Hyunglae Lee, Patrick Ho, and Hermano Krebs from MIT and Mohammad Rastgaar Aagaah from Michigan Technological University.

A robotic walking coach

Hogan and Krebs, a principal research scientist in MIT’s Department of Mechanical Engineering, developed the Anklebot as an experimental and rehabilitation tool. Much like MIT-Manus, a robot they developed to improve upper-extremity function, the Anklebot is designed to train and strengthen lower-extremity muscles in a “cooperative” fashion, sensing a person’s ankle strength and adjusting its force accordingly.

The team has tested the Anklebot on stroke patients who experience difficulty walking. In daily physical therapy sessions, patients are seated in a chair and outfitted with the robot. Typically during the first few sessions, the robot does most of the work, moving the patient’s ankle back and forth and side to side, loosening up the muscles, “kind of like a massage,” Hogan says. The robot senses when patients start to move their ankles on their own, and adapts by offering less assistance.

“The key thing is, the machine gets out of the way as much as it needs to so you do not impose motion,” Hogan says. “We don’t push the limb around. You the patient have to do something.”

Many other robotic therapies are designed to do most of the work for the patient in an attempt to train the muscles to walk. But Hogan says such designs are often not successful, as they impose motion, leaving little room for patients to move on their own.

“Basically you can fall asleep in these machines, and in fact some patients do,” Hogan says. “What we’re trying to do with machines in therapy is equivalent to helping the patients, and weaning them off the dependence on the machine. It’s a little bit like coaching.”

Ankle mechanics

In their most recent experiments, the researchers tested the Anklebot on 10 healthy volunteers to characterize the normal mechanics of the joint.

Volunteers were seated and outfitted with the robot, as well as surface electrodes attached to the ankle’s four major muscles. The robot was connected to a video display with a pixelated bar that moved up and down, depending on muscle activity. Each volunteer was asked to activate a specific muscle — for example, to lift the foot toe-up — and maintain that activity at a target level, indicated by the video bar. In response, the robot pushed back against the ankle movement, as volunteers were told not to resist the robot’s force.

The researchers recorded each muscle’s activity in response to the robot’s opposing force, and plotted the results on a graph. They found that in general, the ankle was stiffest when toe-up or toe-down, while less stiff from side to side. When turning inward, the ankle was least stiff — a finding that suggests this direction of movement is most vulnerable to injury.

Understanding the mechanics of the ankle in healthy subjects may help therapists identify abnormalities in patients with motor disorders. Hogan adds that characterizing ankle stiffness may also be useful in designing safer footwear — a field he is curious to explore.

“For example,” Hogan says, “could we make aesthetically pleasing high heels that are stiffer in the inversion/eversion [side to side] direction? What is that effect, and is it worth doing? It’s an interesting question.”

For now, the team will continue its work in rehabilitation, using the Anklebot to train patients to walk.
###

Written by Jennifer Chu, MIT News Office



MIT News Release: Building culture in digital media

Fox Harrell’s new book presents a ‘manifesto’ detailing how computing can create powerful new forms of expression and culture


CAMBRIDGE, Mass-- The video game “Grand Theft Auto V,” which recently grossed $1 billion in its first three days on sale, is set in the fictional city of Los Santos. But if you’ve played the game, you probably don’t need anyone to tell you that Los Santos is a simulation of Los Angeles. The setting, the characters, and the objects in the game all draw upon — and reinforce — a reservoir of existing cultural images about theft, violence, urban life, and other aspects of U.S. society.

Such elements of stories, and indeed many cultural images based on particular worldviews, are “phantasms,” as MIT associate professor of digital media Fox Harrell writes in his new book about computing and expression. A phantasm, as Harrell writes, is “an image integrated with cultural knowledge and beliefs.” Such images help imbue stories with meaning — constructing imaginative worlds that may affect an audience member’s understanding of society or even sense of self, for better or for worse. 

Harrell’s book, “Phantasmal Media,” published this week by MIT Press, outlines an approach to analyzing many forms of digital media that prompt these images in users, and then building computing systems — seen in video games, social media, e-commerce sites, or computer-based artwork — with enough adaptability to let designers and users express a wide range of cultural preferences, rather than being locked into pre-existing options. 

“A lot of people take interfaces we use every day in media, such as online stores or video games, for granted,” says Harrell, who is a faculty member in both MIT’s Program in Comparative Media Studies/Writing and the Computer Science and Artificial Intelligence Laboratory. “They think that’s just the way the world is structured. But when we see images or characters in a video-game world, or when we see a virtual world, developers are building values into all these systems.”

Poetry and programming
Why does it matter which images we process? Because it affects the way we think about ourselves, for one thing. In a famed 1947 experiment that Harrell notes in the book, African-American children were asked to play with two dolls that were identical except for their coloration: One was pale, blue-eyed, and blond, and the other was darker-skinned, brown-eyed, and dark-haired. The study showed that a majority of the children thought the light doll looked “nice” and that the darker doll looked “bad.” 
Clearly, “the children had internalized negative self-conceptions,” as Harrell states in the book, which, he adds, were “based on the dominant worldview of the time.”

In much of “Phantasmal Media,” however, Harrell argues that, conversely, it is possible to build empowering phantasms, rather than oppressive ones, and finds examples ranging from the website of a creative record label to works of science fiction. Such novel imagery can also reveal phantasms, shaking up habits of content development that may otherwise rely on conventional cultural assumptions. 

“It’s not that people are engineering values into images with the aim of manipulating everyone,” Harrell says. “But people are building systems based on their training and experiences, and at some point there are subjective decisions being made and values are being implemented into these systems.” 
And precisely because computational media are expanding, Harrell — who also founded and directs MIT’s Imagination, Computation, and Expression Laboratory (ICE Lab) — would like to seize the moment and nudge designers, programmers, and engineers in the direction of creating content with depth and meaning, while exercising sharp self-awareness of their own cultural assumptions. In one chapter of the book, Harrell takes some of the symbolic analysis that cognitive scientists have produced about poet Robert Frost’s “The Road Not Taken” and presents, in tandem, an analysis of developer Jason Rohrer’s 2007 video game “Passage,” which is built thematically around life, mortality, and death. Then, Harrell charts in detail the programming decisions that go into a game such as “Passage.”

The larger point is not that creative expression should always involve weighty, inward-looking content like Frost’s poems or Rohrer’s game, but that it is possible to think systematically about the values embodied in media works, and create various blueprints for digital designers today. In so doing, programmers can think about how to build media works such as games that express the thoughts and feelings of players, rather than games in which players deploy characters representing familiar cultural tropes. 

“On the engineering side, people want something that can be rigorously pinned down,” Harrell says. “I’m showing how you can describe the structure of these [digital media] systems in precise mathematical ways, and use very structured tools to think about their values.”

A manifesto for computational media
Scholars have responded well to “Phantasmal Media”: George Lewis, a music professor at Columbia University, calls it a “bold and audacious view of the relationship between computing and the imagination,” and adds that it “is what a groundbreaking book looks like.” 

Harrell is not explicitly judgmental about the mass-market video game content that produces blockbuster hits. Instead, his project is meant to spur people to think about the creative possibilities of digital media — that it can enable products and programs other than adrenaline-heavy games. 

“The powerful thing for me about many works of art, literature, and cinema, is their ability to both create imaginative worlds and poetically express ideas that cause us to reflect upon and even change our societies and cultures,” Harrell says. “If you look at art that contains that kind of poetic social commentary, you can ask what would it take for computing to get there? This is a manifesto to say that computational media has that potential.” 


Written by Peter Dizikes, MIT News Office



UCSB Anthropologist Examines the Motivating Factors Behind Hazing


(Santa Barbara, Calif.) –– It happens in military units, street gangs and even among athletes on sports teams. In some cultures, the rituals mark the transition from adolescence to adulthood. And in fraternities and sororities, it's practically a given.

With a long history of seemingly universal acceptance, the practice of hazing is an enduring anthropological puzzle. Why have so many cultures incorporated it into their group behavior? Aldo Cimino, a lecturer in the Department of Anthropology at UC Santa Barbara, seeks to answer that question. His work is highlighted in the online edition of the journal Evolution and Human Behavior.

"Hazing exists in radically different cultures around the world, and the ethnographic record is replete with examples of initiation rites that include hazing," said Cimino. "It is a practice that cultures continually rediscover and invest themselves in. The primary goal of my research is to understand why."

One hypothesis Cimino is exploring involves evolved psychology. "The human mind may be designed to respond to new group members in a variety of ways, and one of those ways may be something other than a hug," he said. "I'm not claiming that hazing is inevitable in human life, that everyone will haze, or that nothing will reduce hazing. But I am suggesting that the persistence of hazing across different social, demographic and ecological environments suggests that our shared, evolved psychology may be playing a role."

Hazing and bullying have a lot in common –– individuals who possess some kind of power abuse those who don't –– but what makes hazing strange, according to Cimino, is that it's directed at future allies. "It's very rare for bullies to say, ‘I'm going to bully you for three months, but after that we're going to be bros,' but that's the sort of thing that happens with hazing."

Cimino suggested that in some human ancestral environments, aspects of hazing might have served to protect veteran members from threats posed by newcomers. "It's almost as though the period of time around group entry was deeply problematic," he said. "This may have been a time during which coalitions were exploited by newcomers. Our intuitions about how to treat newcomers may reflect this regularity of the past. Abusing newcomers –– hazing –– may have served to temporarily alter their behavior, as well as select out uncommitted newcomers when membership was non-obligatory."

Cimino performed a study on a representative sample of the United States, in which participants imagined themselves as members of hypothetical organizations. Organizations that participants believed had numerous benefits for newcomers (e.g., status, protection) were also those that inspired more hazing. "In my research I've found that group benefits that could quickly accrue for newcomers –– automatic benefits –– predict people's desire to haze," he said.

"This isn't the only variable that matters –– there's some effect of age and sex, for example –– but the effect of automatic benefits suggests that potential vectors of group exploitation alter people's treatment of newcomers in predictable ways," Cimino continued.

He cautioned that scientists are a long way from understanding hazing completely. "Hazing is a complex phenomenon that has more than one cause, so it would be a mistake to believe that I have solved the puzzle. However, every study brings us a little closer to understanding a phenomenon that seems increasingly visible and important," he said.



21 October 2013

Escaping the warmth: Atlantic cod conquers the Arctic


As a result of climate change, the Atlantic cod has moved so far north that its juveniles can now be found in large numbers even in the fjords of Spitsbergen. This is the conclusion reached by biologists of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI), following an expedition to this specific region of the Arctic Ocean, which used to be dominated by the Polar cod. The scientists now plan to investigate whether the two cod species compete with each other and which species can adapt more easily to the altered habitats in the Arctic.

RV Heincke in Spitsbergen. (Photo: K. Baer, Alfred Wegener Institute)

August 2013. The German research vessel Heincke is heading towards the waters off the northeastern coast of Spitsbergen. On board, six biologists from the Alfred Wegener Institute are preparing for the first fishing haul of their Arctic expedition. They want to catch juvenile Atlantic cod and Polar cod at the 80th parallel north. But as the vessel reaches its destination, the thermometer shows a water temperature of 4.5 degrees Celsius. Far too warm for the Polar cod, which prefers temperatures of around 0 degrees. “These warm water masses come from the Atlantic and in the summer months are superimposed on the cold Arctic water masses from the Barents Sea in the fjords,” explains Dr. Felix Mark, a biologist at the Alfred Wegener Institute.


The Polar cod Boreogadus saida. Photo: Hauke Flores, Alfred-Wegener-Institut


He is leading the ship expedition to the Arctic. Together with his AWI colleagues and a PhD student from the Heinrich Heine University in Düsseldorf he wants to investigate the spread of Atlantic cod and Polar cod in the fjords of Spitsbergen. However, after the first haul mostly juvenile Atlantic cod are thrashing around in the net: a sign of fundamental changes in the Arctic. “The rising water temperatures mean that Atlantic cod is finding an ideal habitat here. We expect that the juveniles of this species, which used to be at home in the North Sea, are already dominating the warmer surface waters around Spitsbergen,” explains Dr. Felix Mark.

His question now is whether, and to what extent, Atlantic and Polar cod compete with each other, and to what extent increasing ocean acidification influences any such rivalry. “Ocean acidification presumably not only has an effect on the bodily functions of both fish species but also influences their prey,” says Dr. Felix Mark.

Whilst the Atlantic cod hunts various copepods, sea butterflies and also small fish, and therefore enjoys a varied diet, the Polar cod only has its sights set on certain types of crustaceans. However, if these crustaceans were available only in small amounts due to the increasing acidification of the Arctic waters, the Polar cod would be left with little to eat. “The aim of our expedition to Spitsbergen was therefore to catch Atlantic cod, Polar cod and their main prey, the copepods, and to transport the animals alive to Bremerhaven. Only in our laboratories do we have the opportunity to examine how the fish and the zooplankton react to a drop in the pH value of the water,” says the biologist.

He and his colleagues suspect that the Atlantic cod can adapt better to increased ocean acidification and will therefore be able to displace the Polar cod from the common habitat in this way. “A fight for the upper hand like this would have far-reaching consequences for the Arctic ecosystem because the Polar cod is an important part of the Arctic food web and food for other fish species as well as birds and marine mammals such as whales or seals,” says Dr. Felix Mark.

The investigations into the Atlantic and Polar cod are part of BIOACID, the German national research project on ocean acidification. The name is an acronym for “Biological Impacts of Ocean Acidification”; within the project, 14 institutes explore how marine organisms react to ocean acidification and what the consequences are for the food web, for marine ecosystems and ultimately also for the economy and society. Dr. Felix Mark heads the fish consortium within this programme.


Your contact partners are Dr. Felix Mark, tel. 0049 471 4831-1015 (e-mail: Felix-Christopher.Mark(at)awi.de) and Sina Löschke, Communications and Media Department, tel. 0049 471 4831-2008 (e-mail: medien(at)awi.de).

Further information on ocean acidification research at the Alfred Wegener Institute is also to be found in the “Focus” column of the AWI website: http://www.awi.de/en/news/focus/2013/ocean_acidification/.  

The Alfred Wegener Institute also reported on the Atlantic cod in its press release of 17 April 2013: http://www.awi.de/en/news/press_releases/detail/item/more_stress_for_atlantic_cod/?tx_list_pi1%5Bmode%5D=6&cHash=7fe268dc99439908bd0a05fa97e9b1db

Follow the Alfred Wegener Institute on Twitter and Facebook for all current news and information on everyday stories from the life of the Institute. 

The Alfred Wegener Institute conducts research in the Arctic and Antarctic and in the high and mid-latitude oceans.  The Institute coordinates German polar research and provides important infrastructure such as the research icebreaker Polarstern and stations in the Arctic and Antarctic to the international scientific world. The Alfred Wegener Institute is one of the 18 research centres of the Helmholtz Association, the largest scientific organisation in Germany.

BIOACID in brief: under the umbrella of BIOACID (“Biological Impacts of Ocean Acidification”), 14 institutes explore how marine organisms react to ocean acidification and the impact on the food web, the ecosystems in the sea and ultimately also on the economy and society. The project started in 2009 and moved into a second three-year phase in September 2012. Germany's Federal Ministry of Education and Research (BMBF) supports the current work with EUR 8.77 million. A list of the member institutions, information on the scientific programme and the BIOACID committees, as well as facts on ocean acidification, are available at www.bioacid.de.

19 October 2013

2013 World Food Prize Laureates Address Crowd, Urge Advances in Biotechnology


Des Moines, Iowa (Oct. 18, 2013) -- The 2013 World Food Prize Laureates - recognized as the original pioneers of agricultural biotechnology - today addressed a crowd of over 1,000 colleagues and others at the World Food Prize Borlaug Dialogue, urging the world to advance biotechnology to help feed the growing population.
Video footage of their remarks today at the Laureate Luncheon is at this link (starting at the 1:04 time mark).
Credit: The World Food Prize
Dr. Marc Van Montagu of Belgium, and Dr. Mary-Dell Chilton and Dr. Robert T. Fraley of the United States were officially awarded the 2013 World Food Prize during a ceremony Thursday night at the Iowa Capitol, in the home state of Dr. Norman Borlaug, founder of the World Food Prize and a Nobel Peace Prize Laureate recognized for saving 1 billion lives with his innovations in plant breeding and farming. The award citation and biographies of the laureates are available at www.worldfoodprize.org/laureates.

Quotes from their remarks at the Laureate Award Ceremony last night:

Mary-Dell Chilton:
"Our work, which began as curiosity-driven, fundamental research, now finds worldwide application in agriculture with great promise of benefitting all mankind. Nothing could be more gratifying than that.”
“The choice of plant biotechnology researchers for the World Food Prize 2013 recognizes the valuable contribution of this science to agriculture. When I began this work, I scarcely could have imagined the profound effect it would have on agriculture today. Neither could I have imagined the controversy that has accompanied our discoveries and advances. It is my hope that we can put to rest this misguided opposition and convince the public of the safety, benefit, and ecological value of this new and useful technology. It is a wonderful tool for plant breeders to help them grow food for a hungry future. We will need it.”
Credit: The World Food Prize
"I also thank the committee for its role in increasing the recognition of the contribution of women to science and innovation. I also hope that school-age girls around the world will be encouraged to pursue science and know that their achievements can make important contributions to society.”

Robert T. Fraley:
“I’d also like to thank the selection committee, Ambassador Quinn, and Mr. Ruan for recognizing biotechnology. That took courage. And we really appreciate the forum that it provides to have this really important discussion about the role of innovation in technology and in agriculture.”
“I’d like to accept the award on behalf….of the plant scientists across the industry and academia who have worked so hard to get us to this point and importantly are going to work so hard to get us to where we need to be in the future.”
“It just seems like yesterday that I was in the laboratory…trying to figure out how to put genes into petunia plants and now we’ve got biotech crops growing in 30 countries around the world. And there’s so much more potential to come. We’re going to need it because we need to double the food supply to feed 9 billion people by 2050. I think we can do it. But it’s a tremendous challenge. And it is the greatest challenge facing us and all mankind in the future.”
“You know, I think if Dr. Borlaug were here this evening he’d be pretty proud of the scientific progress we’ve made. And then the first thing he would ask is, ‘How are we doing on that rust gene in wheat?’ And then he’d ask me, ‘Now how does that drought gene really do this year in field trials?’ And before he did anything else, he would look around and he would probably have a conversation with every student who’s in this beautiful place, because he loved youth. And then it would take about that long (snaps fingers) and Norm would say, ‘We’ve got a lot of work to do, let’s get on with it.’”

Marc Van Montagu:
“I’m very, very grateful to the World Food Prize Foundation that they have recognized plant genetic engineering as a tool that will bring and that has already proven that it can bring progress for feeding those who need it most. But I’m frustrated, of course, that it takes so long. So long before this technology can help those who need it most.”
"And I hope that with the effort that the foundation has done, with the former laureates and with all the colleagues that are here, that we can mobilize to explain why it is needed. And I just want to stress that it’s – the fact that from fundamental research, looking how in nature bacteria makes transgenic plant, that that start must – should for many bring – that it is the priority to do outstanding fundamental research and be ready and that we should have the structures, that this science can then be applied. And for that, people in fundamental research should have close, close links with breeders. And that is what here at the World Food Prize is present. That breeders and the hope that the molecular biologists with the breeders can act better for making the products."

A recording of the 2013 World Food Prize Laureate Award Ceremony is available at this link.  

Bios are available at www.worldfoodprize.org/laureates.

ABOUT THE WORLD FOOD PRIZE:  The World Food Prize was founded in 1986 by Dr. Norman E. Borlaug, recipient of the 1970 Nobel Peace Prize. Since then, The World Food Prize has honored outstanding individuals who have made vital contributions to improving the quality, quantity or availability of food throughout the world. The Prize also hosts the annual Borlaug Dialogue international symposium on global food security issues and a variety of youth programs that aim to inspire the next generation to work in the fields surrounding global agriculture.


Media Contact: Megan Forgrave, Director of Communications, mforgrave@worldfoodprize.org or 515-229-1705 (Cell)

Contact information: The World Food Prize Foundation, 666 Grand Ave., Ste. 1700, Des Moines, IA 50309-2500

18 October 2013

DLR begins operating new test facility at Jülich solar tower


Researchers at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) have started operating a receiver test facility on the tower of the solar power plant in Jülich. In a solar power plant, solar radiation is converted into heat in the receiver. The test facility is located on a research level integrated in the tower beneath the main receiver. A new generation of solar receivers, developed under the leadership of DLR, is intended to significantly boost efficiency in the conversion of solar energy into heat and electricity and thus to make the technology more cost effective.
Sunlight is reflected off over 2000 mirrors onto the high-temperature receiver of the power plant tower in Jülich. Credit: DLR (CC-BY 3.0).

Hot air – efficient and always available
Sunlight is reflected off over 2000 mirrors onto the high-temperature receiver of the power plant tower in Jülich. Researchers at the DLR Institute of Solar Research are pursuing a novel approach in which porous ceramic cubes absorb the solar radiation, converting it into heat and generating temperatures surpassing 700 degrees Celsius. The thermal energy produced is transferred to ambient air, which passes over the receiver and carries the energy to a process within the power plant. By using the constantly available ambient air, this solar receiver offers a particularly high degree of operational robustness and is therefore ideal for use in dry, sunny regions.
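As a rough back-of-envelope illustration (with assumed numbers, not DLR measurements), receiver thermal efficiency can be thought of as the heat carried off by the air stream divided by the concentrated solar power arriving at the receiver:

```python
# Back-of-envelope sketch with assumed numbers (not DLR data): receiver thermal
# efficiency = heat gained by the air stream / concentrated solar power input.
CP_AIR = 1.005            # kJ/(kg*K), approximate specific heat of air
M_DOT = 10.0              # kg/s air mass flow through the receiver (assumed)
T_IN, T_OUT = 25.0, 700.0 # deg C, ambient inlet and hot outlet temperature (assumed)
P_SOLAR_KW = 8000.0       # kW of concentrated sunlight on the receiver (assumed)

p_thermal_kw = M_DOT * CP_AIR * (T_OUT - T_IN)   # heat carried away by the air
efficiency = p_thermal_kw / P_SOLAR_KW
print(f"thermal power: {p_thermal_kw:.0f} kW, receiver efficiency: {efficiency:.0%}")
```

Improving the heat exchange between the ceramic and the air, which is what the finer structures described below aim at, raises the useful thermal output for a given solar input.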
The main receiver, a type known as an open volumetric receiver, is mounted at the top of the 60-metre tower in Jülich and already operates using this principle. The efficiency of the overall system was measured for the first time during trials conducted at the solar power plant, which has an electrical output of 1.5 megawatts. "The purpose of this test facility is to continue developing this principle," says Peter Schwarzbözl, who heads the project at the Institute of Solar Research. "The research level lets us easily modify the test receiver and it also allows the use of extensive metrology to accurately monitor its operation. These are ideal conditions for increasing the efficiency of the technology."
Finer honeycomb and new materials
Over the next two years, the researchers will test finer pores in the honeycomb ceramic structure. They will also investigate ceramic materials that have a sponge-like structure. Both of these innovations will provide very large surfaces for effective thermal exchange with air flowing through. New materials such as metal alloys, allowing even finer porous structures, will also be tested. "High operating temperatures are a major advantage for tower technology, making the conversion of solar energy into electricity very efficient. This advantage will be even clearer if we succeed in further enhancing the receiver efficiency," says Schwarzbözl. The research project INDUSOL (Industrialisation of Ceramic Solar Components) is being conducted in cooperation with the DLR Institute of Materials Research.
Another research project, SiBopS (Simulationsunterstützte Betriebsoptimierung für Solarturmkraftwerke – simulation-assisted operational optimisation for solar tower power plants), is developing, among other things, a software-based method for optimising the target points at which the mirrors direct the radiation onto a receiver. Here, the test receiver is used mainly to validate the simulation models created over recent years at DLR. The test receiver will then be available for additional projects, targeted at continuing the development of open volumetric receiver technology. The insight this yields will be incorporated into the design of future commercial power plants.
The construction, commissioning and operation of the test facility are being financed by the State of North-Rhine Westphalia and the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety in accordance with a decision of the German Parliament. Funded by the State of North-Rhine Westphalia, the experimental solar-thermal power plant in Jülich is undergoing further development, incorporating additional facilities constructed in association with partners from industry and research, to become a centre for solar research.

Contacts

Dorothee Buerkle
German Aerospace Center (DLR)
Tel.: +49 2203 601-3492
Fax: +49 2203 601-3249
mailto:Dorothee.Buerkle@dlr.de 

Peter Schwarzboezl
German Aerospace Center (DLR)
DLR Institute of Solar Research
Tel.: +49 2203 601-2967
mailto:Peter.Schwarzboezl@dlr.de 

MIT News Release: Automatic speaker tracking in audio recordings

A new system dispenses with the human annotation of training data required by its predecessors but achieves comparable results


CAMBRIDGE, Mass-- A central topic in spoken-language-systems research is what’s called speaker diarization, or computationally determining how many speakers feature in a recording and which of them speaks when. Speaker diarization would be an essential function of any program that automatically annotated audio or video recordings.
To date, the best diarization systems have used what’s called supervised machine learning: They’re trained on sample recordings that a human has indexed, indicating which speaker enters when. In the October issue of IEEE Transactions on Audio, Speech, and Language Processing, however, MIT researchers describe a new speaker-diarization system that achieves comparable results without supervision: No prior indexing is necessary. 
Moreover, one of the MIT researchers’ innovations was a new, compact way to represent the differences between individual speakers’ voices, which could be of use in other spoken-language computational tasks.
“You can know something about the identity of a person from the sound of their voice, so this technology is keying in to that type of information,” says Jim Glass, a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and head of its Spoken Language Systems Group. “In fact, this technology could work in any language. It’s insensitive to that.”
To create a sonic portrait of a single speaker, Glass explains, a computer system will generally have to analyze more than 2,000 different acoustic features; many of those may correspond to familiar consonants and vowels, but many may not. To characterize each of those features, the system might need about 60 variables, which describe properties such as the strength of the acoustic signal in different frequency bands. 

E pluribus tres
The result is that for every second of a recording, a diarization system would have to search a space with 120,000 dimensions, which would be prohibitively time-consuming. In prior work, Najim Dehak, a research scientist in the Spoken Language Systems Group and one of the new paper’s co-authors, had demonstrated a technique, dubbed the i-vector, for reducing the number of variables required to describe the acoustic signature of a particular speaker.
To get a sense of how the technique works, imagine a graph that plotted, say, hours worked by an hourly worker against money earned. The graph would be a diagonal line in a two-dimensional space. Now imagine rotating the axes of the graph so that the x-axis is parallel to the line. All of a sudden, the y-axis becomes irrelevant: All the variation in the graph is captured by the x-axis alone.
Similarly, i-vectors find new axes for describing the information that characterizes speech sounds in the 120,000-dimension space. The technique first finds the axis that captures most of the variation in the information, then the axis that captures the next-most variation, and so on. So the information added by each new axis steadily decreases.
Stephen Shum, a graduate student in MIT’s Department of Electrical Engineering and Computer Science and lead author on the new paper, found that a 100-variable i-vector — a 100-dimension approximation of the 120,000-dimension space — was an adequate starting point for a diarization system. Since i-vectors are intended to describe every possible combination of sounds that a speaker might emit over any span of time, and since a diarization system needs to classify only the sounds on a single recording, Shum was able to use similar techniques to reduce the number of variables even further, to only three.
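The axis-finding step described above is, in spirit, what principal component analysis does. The sketch below shows that generic mechanics on toy data; it is not the actual i-vector extractor, and the array sizes are assumptions chosen only to mirror the dimensions mentioned in the article.

```python
# Generic principal-component reduction on toy data (not the i-vector extractor):
# find the directions of greatest variance and keep only the strongest few.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(5000, 60))          # toy acoustic feature vectors (assumed)

centered = frames - frames.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvectors of the covariance matrix
order = np.argsort(eigvals)[::-1]             # sort axes by explained variance
top3 = eigvecs[:, order[:3]]                  # keep the three strongest directions

reduced = centered @ top3                     # each frame becomes a 3-D point
print(reduced.shape)                          # -> (5000, 3)
```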

Birds of a feather
For every second of sound in a recording, Shum thus ends up with a single point in a three-dimensional space. The next step is to identify the bounds of the clusters of points that correspond to the individual speakers. For that, Shum used an iterative process. The system begins with an artificially high estimate of the number of speakers — say, 15 — and finds a cluster of points that corresponds to each one.
Clusters that are very close to each other then coalesce to form new clusters, until the distances between them grow too large to be plausibly bridged. The process then repeats, beginning each time with the same number of clusters that it ended with on the previous iteration. Finally, it reaches a point at which it begins and ends with the same number of clusters, and the system associates each cluster with a single speaker.
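A toy version of that merge-until-stable loop might look like the following sketch. It is not the authors' published method; the starting cluster count, merge threshold and synthetic data are all assumptions.

```python
# Toy sketch of the merge-until-stable loop (not the authors' published method):
# over-cluster the 3-D points, merge centres closer than a threshold, and repeat
# until a pass begins and ends with the same number of clusters.
import numpy as np
from sklearn.cluster import KMeans

def estimate_speakers(points, start_k=15, merge_dist=1.0, seed=0):
    k = start_k
    while True:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(points)
        centres = list(km.cluster_centers_)
        i = 0
        while i < len(centres):               # greedily merge nearby centres
            j = i + 1
            while j < len(centres):
                if np.linalg.norm(centres[i] - centres[j]) < merge_dist:
                    centres[i] = (centres[i] + centres[j]) / 2.0
                    centres.pop(j)
                else:
                    j += 1
            i += 1
        if len(centres) == k:                 # began and ended with the same count
            return km.labels_                 # one cluster per presumed speaker
        k = len(centres)                      # rerun with the reduced cluster count

# Synthetic example: three well-separated "speakers" in the 3-D space.
rng = np.random.default_rng(0)
blobs = [rng.normal(c, 0.2, size=(200, 3)) for c in ([0, 0, 0], [3, 0, 0], [0, 3, 0])]
labels = estimate_speakers(np.vstack(blobs))
print("speakers found:", len(set(labels)))
```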
 “What was completely not obvious, what was surprising, was that this i-vector representation could be used on this very, very different scale, that you could use this method of extracting features on very, very short speech segments, perhaps one second long, corresponding to a speaker turn in a telephone conversation,” Kenny adds. “I think that was the significant contribution of Stephen’s work.”


Written by Larry Hardesty, MIT News Office

