Latest Technology: NEWS SCIENCE ENGINEERING DEVELOPMENTS/INVENTIONS

Sunday 12 February 2012

Researchers Designing Eye-Enhancing Virtual Reality Contact Lenses for Soldiers

eye enhancing virtual reality contact lenses
As part of the SCENICC program, DARPA researchers are working on futuristic contact lenses that will offer a dual purpose. These lenses will allow users to focus on objects that are close up and far away simultaneously, while enhancing normal vision by allowing a wearer to view virtual and augmented reality images.
Currently being developed by DARPA researchers at Washington-based Innovega iOptiks are contact lenses that enhance normal vision by allowing a wearer to view virtual and augmented reality images without the need for bulky apparatus. Instead of oversized virtual reality helmets, digital images are projected onto tiny full-color displays that are very near the eye. These novel contact lenses allow users to focus simultaneously on objects that are close up and far away. This could improve the ability to use tiny portable displays while still interacting with the surrounding environment.
The lenses are being developed as part of DARPA’s Soldier Centric Imaging via Computational Cameras (SCENICC) program, whose objective is to eliminate the ISR capability gap that exists at the individual Soldier level. The program seeks to develop novel computational imaging capabilities and explore joint design of hardware and software that give warfighters access to systems that greatly enhance their awareness, security and survivability.
Source: DARPA
Image: DARPA

Scientists Develop “PV Value” to Accurately Appraise PV Systems

Sandia National Laboratories has developed a tool that can appraise photovoltaic installations on homes and businesses.
Sandia National Laboratories researcher Geoff Klise worked with Solar Power Electric to develop a tool that can be used to appraise photovoltaic installations on homes and businesses.
Scientists at Sandia National Laboratories and Solar Power Electric teamed up to develop a tool to appraise homes and commercial buildings with photovoltaic (PV) installations. Dubbed PV Value, this electronic form will serve as a standard methodology for appraisers, real estate agents and mortgage underwriters who need to accurately value PV systems.
ALBUQUERQUE, New Mexico – Consistent appraisals of homes and businesses outfitted with photovoltaic (PV) installations are a real challenge for the nation’s real estate industry, but a new tool developed by Sandia National Laboratories and Solar Power Electric and licensed by Sandia addresses that issue. Sandia scientists, in partnership with Jamie Johnson of Solar Power Electric, have developed PV Value, an electronic form to standardize appraisals. Funded by the Department of Energy’s Office of Energy Efficiency and Renewable Energy, the tool will provide appraisers, real estate agents and mortgage underwriters with more accurate values for PV systems.
“Previous methods for appraising PV installations on new or existing construction have been challenging because they were not using standard appraisal practices,” said Geoff Klise, the Sandia researcher who co-developed the tool. “Typically, appraisers develop the value of a property improvement based on comparable properties with similar improvements as well as prevailing market conditions. If there aren’t PV systems nearby, there is no way to make an improvement comparison. When a PV system is undervalued or not valued at all, it essentially ignores the value of the electricity being produced and the potential savings over the lifetime of the system. By developing a standard methodology for appraisers when comparables are not available, homeowners will have more incentive to install PV systems, even if they consider moving a few years after system installation.”
The tool uses an Excel spreadsheet, tied to real-time lending information and market fluctuations, to determine the worth of a PV system. An appraiser enters such variables as the ZIP code where the system is located, the system size in watts, the derate factor – which takes into account shading and other factors that affect a system’s output – tracking, tilt and azimuth, along with a few other factors, and the spreadsheet returns the value of the system as a function of a pre-determined risk spread. The solar resource calculation in the spreadsheet is based on the PVWatts simulator developed by the National Renewable Energy Laboratory, which allows the spreadsheet to value a PV system anywhere in the U.S.
“With PV Value, appraisers can quickly calculate the present value of energy that a PV system can be estimated to produce during its remaining useful lifetime, similar to the appraisal industry’s income approach,” said Johnson. “Additionally, a property owner thinking about installing PV can now estimate the remaining present value of energy for their future PV system and what it could be worth to a purchaser of their property at any point in time in the event a sale of the property takes place before the estimated payback date is reached.”
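In rough terms, the income approach Johnson describes discounts the energy a system is expected to produce over its remaining useful life back to a present value. The sketch below is a minimal illustration of that idea, not the actual Sandia spreadsheet; the function name, the degradation rate and all input values are assumptions for the example.

```python
# A minimal sketch of the income-approach idea behind PV Value, not the actual
# Sandia spreadsheet. The function name, the degradation rate and all inputs
# are illustrative assumptions.
def pv_system_value(annual_kwh, electricity_price, years_remaining,
                    discount_rate, degradation=0.005):
    """Present value of the energy a PV system is expected to produce."""
    value = 0.0
    for year in range(1, years_remaining + 1):
        production = annual_kwh * (1 - degradation) ** year  # output fades slowly
        value += production * electricity_price / (1 + discount_rate) ** year
    return value

# e.g., a system producing ~7,500 kWh/yr at $0.12/kWh with 20 years of useful
# life left, discounted at an assumed 7 percent risk spread:
print(f"${pv_system_value(7500, 0.12, 20, 0.07):,.0f}")
```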
The tool is being embraced by the Appraisal Institute, which is the nation’s largest professional association of real estate appraisers. “From my perspective as an appraiser, I see that this is a great tool to assist the appraiser in valuations, and it connects to the Appraisal Institute’s recent Residential Green and Energy Efficient Addendum. It’s an easy, user-friendly spreadsheet that will not bog the appraiser down with a lot of extra time in calculations, and if they fill out the addenda properly, they’ll be able to make the inputs and come up with some numbers fairly quickly,” said Sandy Adomatis, SRA, a real estate appraiser and member of the Appraisal Institute.
Although the tool is licensed for solar PV installations, it could be used for other large green features in a home that generate income, such as wind turbines. The spreadsheet, user manual and webinar explaining the tool are available for download at http://pv.sandia.gov/pvvalue.
Solar Power Electric located in Port Charlotte, Fla., is an electrical contracting and solar integration company specializing in the installation of commercial and residential photovoltaic systems.
Source: Sandia National Laboratories
Image: Randy Montoya, Sandia National Laboratories

Researchers Study Butterfly Flight Dynamics to Create Small Airborne Robots

Engineers at Johns Hopkins are studying butterflies using high-speed video cameras to gain a better understanding of their flight dynamics. With funding from U.S. defense agencies, the researchers hope to use this knowledge to create micro aerial vehicles that will mimic the butterflies’ airborne maneuvers and carry out reconnaissance, search-and-rescue and environmental monitoring missions.
To improve the next generation of insect-size flying machines, Johns Hopkins engineers have been aiming high-speed video cameras at some of the prettiest bugs on the planet. By figuring out how butterflies flutter among flowers with amazing grace and agility, the researchers hope to help small airborne robots mimic these maneuvers.
U.S. defense agencies, which have funded this research, are supporting the development of bug-size flyers to carry out reconnaissance, search-and-rescue and environmental monitoring missions without risking human lives. These devices are commonly called micro aerial vehicles or MAVs.
“For military missions in particular, these MAVs must be able to fly successfully through complex urban environments, where there can be tight spaces and turbulent gusts of wind,” said Tiras Lin, a Whiting School of Engineering undergraduate who has been conducting the high-speed video research. “These flying robots will need to be able to turn quickly. But one area in which MAVs are lacking is maneuverability.”
To address that shortcoming, Lin has been studying butterflies. “Flying insects are capable of performing a dazzling variety of flight maneuvers,” he said. “In designing MAVs, we can learn a lot from flying insects.”
The butterfly research will aid the development of flying bug-size robots. Pictured is an insect-inspired flapping-wing micro air vehicle under development at Harvard. Photo provided by Robert J. Wood, associate professor, and Pratheev Sreetharan, Harvard Microrobotics Lab, Harvard University.
Lin’s research has been supervised by Rajat Mittal, a professor of mechanical engineering. “This research is important because it attempts to not only address issues related to bio-inspired design of MAVs, but it also explores fundamental questions in biology related to the limits and capabilities of flying insects,” Mittal said.
To conduct this study, Lin has been using high-speed video to look at how changes in mass distribution associated with the wing flapping and body deformation of a flying insect help it engage in rapid aerial twists and turns. Lin, a junior mechanical engineering major from San Rafael, Calif., recently presented some of his findings at the annual meeting of the American Physical Society’s Division of Fluid Dynamics. The student also won second prize for his presentation of this research at a regional meeting of the American Institute of Aeronautics and Astronautics.
“Ice skaters who want to spin faster bring their arms in close to their bodies and extend their arms out when they want to slow down,” Lin said. “These positions change the spatial distribution of a skater’s mass and modify their moment of inertia; this in turn affects the rotation of the skater’s body. An insect may be able to do the same thing with its body and wings.”
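The physics behind the analogy is conservation of angular momentum: with L = I × ω fixed, shrinking the moment of inertia I raises the spin rate ω. A quick illustration with made-up numbers:

```python
# Illustrative numbers only: conservation of angular momentum (L = I * omega)
# means shrinking the moment of inertia I raises the spin rate omega.
I_extended = 2.0     # kg*m^2, assumed inertia with arms/wings extended
omega_initial = 3.0  # rad/s, initial spin rate
I_tucked = 0.8       # kg*m^2, assumed inertia with mass pulled inward

omega_final = I_extended * omega_initial / I_tucked  # I1*w1 = I2*w2
print(f"spin rate rises from {omega_initial} to {omega_final:.1f} rad/s")
```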
Butterflies move too quickly for someone to see these wing tactics clearly with the naked eye, so Lin, working with graduate student Lingxiao Zheng, used high-speed, high-resolution videogrammetry to mathematically document the trajectory and body conformation of painted lady butterflies. They accomplished this with three video cameras capable of recording 3,000 one-megapixel images per second. (By comparison, a standard video camera shoots 24, 30 or 60 frames per second.)
The Johns Hopkins researchers anchored their cameras in fixed positions and focused them on a small region within a dry transparent aquarium tank. For each analysis, several butterflies were released inside the tank. When a butterfly veered into the focal area, Lin switched on the cameras for about two seconds, collecting approximately 6,000 three-dimensional views of the insect’s flight maneuvers. From these frames, the student typically homed in on roughly one-fifth of a second of flight, captured in 600 frames. “Butterflies flap their wings about 25 times per second,” Lin said. “That’s why we had to take so many pictures.”
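For readers keeping track, the figures quoted in the passage are mutually consistent:

```python
# Sanity check of the numbers quoted above (no new data).
fps = 3000                      # frames per second per camera
frames_per_burst = fps * 2      # two-second bursts -> 6,000 frames
analyzed_frames = 600           # frames typically homed in on
window = analyzed_frames / fps  # 0.2 s, i.e. one-fifth of a second
wingbeats = 25 * window         # ~5 wingbeats at 25 flaps per second
print(frames_per_burst, window, wingbeats)
```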
The arrangement of the three cameras allowed the researchers to capture three-dimensional data and analyze the movement of the insects’ wings and bodies in minute detail. That led to a key discovery.
Earlier published research pointed out that an insect’s delicate wings possess very little mass compared to the bug’s body. As a result, those scholars concluded that changes in spatial distribution of mass associated with wing-flapping did not need to be considered in analyzing an insect’s flight maneuverability and stability. “We found out that this commonly accepted assumption was not valid, at least for insects such as butterflies,” Lin said. “We learned that changes in moment of inertia, which is a property associated with mass distribution, play an important role in insect flight, just as arm and leg motion does for ice skaters and divers.”
He said this discovery should be considered by MAV designers and may be useful to biologists who study insect flight dynamics.
Lin’s newest project involves even smaller bugs. With support from a Johns Hopkins Provost’s Undergraduate Research Award, he has begun aiming his video cameras at fruit flies, hoping to solve the mystery of how these insects manage to land upside down on perches.
The insect flight dynamics research was funded by the U.S. Air Force Office of Scientific Research and the National Science Foundation.
Source: Johns Hopkins University
Images: Johns Hopkins University

DARPA’s HACMS Program Seeks to Create New Technology

DARPA Seeks to Improve Embedded Computer Systems Security
DARPA’s High-Assurance Cyber Military Systems program seeks to improve the security of embedded computer systems. To do this, researchers are looking to create new technology for the construction of such systems, adopting a clean-slate, formal methods-based approach that enables semi-automated code synthesis from executable, formal specifications.
Embedded computer systems play a part in every aspect of DoD technology. The software in these systems does everything from managing large physical infrastructures, to running peripherals such as printers and routers, to controlling medical devices such as pacemakers and insulin pumps. Networking these embedded computer systems enables remote retrieval of diagnostic information, permits software updates, and provides access to innovative features, but it also introduces vulnerabilities to the system via remote attack.
“The High-Assurance Cyber Military Systems (HACMS) program seeks to create technology for the construction of systems that are functionally correct and satisfy appropriate safety and security properties,” explained Kathleen Fisher, DARPA program manager. “Our vision for HACMS is to adopt a clean-slate, formal methods-based approach to enable semi-automated code synthesis from executable, formal specifications.”
In addition to generating code, HACMS seeks a synthesizer capable of producing a machine-checkable proof that the generated code satisfies functional specifications as well as security and safety policies. A key technical challenge is the development of techniques to ensure that such proofs are composable, allowing the construction of high-assurance systems out of high-assurance components.
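To make “machine-checkable proof” concrete: in a proof assistant, code and its functional specification live side by side, and a small trusted checker verifies the proof mechanically. The toy Lean 4 snippet below gives only the flavor of the idea; it is not HACMS tooling, and the function and theorem names are invented for the example.

```lean
-- A toy function and a machine-checked proof that it meets its specification.
-- (Illustrative only; HACMS targets far richer properties and code synthesis.)
def double (n : Nat) : Nat := n + n

theorem double_meets_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega  -- a decision procedure for linear arithmetic closes n + n = 2 * n
```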
Key HACMS technologies include semi-automated software synthesis systems, verification tools such as theorem provers and model checkers, and specification languages. HACMS aims to produce a set of publicly available tools integrated into a high-assurance software workbench, widely distributed to both defense and commercial sectors. In the defense sector, HACMS plans to enable high-assurance military systems ranging from unmanned ground, air and underwater vehicles, to weapons systems, satellites, and command and control devices.
Source: DARPA
Image: DARPA

SOLITAIRE Flow Restoration Device Improves the Removal of Stroke-Causing Blood Clots

New device removes stroke-causing blood clots
Research presented at the American Stroke Association’s 2012 international conference shows that the SOLITAIRE Flow Restoration Device removes stroke-causing blood clots far better than the FDA-approved MERCI Retriever device. SOLITAIRE opened blocked vessels without causing symptomatic bleeding in or around the brain in 61 percent of patients, and use of the device led to better survival rates.
An experimental device for removing blood clots in stroke patients dramatically outperformed the standard mechanical treatment, according to research presented by UCLA Stroke Center director Dr. Jeffrey L. Saver at the American Stroke Association’s 2012 international conference in New Orleans on Feb. 3.
The SOLITAIRE Flow Restoration Device is among an entirely new generation of devices designed to remove blood clots from blocked brain arteries in patients experiencing stroke. It has a self-expanding, stent-like design and, once inserted into a clot using a thin catheter tube, it compresses and traps the clot. The clot is then removed by withdrawing the device, thus reopening the blocked blood vessel.
In the first U.S. clinical trial of SOLITAIRE, the device opened blocked vessels without causing symptomatic bleeding in or around the brain in 61 percent of patients. The standard Food and Drug Administration–approved mechanical device — a corkscrew-type clot remover called the MERCI Retriever — was effective in 24 percent of cases.
The use of the new device also led to better survival three months after a stroke. There was a 17.2 percent mortality rate with the new device, compared with a 38.2 percent rate with the older one.
“This new device heralds a new era in acute stroke care,” said Saver, the study’s lead author and a professor of neurology at the David Geffen School of Medicine at UCLA. “We are going from our first generation of clot-removing procedures, which were only moderately good in reopening target arteries, to now having a highly effective tool. This really is a game-changing result.”
About 87 percent of all strokes are caused by blood clots blocking a blood vessel supplying the brain. The stroke treatment that has received the most study is the FDA–approved clot-busting drug known as tissue plasminogen activator, but this drug must be given within four-and-a-half hours after the onset of stroke symptoms, and even more quickly in older patients.
When clot-busting drugs cannot be used or are ineffective, the clot can sometimes be mechanically removed during, or beyond, the four-and-a-half–hour window. The current study, however, did not compare mechanical clot removal to drug treatment.
For the trial, called SOLITAIRE With the Intention for Thrombectomy (SWIFT), researchers randomly assigned 113 stroke patients at 18 hospitals to receive either SOLITAIRE or MERCI therapy within eight hours of stroke onset, between January 2010 and February 2011. The patients’ average age was 67, and 68 percent were male. The time from the beginning of stroke symptoms to the start of the clot-retriever treatment averaged 5.1 hours. Forty percent of the patients had not improved with standard clot-busting medication prior to the study, while the remainder had not received it.
At the suggestion of a safety monitoring committee, the trial was ended nearly a year earlier than planned due to significantly better outcomes with the experimental device.
Other statistically significant findings included:
  • 2 percent of SOLITAIRE-treated patients had symptoms of bleeding in the brain, compared with 11 percent of MERCI patients.
  • At the 90-day follow-up, overall adverse event rates, including bleeding in the brain, were similar for the two devices.
  • 58 percent of SOLITAIRE-treated patients had good mental/motor functioning at 90 days, compared with 33 percent of MERCI patients.
  • The SOLITAIRE device also opened more vessels when used as the first treatment approach, necessitating fewer subsequent attempts with other devices or drugs.
“Nearly a decade ago, our UCLA Stroke Center team invented the first stroke retrieval device — the MERCI Retriever — and now we are pleased to have helped develop and successfully test a superior, next-generation clot removing device,” said Dr. Reza Jahan, associate professor of radiology at UCLA and the study’s principal neurointerventional investigator, who also led the pre-clinical studies. “It is exciting to have a highly effective new tool that can improve the outcomes for more stroke patients.”
Source: Amy Albin, UCLA Newsroom
Image: UCLA Newsroom

Researchers at ESA Develop Augmented Reality Headset for Medical Diagnosis

The Computer Assisted Medical Diagnosis and Surgery System, CAMDASS
The Computer Assisted Medical Diagnosis and Surgery System, CAMDASS, is a wearable augmented reality prototype. Augmented reality merges actual and virtual reality by precisely combining computer-generated graphics with the wearer’s view. CAMDASS is focused for now on ultrasound examinations but in principle could guide other procedures.
Examining astronauts in need of medical help while in space is about to get a lot easier. Researchers at the European Space Agency developed a head-mounted display for 3D guidance in diagnosing problems and performing surgery. By using a stereo head-mounted display and an ultrasound tool tracked via an infrared camera, CAMDASS merges actual and virtual reality by precisely combining computer-generated graphics with the wearer’s view.
A new augmented reality unit developed by ESA can provide just-in-time medical expertise to astronauts. All they need to do is put on a head-mounted display for 3D guidance in diagnosing problems or even performing surgery.
The Computer Assisted Medical Diagnosis and Surgery System, CAMDASS, is a wearable augmented reality prototype.
Augmented reality merges actual and virtual reality by precisely combining computer-generated graphics with the wearer’s view.
CAMDASS is focused for now on ultrasound examinations but in principle could guide other procedures.
Ultrasound is leading the way because it is a versatile and effective medical diagnostic tool, and already available on the International Space Station.
CAMDASS headset being tried out on a plastic head
The CAMDASS headset being tried out on a plastic head during the October 2011 International Symposium on Mixed and Augmented Reality in Basel, Switzerland.
Future astronauts venturing further into space must be able to look after themselves. Depending on their distance from Earth, discussions with experts on the ground will involve many minutes of delay or even be blocked entirely.
“Although medical expertise will be available among the crew to some extent, astronauts cannot be trained and expected to maintain skills on all the medical procedures that might be needed,” said Arnaud Runge, a biomedical engineer overseeing the project for ESA.
CAMDASS uses a stereo head-mounted display and an ultrasound tool tracked via an infrared camera. The patient is tracked using markers placed at the site of interest.
An ultrasound device is linked with CAMDASS and the system allows the patient’s body to be ‘registered’ to the camera and the display calibrated to each wearer’s vision.
3D augmented reality cue cards are then displayed in the headset to guide the wearer. These are provided by matching points on a ‘virtual human’ and the registered patient.
This guides the wearer to position and move the ultrasound probe.
Reference ultrasound images give users an indication of what they should be seeing, and speech recognition allows hands-free control.
The prototype has been tested for usability at Saint-Pierre University Hospital in Brussels, Belgium, with medical and nursing students, Belgian Red Cross and paramedic staff.
Untrained users found they could perform a reasonably difficult procedure without other help, with effective probe positioning.
“Based on that experience, we are looking at refining the system – for instance, reducing the weight of the head-mounted display as well as the overall bulkiness of the prototype,” explained Arnaud.
“Once it reaches maturity, the system might also be used as part of a telemedicine system to provide remote medical assistance via satellite.
“It could be deployed as a self-sufficient tool for emergency responders as well.
“It would be interesting to perform more testing in remote locations, in the developing world and potentially in the Concordia Antarctic base. Eventually, it could be used in space.”
Funded by ESA’s Basic Technology Research Programme, the prototype was developed for the Agency by a consortium led by Space Applications Services NV in Belgium with support from the Technical University of Munich and the DKFZ German Cancer Research Centre.
Source: European Space Agency
Image: ESA/Space Applications Service NV

Scientists Develop Material that Absorbs Carbon Dioxide from the Air

USC scientists develop material that can scrub large amounts of carbon dioxide from the air
By using fumed silica impregnated with polyethylenimine, researchers at the USC Loker Hydrocarbon Research Institute aim to recycle harmful excess carbon dioxide in the atmosphere. Their new material can absorb carbon dioxide from both dry and humid air and can release it simply by heating it up. With ongoing research, the scientists hope this technology will help turn carbon dioxide into a renewable fuel source for humanity.
A team of USC scientists has developed an easy-to-make material that can scrub large amounts of carbon dioxide from the air.
One day in the future, large artificial trees made from the material could be used to lower the concentrations of the greenhouse gas in the Earth’s atmosphere. Until then, the material can be used to scrub the air inside submarines and spacecraft, as well as certain kinds of batteries and fuel cells.
The material is the latest advance in an ongoing project at the USC Loker Hydrocarbon Research Institute that aims to recycle the harmful excess of carbon dioxide in the atmosphere into a renewable fuel source for humanity – an anthropogenic (caused by human activity) chemical carbon cycle. The institute is housed at the USC Dornsife College of Letters, Arts and Sciences.
The project seeks to solve two of the world’s greatest problems at once: the increase in atmospheric greenhouse gases and the dwindling supply of the fossil fuels whose burning causes that increase.
“Carbon dioxide is not a problem,” said George Olah, Distinguished Professor of Chemistry at USC Dornsife. “Nature recycles it. Mankind should too.”
Olah collaborated on the project with fellow corresponding authors G. K. Surya Prakash and Alain Goeppert, as well as Miklos Czaun, Robert B. May and S. R. Narayanan. The results were published in the Journal of the American Chemical Society in November.
Olah described his work on the anthropogenic carbon cycle as the most important work of his career – eclipsing even his work on carbocations in superacids that earned him a Nobel Prize in Chemistry in 1994.
The researchers’ new material is a fumed silica (the thickening agent in milkshakes) impregnated with polyethylenimine (a polymer), and it was found to absorb carbon dioxide well from both dry and humid air. Once the carbon dioxide is captured, the material can be made to release it simply by heating it up.
Though the work is ongoing, Olah and Prakash hope to find a low-cost, low-energy method of turning the captured carbon dioxide into methanol – which can be burned as a fuel source and used as a chemical feedstock.
“It is basically assuring a long-lasting renewable source of one of the essential elements of life on Earth,” Olah said.
The research was supported by the Loker Hydrocarbon Research Institute, the U.S. Department of Energy and the department’s Advanced Research Projects Agency-Energy.
Source: University of Southern California
Image: Pamela J. Johnson

NASA Mobile Launcher Reacted as Expected During Move to Launch Pad 39B

Mobile Launcher Tests Confirm Designs
The mobile launcher was returned to its park site beside the Vehicle Assembly Building in November 2011 following checkouts at Launch Pad 39B at NASA's Kennedy Space Center in Florida.
The results are in, and multiple sensors showed that NASA’s 355-foot-tall Mobile Launcher reacted as expected during its move to Launch Pad 39B in November 2011. Engineers were testing to see if the structure and crawler would be up to the challenge and reported that the actual results varied less than 5 percent from the predictions of the computer models used in designing the ML.
The 355-foot-tall Mobile Launcher, or ML, behaved as expected during its move to Launch Pad 39B in November 2011, an analysis of multiple sensors showed. The top of the tower swayed less than an inch each way.
“I don’t think you would have perceived it,” said NASA’s Chris Brown, the lead design engineer for the ML.
The tests showed that computer models used in designing the massive structure were correct. The actual results varied less than 5 percent from what was predicted.
“This gives us much higher confidence in the models,” Brown said. “We know that our approach is valid.”
The computer models for the launch support structures and the models for the Space Launch System rocket the ML will work with will be used together to fine tune both designs.
mobile launcher as it stood at Launch Pad 39B
The mobile launcher as it stood at Launch Pad 39B. Note the Vehicle Assembly Building in the background.
Engineers had the tower wired with dozens of accelerometers and strain gauges along with wind sensors to record the launcher’s movement during its slow ride atop a crawler-transporter from a park site beside the Vehicle Assembly Building to the launch pad.
The ML is expected to make the same trip numerous times during its career as the support structure for NASA’s Space Launch System, or SLS, a huge rocket envisioned to launch astronauts into deep space. The move and testing was planned to show designers whether the structure and crawler would be up to the challenge.
Crawler drivers performed several speed changes during the six-mile journeys to and from the pad. While at the pad, which is being refurbished after decades of hosting space shuttles, workers connected ventilation, fire suppression and alarm systems and other water lines.
The instruments used in the testing are very precise, accurate enough to record even the most subtle of vibrations and movements.
“We were measuring milli-g’s,” Brown said.
The readings will also be used to determine how fast the crawler will be allowed to go as it carries the rocket to the launch pad. For instance, there is substantial vibration at 0.8 mph, so engineers want drivers to stay away from that particular speed, but that does not necessarily mean the crawler will be ordered to slow down.
The ML, designed for the Ares I rocket of the cancelled Constellation program, is due for major modifications in the coming few years as it is strengthened to support the much-heavier SLS. It took two years to build and was completed in August 2010.
A structural design contract is expected to be awarded this year and a construction contractor in 2013. Umbilical arms reaching from the tower to the rocket are scheduled to be installed in 2015.
The ML is the biggest structure of its kind since the Launch Umbilical Towers were constructed to support the Apollo/Saturn V. Those towers saw numerous modifications through their lives as trial-and-error showed where changes were needed, Brown said.
“Our goal here is to have less of those kinds of problems,” Brown said.
Computer models were also used when NASA designed the Apollo towers, but those models were much simpler than today’s versions simply by virtue of the computing power available now, Brown said.
“We can run in five minutes what would have taken them days to run,” Brown said.
Source: Steven Siceloff, NASA’s Kennedy Space Center
Images: NASA/Kim Shiflett

Researchers Develop I2E for Analyzing Solar Cell Materials

The free online tool “Impurities to Efficiency” (known as I2E) tells how efficient the resulting solar cell would be.
A team of researchers believes it has developed a more efficient way to make a silicon solar cell. Using the free online tool “Impurities to Efficiency” (known as I2E), researchers can plug in descriptions of their alternative manufacturing strategies and get almost instant feedback on how efficient the resulting solar cell would be.
To make a silicon solar cell, you start with a slice of highly purified silicon crystal, and then process it through several stages involving gradual heating and cooling. But figuring out the tradeoffs involved in selecting the purity level of the starting silicon wafer — and then exactly how much to heat it, how fast, for how long, and so on through each of several steps — has largely been a matter of trial and error, guided by intuition and experience.
Now, MIT researchers think they have found a better way.
An online tool called “Impurities to Efficiency” (known as I2E) allows companies or researchers exploring alternative manufacturing strategies to plug in descriptions of their planned materials and processing steps. After about one minute of simulation, I2E gives an indication of exactly how efficient the resulting solar cell would be in converting sunlight to electricity.
One crucial factor in determining solar cell efficiency is the size and distribution of iron particles within the silicon: Even though the silicon used in solar cells has been purified to the 99.9999 percent level, the tiny remaining amount of iron forms obstacles that can block the flow of electrons. But it’s not just the overall amount that matters; it’s the exact distribution and size of the iron particles, something that is both hard to predict and hard to measure.
Graduate student David Fenning, part of the MIT team behind I2E, compares the effect of iron atoms on the flow of electrons in a solar cell to a group of protesters in a city: If they gather together in one intersection, they may block traffic at that point, but cars can still find ways around and there is little disruption. “But if there’s one person in the middle of every intersection, the whole city could shut down,” he says, even though it’s the same number of people.
A team led by assistant professor of mechanical engineering Tonio Buonassisi, including Fenning, fellow graduate student Douglas Powell and collaborators from the Solar Energy Institute at Spain’s Technical University of Madrid, found a way to use basic physics and a detailed computer simulation to predict exactly how iron atoms and particles will behave during the wafer-manufacturing process. They then used a highly specialized measurement tool — an X-ray beam from a synchrotron at Argonne National Laboratory — to confirm their simulations by revealing the actual distribution of the particles in the wafers.
“High-temperature processing redistributes the metals,” Buonassisi explains. Using that sophisticated equipment, the team took measurements of the distribution of iron in the wafer, both initially and again after processing, and compared that with the predictions from their computer simulation.
The I2E website, which is free of charge, has been online since July, and users have already carried out approximately 2,000 simulations. The details of how the system works and examples of industrial impact will be reported soon in a paper in the trade journal Photovoltaics International. The U.S. Department of Energy, which supported the research, has also reported on the new tool in an entry that will be posted on the agency’s blog.
Already, Powell says, I2E has been used by “research centers from around the world.”
By using the tool, a company called Varian Semiconductor Equipment Associates (recently acquired by Applied Materials), which makes equipment for producing solar cells, was able to fine-tune one of the furnaces they sell. The changes enabled the equipment to produce silicon wafers for solar cells five times faster than it originally did, even while slightly improving the overall efficiency of the resulting cells.
The company “started with a process that was fairly long,” Buonassisi says. They initially found a way to speed it up, but with too much of a sacrifice in performance. Ultimately, he says, using I2E, “we came up with a process that was about five times faster than the original, while performing just as well.”
Without the tool, there are simply too many possible variations to test, so people end up selecting the best from a small number of choices. But with I2E, Buonassisi says, “you can look for the global optimum” — that is, the best possible solution for a given set of requirements. “We can really speed up the innovation process,” he says.
Russell Low, a manager at Varian who was not involved in the work with MIT, says, “I would consider the work being carried out at MIT to be leading edge — combining computation physics with high-resolution experimentation. Given that silicon is still the major cost component of producing a solar cell, any technique that is capable of making use of [cheaper materials] … is a significant achievement.”
Fenning says that companies generally “can’t afford to do these large experiments” needed to figure out the best process for a given application. The physics of what goes on inside the wafer during the processing is complex, he says: “There are a number of competing mechanisms that cloud the picture of exactly what is going on,” which is why developing the simulation was a long and complex process.
Now that the simulation tool is available, Fenning says, it helps manufacturers balance product quality against production time. Because there are so many variations in the supplies of starting material, he says, “it’s a constantly evolving problem. That’s what makes it interesting.”
Source: David L. Chandler, MIT News Office
Image: Patrick Gillooly

Researchers Study Zeolite for Filtering Out Carbon Dioxide

roughly octagonal pores in zeolite SSZ-13 are like stop signs for carbon dioxide
The roughly octagonal pores in zeolite SSZ-13 are like stop signs for carbon dioxide, capturing molecules of the greenhouse gas while apparently letting other substances through. The material could prove to be an economical smokestack filter.
Researchers from the National Institute of Standards and Technology and the University of Delaware are working together to reduce greenhouse gas emissions. They believe a material called a zeolite, which has octagonal “windows” between its interior pore spaces, might filter carbon dioxide from factory smokestacks far more efficiently than current scrubbers.
Filtering carbon dioxide, a greenhouse gas, from factory smokestacks is a necessary, but expensive part of many manufacturing processes. However, a collaborative research team from the National Institute of Standards and Technology (NIST) and the University of Delaware has gathered new insight into the performance of a material called a zeolite that may stop carbon dioxide in its tracks far more efficiently than current scrubbers do.*
Zeolites are highly porous rocks—think of a sponge made of stone—and while they occur in nature, they can be manufactured as well. Their toughness, high surface area (a gram of zeolite can have hundreds of square meters of surface in its myriad internal chambers) and ability to be reused hundreds of times makes them ideal candidates for filtering gas mixtures. If an unwanted molecule in the gas mixture is found to stick to a zeolite, passing the mixture through it can scrub the gas of many impurities, so zeolites are widely used in industrial chemistry as catalysts and filters.
The team explored a zeolite created decades ago in an industrial lab and known by its technical name, SSZ-13. This zeolite, which has octagonal “windows” between its interior pore spaces, is special because it seems highly capable of filtering out carbon dioxide (CO2) from a gas mixture. “That makes SSZ-13 a promising candidate for scrubbing this greenhouse gas out of such things as factory smokestacks,” says Craig Brown, a researcher at the NIST Center for Neutron Research (NCNR). “So we explored, on an atomic level, how it does this so well.”
Using neutron diffraction, the team determined that SSZ-13’s eight-sided pore windows are particularly good at attracting the long, skinny carbon dioxide molecules and holding onto their “positively-charged” central carbon atoms, all the while allowing other molecules with different shapes and electronic properties to pass by unaffected. Like a stop sign, each pore halts one CO2 molecule—and each cubic centimeter of the zeolite has enough pores to stop 0.31 grams of CO2, a quantity that makes SSZ-13 highly competitive when compared to other adsorbent materials.
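As a unit check on that capacity figure (my arithmetic, not NIST’s), 0.31 grams per cubic centimeter corresponds to roughly 4 × 10^21 trapped molecules:

```python
# A quick unit check (my arithmetic, not NIST's) on 0.31 g of CO2 per cm^3.
AVOGADRO = 6.022e23        # molecules per mole
M_CO2 = 44.01              # grams per mole of CO2
mol_per_cm3 = 0.31 / M_CO2
print(f"{mol_per_cm3:.4f} mol, or about {mol_per_cm3 * AVOGADRO:.1e} "
      "CO2 molecules captured per cubic centimeter of SSZ-13")
```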
Brown says a zeolite like SSZ-13 probably will become a prime candidate for carbon scrubbing because it also could prove more economical than other scrubbers currently used in industry. SSZ-13’s ability to attract only CO2 could mean its use would reduce the energy demands of scrubbing, which can require up to 25 percent of the power generated in a coal or natural gas power plant.
“Many industrial zeolites attract water and carbon dioxide, which are both present in flue exhaust—meaning both molecules are, in a sense, competing for space inside the zeolite,” Brown explains. “We suspect that this novel CO2 adsorption mechanism means that water is no longer competing for the same site. A zeolite that adsorbs CO2 and little else could create significant cost savings, and that’s what this one appears to do.”
Brown says his team is still collecting data to confirm this theory, and that their future efforts will concentrate on exploring whether SSZ-13 is equally good at separating CO2 from methane—the primary component of natural gas. CO2 is also released in significant quantities during gas extraction, and the team is hopeful SSZ-13 can address this problem as well.
Source: National Institute of Standards and Technology
Image: National Institute of Standards and Technology
* M.R. Hudson, W.L. Queen, J.A. Mason, D.W. Fickel, R.F. Lobo and C.M. Brown. Unconventional, highly selective CO2 adsorption in zeolite SSZ-13. Journal of the American Chemical Society Published on the Web Jan. 10, 2012. DOI: 10.1021/ja210580b

DARPA’s LS3 to Ease Physical Load on Troops

DARPA prototype LS3 robotic pack mule
Prototype robotic “pack mule” stands up, lies down and follows leader carrying 400 lbs. of squad’s gear.
To help carry gear, DARPA is continuing research into developing a highly mobile, semi-autonomous legged robot to integrate with a squad of Marines or Soldiers. A recent prototype underwent a series of outdoor tests, and DARPA plans to continue refinements over the next 18 months.
Today’s dismounted warfighter can be saddled with more than 100 pounds of gear, resulting in physical strain, fatigue and degraded performance. Reducing the load on dismounted warfighters has become a major point of emphasis for defense research and development, because the increasing weight of individual equipment has a negative impact on warfighter readiness. The Army has identified physical overburden as one of its top five science and technology challenges. To help alleviate physical weight on troops, DARPA is developing a highly mobile, semi-autonomous legged robot, the Legged Squad Support System (LS3), to integrate with a squad of Marines or Soldiers.

Recently the LS3 prototype underwent its first outdoor exercise, demonstrating the ability to follow a person using its “eyes”—sensors that allow the robot to distinguish between trees, rocks, terrain obstacles and people. Over the course of the next 18 months, DARPA plans to complete development of and refine key capabilities to ensure LS3 is able to support dismounted squads of warfighters.
Features to be tested and validated include the ability to carry 400 lbs on a 20-mile trek in 24 hours without being refueled, and refinement of LS3’s vision sensors to track a specific individual or object, observe obstacles in its path and autonomously make course corrections as needed. Also planned is the addition of “hearing” technology, enabling squad members to speak commands to LS3 such as “stop,” “sit” or “come here.” The robot also serves as a mobile auxiliary power source—troops may recharge batteries for radios and handheld devices while on patrol.
DARPA seeks to demonstrate that an LS3 can carry a considerable load from dismounted squad members, follow them through rugged terrain and interact with them in a natural way, similar to the way a trained animal and its handler interact.
“If successful, this could provide real value to a squad while addressing the military’s concern for unburdening troops,” said Army Lt. Col. Joe Hitt, DARPA program manager. “LS3 seeks to have the responsiveness of a trained animal and the carrying capacity of a mule.”
The 18-month platform-refinement test cycle, with Marine and Army involvement, kicks off this summer. The tests culminate in a planned capstone exercise where LS3 will embed with Marines conducting field exercises.
LS3 is based on mobility technology advanced by DARPA’s BigDog technology demonstrator, as well as other DARPA robotics programs that developed the perception technology for LS3’s “eyes” and planned “ears.”
The DARPA LS3 performer is Boston Dynamics of Waltham, Mass.
Source: DARPA
Image: DARPA

NASA Searching for Green Propellant Technology

Green Propellant Technology Demonstrations
NASA is looking for a greener fuel to replace highly toxic hydrazine. The agency is working with American companies to develop new technology and is seeking a high-performance green propellant that will benefit the American space industry while decreasing environmental hazards and pollutants.
WASHINGTON — NASA is seeking technology demonstration proposals for green propellant alternatives to the highly toxic fuel hydrazine. As NASA works with American companies to open a new era of access to space, the agency seeks innovative and transformative fuels that are less harmful to our environment.
Hydrazine is an efficient and ubiquitous propellant that can be stored for long periods of time, but is also highly corrosive and toxic. It is used extensively on commercial and defense department satellites as well as for NASA science and exploration missions. NASA is looking for an alternative that decreases environmental hazards and pollutants, has fewer operational hazards and shortens rocket launch processing times.
“High performance green propulsion has the potential to significantly change how we travel in space,” said Michael Gazarik, director of NASA’s Space Technology Program at the agency’s headquarters in Washington. “NASA’s Space Technology Program seeks out these sort of cross-cutting, innovative technologies to enable our future missions while also providing benefit to the American space industry. By reducing the hazards of handling fuel, we can reduce ground processing time and lower costs for rocket launches, allowing a greater community of researchers and technologists access to the high frontier.”
Beyond decreasing environmental hazards and pollutants, promising aspects of green propellants also include reduced systems complexity, fewer operational hazards, decreased launch processing times and increased propellant performance.
Maturing a space technology, such as green propellants, to mission readiness through relevant environment testing and demonstration is a significant challenge from a cost, schedule and risk perspective. NASA has established the Technology Demonstration Missions Program to perform this function, bridging the gap between laboratory confirmation of a technology and its initial use on an operational mission.
NASA anticipates making one or more awards in response to this solicitation, with no single award exceeding $50 million. Final awards will be made based on the strength of proposals and availability of funds. The deadline for submitting proposals is April 30.
Source: David E. Steitz, NASA; Kimberly Newton, Marshall Space Flight Center
Image: NASA

Lithium Iron Phosphate Batteries Could Lead to Cheaper, More Efficient Solar Energy

LiFePO4 batteries could lead to cheaper, more efficient solar energy
Researchers at the University of Southampton and REAPsystems have found that using lithium iron phosphate batteries as the storage device for photovoltaic systems has the potential to greatly improve the efficiency and reduce the cost of solar power.
A joint research project between the University of Southampton and lithium battery technology company REAPsystems has found that a new type of battery has the potential to improve the efficiency and reduce the cost of solar power.
The research project, sponsored by REAPsystems, was led by MSc Sustainable Energy Technologies student Yue Wu and his supervisors Dr Carlos Ponce de Leon, Professor Tom Markvart and Dr John Low (currently working at the University’s Research Institute for Industry, RIfI). The study looked specifically into the use of lithium batteries as an energy storage device in photovoltaic systems.
Student Yue Wu says, “Lead acid batteries are traditionally the energy storage device used for most photovoltaic systems. However, as an energy storage device, lithium batteries, especially the LiFePO4 batteries we used, have more favorable characteristics.”
Data was collected by connecting a lithium iron phosphate battery to a photovoltaic system attached to one of the University’s buildings, using a specifically designed battery management system supplied by REAPsystems.
Yue adds, “The research showed that the lithium battery has an energy efficiency of 95 per cent, whereas the lead-acid batteries commonly used today only have around 80 per cent. The weight of the lithium batteries is lower and they have a longer life span than the lead-acid batteries, reaching up to 1,600 charge/discharge cycles, meaning they would need to be replaced less frequently.”
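Those efficiency figures translate directly into usable energy. A toy comparison, where the 10 kWh daily solar harvest is an assumed figure, not from the study:

```python
# A toy comparison using the quoted round-trip efficiencies; the 10 kWh daily
# harvest routed through storage is an assumed figure, not from the study.
daily_pv_kwh = 10.0
for name, efficiency in (("LiFePO4", 0.95), ("lead-acid", 0.80)):
    print(f"{name}: {daily_pv_kwh * efficiency:.1f} kWh recoverable from storage")
```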
Although the battery will require further testing before being put into commercial photovoltaic systems, the research has shown that the LiFePO4 battery has the potential to improve the efficiency of solar power systems and help to reduce the costs of both their installation and upkeep. Dr Carlos Ponce de Leon and Dr John Low now plan to take this project further with a new cohort of Masters students.
Dr Dennis Doerffel, founder of REAPsystems and former researcher at the University of Southampton, says: “For all kinds of energy sources (renewable or non-renewable), the energy storage device – such as a battery – plays an important role in determining the energy utilization. Compared with traditional lead acid batteries, LiFePO4 batteries are more efficient, have a longer lifetime, are lighter and cost less per unit. We can see the potential of this battery being used widely in photovoltaic applications and other renewable energy systems.”
Source: University of Southampton

Coding Scheme Guarantees Fastest Possible Delivery of Data

error-correcting codes guarantee the fastest possible rate of data transmission
In the upcoming issue of the journal IEEE Transactions on Information Theory, researchers from MIT, Tel Aviv University and Google describe a new coding scheme that guarantees the fastest possible delivery of data over fluctuating wireless connections, and they prove that guarantee mathematically. Their system sends the codeword in sections without repeating transmissions and stops once the receiver has collected enough symbols to decode the underlying message.
Error-correcting codes are one of the triumphs of the digital age. They’re a way of encoding information so that it can be transmitted across a communication channel — such as an optical fiber or a wireless connection — with perfect fidelity, even in the presence of the corrupting influences known as “noise.”
An encoded message is called a codeword; the noisier the channel, the longer the codeword has to be to ensure perfect communication. But the longer the codeword, the longer it takes to transmit the message. So the ideal of maximally efficient, perfectly faithful communication requires precisely matching codeword length to the level of noise in the channel.
Wireless devices, such as cellphones or Wi-Fi transmitters, regularly send out test messages to gauge noise levels, so they can adjust their codes accordingly. But as anyone who’s used a cellphone knows, reception quality can vary at locations just a few feet apart — or even at a single location. Noise measurements can rapidly become outdated, and wireless devices routinely end up using codewords that are too long, squandering bandwidth, or too short, making accurate decoding impossible.
In the next issue of the journal IEEE Transactions on Information Theory, Gregory Wornell, a professor in the Department of Electrical Engineering and Computer Science at MIT, Uri Erez at Tel Aviv University in Israel and Mitchell Trott at Google describe a new coding scheme that guarantees the fastest possible delivery of data over fluctuating wireless connections without requiring prior knowledge of noise levels. The researchers also received a U.S. patent for the technique in September.
Say ‘when’
The scheme works by creating one long codeword for each message, but successively longer chunks of the codeword are themselves good codewords. “The transmission strategy is that we send the first part of the codeword,” Wornell explains. “If it doesn’t succeed, we send the second part, and so on. We don’t repeat transmissions: We always send the next part rather than resending the same part again. Because when you marry the first part, which was too noisy to decode, with the second and any subsequent parts, they together constitute a new, good encoding of the message for a higher level of noise.”
Say, for instance, that the long codeword — call it the master codeword — consists of 30,000 symbols. The first 10,000 symbols might be the ideal encoding if there’s a minimum level of noise in the channel. But if there’s more noise, the receiver might need the next 5,000 symbols as well, or the next 7,374. If there’s a lot of noise, the receiver might require almost all of the 30,000 symbols. But once it has received enough symbols to decode the underlying message, it signals the sender to stop. In the paper, the researchers prove mathematically that at that point, the length of the received codeword is the shortest possible length given the channel’s noise properties — even if they’ve been fluctuating.
To produce their master codeword, the researchers first split the message to be sent into several — for example, three — fragments of equal length. They encode each of those fragments using existing error-correcting codes, such as Gallager codes, a very efficient class of codes common in wireless communication. Then they multiply each of the resulting codewords by a different number and add the results together. That produces the first chunk of the master codeword. Then they multiply the codewords by a different set of numbers and add those results, producing the second chunk of the master codeword, and so on.
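As a rough illustration of that construction (not the authors’ actual code): split the message into fragments, encode each one, then emit successive chunks that are different linear combinations of the same codewords. The sketch below works over a small prime field, uses Vandermonde-style coefficient rows, and substitutes a placeholder identity “encoder” where a real system would use something like a Gallager (LDPC) code; all names and values are invented for the example.

```python
import numpy as np

P = 251  # small prime field for illustration; real systems use proper ECC fields

def encode(fragment):
    # Placeholder for a real error-correcting encoder (e.g., an LDPC code).
    return np.array(fragment) % P

def master_chunks(fragments, coeff_rows):
    """Each chunk is a different linear combination of the fragment codewords,
    so successive chunks together form a progressively stronger encoding."""
    codewords = [encode(f) for f in fragments]
    chunks = []
    for row in coeff_rows:
        chunk = sum(a * c for a, c in zip(row, codewords)) % P
        chunks.append(chunk)
    return chunks

# Three equal-length fragments and three sets of multipliers -> three chunks.
fragments = [[7, 1, 4], [2, 9, 3], [5, 5, 8]]
coeffs = [[1, 1, 1], [1, 2, 4], [1, 3, 9]]  # Vandermonde-style rows
for i, chunk in enumerate(master_chunks(fragments, coeffs), 1):
    print(f"chunk {i}: {chunk}")
```

In this picture, a receiver that has gathered enough clean symbols can solve the corresponding linear system to recover all the fragments; if the first chunk alone decodes, nothing more needs to be sent.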
Tailor-made
In order to decode a message, the receiver needs to know the numbers by which the codewords were multiplied. Those numbers — along with the number of fragments into which the initial message is divided and the size of the chunks of the master codeword — depend on the expected variability of the communications channel. Wornell surmises, however, that a few standard configurations will suffice for most wireless applications.
The only chunk of the master codeword that must be transmitted in its entirety is the first. Thereafter, the receiver could complete the decoding with only partial chunks. So the size of the initial chunk is calibrated to the highest possible channel quality that can be expected for a particular application.
Finally, the complexity of the decoding process depends on the number of fragments into which the initial message is divided. If that number is three, which Wornell considers a good bet for most wireless links, the decoder has to decode three messages instead of one for every chunk it receives, so it will perform three times as many computations as it would with a conventional code. “In the world of digital communication, however,” Wornell says, “a fixed factor of three is not a big deal, given Moore’s Law on the growth of computation power.”
H. Vincent Poor, the Michael Henry Strater University Professor of Electrical Engineering and dean of the School of Engineering and Applied Science at Princeton University, sees few obstacles to the commercial deployment of a coding scheme such as the one developed by Wornell and his colleagues. “The codes are inherently practical,” Poor says. “In fact, the paper not only develops the theory and analysis of such codes but also provides specific examples of practical constructions.”
Because the codes “enable efficient communication over unpredictable channels,” he adds, “they have an important role to play in future wireless-communication applications and standards for connecting mobile devices.”
Source: Larry Hardesty, MIT News Office
Image: Christine Daniloff

‘SUPERSTAR’ Reactor has Important Safety Features

Sustainable Proliferation-resistance Enhanced Refined Secure Transportable Autonomous Reactor
Researchers at the Argonne National Laboratory designed a new small reactor cooled by lead that has several key safety features. The Sustainable Proliferation-resistance Enhanced Refined Secure Transportable Autonomous Reactor, or SUPERSTAR, has rods that will automatically drop into the core without electricity and it uses natural circulation, a process that exploits a law of physics to move the coolant instead of relying on electricity.
Though most of today’s nuclear reactors are cooled by water, we’ve long known that there are alternatives; in fact, the world’s first nuclear-powered electricity in 1951 came from a reactor cooled by sodium. Reactors cooled by liquid metals such as sodium or lead have a unique set of abilities that may again make them significant players in the nuclear industry.
At the U.S. Department of Energy’s (DOE) Argonne National Laboratory, a team led by senior nuclear engineer James Sienicki has designed a new small reactor cooled by lead—the Sustainable Proliferation-resistance Enhanced Refined Secure Transportable Autonomous Reactor, or SUPERSTAR for short.
Small modular reactors, or SMRs, are small-scale nuclear plants that are designed to be factory-manufactured and shipped as modules to be assembled at a site. They can be designed to operate without refueling for 15 to 30 years. The concept offers promising answers to many questions about nuclear power—including proliferation, waste, safety and start-up costs.
SUPERSTAR is an example of a so-called “fast reactor,” a type fundamentally different from the light-water reactors common today. Light-water reactors use water both as a coolant and as a moderator to slow down neutrons created in the fuel as it fissions. Fast reactors, by contrast, use coolants that do not slow down neutrons—often a liquid metal such as sodium or lead.
Like all new generations of reactors, SUPERSTAR has “passive” safety systems—backup safety measures that kick in automatically, without human intervention, in case of accidents. For example, all reactors have control rods incorporating substances that absorb neutrons and stop nuclear chain reactions. SUPERSTAR’s rods are suspended above the reactor core, held in place by electricity. If the plant loses power, the control rods automatically drop into the core and stop the reaction.
In addition, SUPERSTAR’s lead coolant is circulated around the core by a process called natural circulation. While existing plants use electrically-driven pumps to keep the water moving, SUPERSTAR exploits a law of physics to move the coolant.
“In any closed loop, with heat at the bottom and cooling on top, a flow will develop, with the heated stream rising to the top and cooled stream going down,” explained Anton Moisseytsev, an Argonne nuclear engineer also working on the reactor design. “The SUPERSTAR design takes advantage of this feature—its lead coolant is circulated solely by natural circulation, with no pumps needed. And of course, having no pumps means no pump failures.”
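As a rough illustration of the physics (not Argonne’s actual thermal-hydraulic design), the steady natural-circulation flow rate can be estimated by balancing the buoyancy head of the heated leg against friction losses around the loop. Every input below is an assumed placeholder value:

```python
from math import sqrt

# Back-of-the-envelope natural-circulation estimate for a lead-cooled loop.
# All numbers are illustrative placeholders, not SUPERSTAR design values.
rho  = 10500.0  # lead density, kg/m^3
beta = 1.2e-4   # lead volumetric thermal expansion coefficient, 1/K
cp   = 145.0    # lead specific heat, J/(kg*K)
g    = 9.81     # gravitational acceleration, m/s^2
dT   = 100.0    # core outlet minus inlet temperature, K
H    = 5.0      # height between core (heat source) and heat sink, m
A    = 1.0      # loop flow area, m^2
K    = 5.0      # lumped loop friction/form-loss coefficient

# Buoyancy head rho*g*beta*dT*H balances friction K*mdot^2/(2*rho*A^2),
# so the mass flow rate settles at:
mdot = rho * A * sqrt(2.0 * g * beta * dT * H / K)  # kg/s
power = mdot * cp * dT                              # heat removed, W

print(f"flow ~ {mdot:,.0f} kg/s, heat removal ~ {power/1e6:.0f} MW")
```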
This means that if the plant loses power, as happened at the Fukushima Daiichi plant in Japan, the reactor does not need electricity to cool the core after shutdown.
Although the SMR concept has been around for decades, the idea has gained greater traction in recent years. Both President Obama and U.S. Department of Energy Secretary Steven Chu have extolled the virtues of SMRs; Secretary Chu said their development could give American manufacturers a “key competitive edge.”
For example, the smaller size of SMRs gives them greater flexibility. “A small grid in a developing nation or a rural area may not need the 1,000 megawatts that a full-size reactor produces,” Sienicki said. “In addition, SUPERSTAR can adjust its own power output according to demand from the grid.”
Sienicki and his colleagues designed the reactor so that it could be shipped, disassembled, on a train. SMRs have been pinpointed for use in developing nations or outlying areas; these plants could be dropped off at a site and easily installed.
Because the plant runs for decades on a single installment of fuel—and operators need never directly interact with the fuel, which is sealed in the core—SMRs also address proliferation concerns. Reducing access to the fuel lowers the risks associated with making and handling it, including the spread of uranium enrichment technology.
Finally, SMRs could also offer cost benefits. After major cost overruns on plants in the 1980s, investors have been wary of financing new nuclear plants. Small modular reactors reduce the risk of investing in new plants; start-up costs would be lower than those for full-size reactors. In addition, the parts for the reactors could be manufactured in assembly lines at factories, further diminishing the cost.
Several European countries have shown interest in lead-cooled reactors, Sienicki said. Studies such as Cinotti et al. (“The ELSY Project,” International Conference on the Physics of Reactors, 2008) suggest that they may be cheaper to build than sodium-cooled reactors.
A paper, “An Improved Natural Circulation, Lead-Cooled, Small Modular Fast Reactor for International Deployment,” was presented at the 2011 International Congress on Advances in Nuclear Power Plants. Argonne nuclear engineers Jim Sienicki and Anton Moisseytsev co-authored the paper, along with Argonne’s Gerardo Aliberti, Sara Bortot at the Politecnico di Milano, Italy and Qiyue Lu at the University of Illinois at Urbana-Champaign.
Source: Louise Lerner, Argonne National Laboratory
Image: Argonne National Laboratory

NASA Plans Space Outpost on the Far Side of the Moon at Earth Moon Lagrange Point 2

NASA is studying plans for a space outpost parked near a Lagrange point, where the gravitational tugs of the Earth and moon balance in a way that lets a spacecraft hold station with minimal fuel, making it much easier to stage manned missions deeper into space.
The Lagrange points, also called libration points or L-points, would allow the outpost to remain in an essentially fixed position relative to the Earth and moon. There are five such points in the Earth-moon system. For this outpost, NASA is looking at Earth-moon Lagrange point 2 (EML-2) as the most promising site; EML-2 sits beyond the far side of the moon.
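The approximate location of EML-2 follows from a standard back-of-the-envelope formula: for a small mass ratio, the L1 and L2 points sit roughly one Hill radius from the moon, r ~ R * (m / 3M)^(1/3). A quick check in Python, using textbook values for the masses and the mean Earth-moon distance:

```python
# Estimate how far beyond the moon EML-2 sits, via the Hill-sphere
# approximation r ~ R * (m_moon / (3 * m_earth))**(1/3).
R       = 384_400e3  # mean Earth-moon distance, m
m_earth = 5.972e24   # kg
m_moon  = 7.342e22   # kg

r = R * (m_moon / (3.0 * m_earth)) ** (1.0 / 3.0)
print(f"EML-2 lies roughly {r/1e3:,.0f} km beyond the moon")  # ~61,500 km
```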
The outpost could serve as a staging ground for operations on and around the moon, and also as a launch point toward Mars, the Kuiper Belt and beyond. The project would need international support, and the agency is currently assessing the feasibility of an EML-2 outpost; its report is due by March 30, 2012.
An EML-2 waypoint could also enable a significant telerobotic presence on the far side of the moon, and could serve as a platform for solar and Earth-based scientific observation, radio astronomy, and other scientific pursuits in the quiet zone behind the moon. If established, the EML-2 waypoint would represent the farthest humans have traveled from Earth to date. Extended stays at EML-2 could also drive advances in life sciences and radiation shielding for long-duration missions outside the Van Allen radiation belts.

Researchers Focus on Using High-Energy Electrons to Treat Cancer

February 10, 2012
High-Energy Electrons to Treat Cancer
Image of more targeted radiation (in red) via high-energy electrons directed into a "phantom" (model) lung.
Researchers are working on new technology that could dramatically reduce the time needed for cancer radiation treatments. By using high-energy electrons for radiation therapy, the researchers believe the average treatment time of 15 to 60 minutes could be cut to less than a second for most tumors.
Accelerator physicists at SLAC and cancer specialists from Stanford are working on a new technology that could dramatically reduce the time needed for cancer radiation treatments. The team ran an initial experiment using high-energy electrons in January and has asked the National Institutes of Health for $1.25 million to finance further studies.
As Eric Colby, SLAC’s director of accelerator research, put it, “While the result is extremely preliminary – people won’t be lining up for treatment anytime soon – it is a great example of the sort of innovative, high-social-value research that SLAC and Stanford are capable of, and is the start of what we hope will be a growing collaboration.”
It all started when Dr. Billy W. Loo, a radiation oncologist at Stanford University School of Medicine, read a paper in Physics in Medicine and Biology that proposed using high-energy electrons, rather than X-rays, for radiation therapy. Loo realized that with the right delivery system, this could reduce the typical length of a radiation treatment from the current average of 15 to 60 minutes down to less than a second for a wide range of tumors.
He discussed the idea with his close collaborator, medical physicist Peter Maxim, and then asked Magdalena Bazalova, a radiation oncology instructor on his staff, to run computer simulations to see if the method could work.
Bazalova had become an expert in a technique called Monte Carlo simulation while working in high-energy physics at CERN, the European center for particle physics. Named after the gambling mecca in Monaco, Monte Carlo simulations use large numbers of repeated random samples as the starting point for predicting outcomes. In her initial report to Loo, Bazalova said the simulations confirmed that high-energy electrons should provide more targeted treatment than photons of X-ray light.
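For readers unfamiliar with the technique, here is the idea at its smallest (a generic illustration, not Bazalova’s dose-transport code): estimate pi by scattering random points over a square and tallying how many land inside the inscribed quarter circle.

```python
import random

random.seed(42)

def estimate_pi(n_samples: int) -> float:
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square falling inside the inscribed quarter circle tends to
    pi/4 as the number of samples grows."""
    hits = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_samples

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} samples -> pi ~ {estimate_pi(n):.4f}")
```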
His email response came back in less than 10 minutes: “Holy smokes! We definitely need to talk more about this.”
Loo and Maxim knew SLAC had a beam line – the Next Linear Collider Test Accelerator, NLCTA – staffed by researchers with experience in supporting small accelerator physics experiments. What’s more, when they approached SLAC, they learned that Sami Tantawi, associate professor of particle physics and astrophysics at the lab, had independently come up with the idea of building a machine to treat cancer with high-energy electrons. His ultimate goal was to shrink it to table-top size, small enough for use in a patient-care suite.
The Stanford and SLAC groups quickly teamed up and decided to apply for an NIH grant to finance a research program. The application deadline loomed just days away, and they would first need to demonstrate that they were capable of carrying out experiments. Would that allow enough time to design an experiment, reconfigure the NLCTA accelerator, source the necessary materials, run the experiment and analyze the initial data?
And of course, there’s the matter of getting time on the NLCTA beam, which can be very competitive.
“We were asked to help with an urgent experiment. When the boss asks a question like that, the answer is ‘yes,’” said SLAC engineering physicist Keith Jobe, who would pull the team together and manage the logistics. “And furthermore, it’s exciting.”
The team would need to simulate a patient-treatment suite for radiation therapy. Jobe and his colleagues from the Test Facilities Department removed a section of the beam line at NLCTA and installed special windows.
They ran the experiment using “phantoms” that mimic human tissue – in this case, specially designed layers of polystyrene with sheets of radiation-sensitive film between them.
They hit the plastic layers with the high-energy electron beam, producing dark areas on the film. The results – the locations, sizes and darkness of the exposed spots – were consistent with what the simulation predicted, offering a proof of concept that high-energy electrons could one day be successful in targeting cancer cells.
It was a collaboration of parallel interests, said Jobe, and the team moved quickly: From the date of Loo’s initial request to reviewing the data, less than two weeks elapsed.
“Everything clicked in very fast. All the right people appeared at the right moment. Everything fell into place,” Tantawi said. “The investment (the country is making) in high-energy physics is paying off in other fields that are unexpected.”
What’s next? Bazalova says there will be more Monte Carlo simulations, more testing using phantoms, and later, in vitro testing with various cell lines.
Within the radiation physics community, there is a great desire to conduct similar experiments, Jobe said. “It was a spectacular team effort.”
Source: Diane Rezendes Khirallah, SLAC National Accelerator Laboratory
Image: Stanford School of Medicine, Department of Radiation Oncology

Laser Relativity Satellite to Measure Frame-Dragging Effect of General Relativity

The Laser Relativity Satellite (LARES) will allow Italian astrophysicists to measure the frame-dragging effect of general relativity to an accuracy of about 1%. Laser ranging stations on Earth will fire lasers at LARES to calculate its distance and orbit with a high degree of precision.
This precision is what’s necessary to determine frame-dragging, the Lense-Thirring effect, which is what general relativity predicts will happen when an object with a gravity well like Earth spins. Relativity states that massive objects can have some measurable effects on spacetime, even more so when they spin since they will drag spacetime around into the same direction of the spin.
The local twisting of spacetime can have some strange effects, such as light on one side of the spin moving faster than on the other side. Such orbital precession is hard to detect, since the frame-dragging only amounts to about one part in a few trillion. Over time, it’s hoped that LARES’s orbit will shift eastward by a couple of feet per year, which could help determine the intensity of Earth’s frame-dragging.
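For a sense of the numbers: the Lense-Thirring precession of a satellite orbit’s node is given by the standard formula omega = 2GJ / (c^2 * a^3 * (1 - e^2)^(3/2)). The sketch below uses approximate published figures for LARES’s orbit and a textbook estimate of Earth’s spin angular momentum J; treat all inputs as ballpark values.

```python
from math import pi

G = 6.674e-11  # gravitational constant, m^3/(kg*s^2)
c = 2.998e8    # speed of light, m/s
J = 5.86e33    # Earth's spin angular momentum, kg*m^2/s (approximate)
a = 7.82e6     # LARES semi-major axis, m (~1450 km altitude)
e = 0.0        # the LARES orbit is very nearly circular

# Lense-Thirring precession rate of the orbit's ascending node, rad/s
omega_lt = 2.0 * G * J / (c**2 * a**3 * (1.0 - e**2) ** 1.5)

seconds_per_year = 365.25 * 24 * 3600.0
mas_per_rad = (180.0 / pi) * 3600.0 * 1000.0  # milliarcseconds per radian
print(f"nodal drag ~ {omega_lt * seconds_per_year * mas_per_rad:.0f} mas/yr")
# prints roughly 120 mas/yr -- the tiny eastward drift LARES is built to see
```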
There’s some skepticism that this will be possible, but it’s hoped that by 2016, there will be some experimental proof of the frame-dragging.

Saturday 11 February 2012

Electrical Engineers Build 'No-Waste' Laser

 A team of University of California, San Diego researchers has built the smallest room-temperature nanolaser to date, as well as an even more startling device: a highly efficient, "thresholdless" laser that funnels all its photons into lasing, without any waste.
The two new lasers require very low power to operate, an important breakthrough since lasers usually require greater and greater "pump power" to begin lasing as they shrink to nano sizes. The small size and extremely low power of these nanolasers could make them very useful components for future optical circuits packed on to tiny computer chips, Mercedeh Khajavikhan and her UC San Diego Jacobs School of Engineering colleagues report in the Feb. 9 issue of the journal Nature.
They suggest that the thresholdless laser may also help researchers as they develop new metamaterials, artificially structured materials that are already being studied for applications from super-lenses that can be used to see individual viruses or DNA molecules to "cloaking" devices that bend light around an object to make it appear invisible.
All lasers require a certain amount of "pump power" from an outside source to begin emitting a coherent beam of light or "lasing," explained Yeshaiahu (Shaya) Fainman, a professor in the Department of Electrical and Computer Engineering at UC San Diego and co-author of the new study. A laser's threshold is the point where this coherent output is greater than any spontaneous emission produced.
The smaller a laser is, the greater the pump power needed to reach the point of lasing. To overcome this problem, the UC San Diego researchers developed a design for the new lasers that uses quantum electrodynamic effects in coaxial nanocavities to alleviate the threshold constraint. Like a coaxial cable hooked up to a television (only at a much smaller scale), the laser cavity consists of a metal rod enclosed by a ring of metal-coated semiconductor quantum wells. Khajavikhan and the rest of the team built the thresholdless laser by modifying the geometry of this cavity.
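What “thresholdless” means can be seen in the textbook rate-equation picture, where beta is the fraction of spontaneous emission captured by the lasing mode: as beta approaches 1, the kink in the light-output-versus-pump curve vanishes. The sketch below is that standard toy model, not the authors’ device physics:

```python
import numpy as np

# Steady-state single-mode laser rate equations with spontaneous-emission
# factor beta (fraction of spontaneous emission feeding the lasing mode):
#   carriers: P = (N / tau_sp) * (1 + beta * n)
#   photons:  n / tau_p = (beta * N / tau_sp) * (1 + n)
# Eliminating the carrier number N gives the pump P needed to sustain a
# photon number n.  For beta = 1 the relation is exactly linear
# (P = n / tau_p): no threshold kink.  For small beta, the output jumps
# by orders of magnitude over a narrow pump range -- the classic threshold.
tau_p = 1.0  # photon lifetime, arbitrary units

n = np.logspace(-2, 2, 5)  # intracavity photon numbers 0.01 ... 100
for beta in (1e-3, 1e-1, 1.0):
    P = n * (1.0 + beta * n) / (beta * tau_p * (1.0 + n))
    print(f"beta = {beta:5g}:  pump P =", ", ".join(f"{p:.3g}" for p in P))
```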
The new design also allowed them to build the smallest room-temperature, continuous wave laser to date. The new room-temperature nanoscale coaxial laser is more than an order of magnitude smaller than their previous record smallest nanolaser published in Nature Photonics less than two years ago. The whole device is almost half a micron in diameter -- by comparison, the period at the end of this sentence is nearly 600 microns wide.
These highly efficient lasers would be useful in augmenting future computing chips with optical communications, where the lasers are used to establish communication links between distant points on the chip. Only a small amount of pump power would be required to reach lasing, reducing the number of photons needed to transmit information, said Fainman.
The nanolaser designs appear to be scalable -- meaning that they could be shrunk to even smaller sizes -- an extremely important feature that makes it possible to harvest laser light from even smaller nanoscale structures, the researchers note. This feature eventually could make them useful for creating and analyzing metamaterials with structures smaller than the wavelength of light currently emitted by the lasers.
Fainman said other applications for the new lasers could include tiny biochemical sensors or high-resolution displays, but the researchers are still working out the theory behind how these tiny lasers operate. They would also like to find a way to pump the lasers electrically instead of optically.
Co-authors for the Nature study, "Thresholdless Nanoscale Coaxial Lasers," include Mercedeh Khajavikhan, Aleksandar Simic, Michael Kats, Jin Hyoung Lee, Boris Slutsky, Amit Mizrahi, Vitaliy Lomakin, and Yeshaiahu Fainman in the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering. The nanolasers are fabricated at the university's NANO3 facility. The research was funded by the Defense Advanced Research Projects Agency, the National Science Foundation, the NSF Center for Integrated Access Networks (CIAN), the Cymer Corporation and the U.S. Army Research Office.

Thermaltake's latest laptop cooler gets 200mm LED fan and angle adjustment

While there are many options to cool a gaming desktop PC, mobile gamers might find it a little more difficult to add extra fans to chill their gaming laptop monsters. Thermaltake has unveiled plans to expand its laptop cooling stand lineup by releasing the Massive 23 GT Cooler - a lightweight metal stand that packs an LED-illuminated 200 mm fan and, unlike previous offerings, five different angle settings.
The new cooler supports laptops up to 17 inches, and according to the Taiwanese manufacturer it also makes a suitable (and loud) stand for tablet PCs.
The center section of the Thermaltake Massive 23 GT Cooler is made of metal mesh with an anti-slip rubber coating, while the sides and base are plastic. Its flip-up design with a metal bracket allows for five different angle settings, and it comes with two USB ports and a single mini-USB port built in.
The speed of the 200 mm red LED fan is adjustable via a control knob, offering speeds from 500 to 800 RPM with a maximum noise level of 24 dBA.
It's also light enough to throw in a bag at 903 grams (1.99 lb), and its dimensions are 352 x 293.1 x 41.4 mm (13.8 x 11.5 x 1.62 in).
Pricing and a release date haven't been revealed by Thermaltake at this point.

Experimental optical fibers utilize built-in electronics instead of separate chips

When data is transmitted as pulses of light along a fiber optic cable, chips at either end of that cable must convert the data from and back into an electronic signal - this is what allows an outgoing video image to be converted into light pulses, then back into video at the receiving end, for instance. There are a number of technical challenges in coupling chips to fibers, however. Now, an international team of scientists is developing an alternative ... fiber optics with the electronics built right into the fiber.
The main challenge regarding chips and optical fibers is a mechanical one - it's just plain difficult getting a round fiber to securely connect to a flat chip. It can also be quite a task making sure that all of the data gets from one to the other. An optical fiber is one-tenth the width of a human hair, while the light-guiding pathways on chips are even smaller, so getting everything lined up is a very fiddly business.
For the research project, the team deposited semiconducting materials within tiny holes at either end of optical fibers, to create high-speed electronic junction points - these would ordinarily be located where the fiber meets the chip. The scientists used high-pressure chemistry techniques to deposit the materials directly, layer by layer. Not only does this eliminate the need for an entire chip in the finished product, but the process can also be carried out with simple inexpensive equipment, as opposed to the clean-room facilities required for chip manufacturing.
"If the signal never leaves the fiber, then it is a faster, cheaper, and more efficient technology," said team co-leader Pier J. A. Sazio, of the University of Southampton. "Moving technology off the chip and directly onto the fiber, which is the more-natural place for light, opens up the potential for embedded semiconductors to carry optoelectronic applications to the next level. At present, you still have electrical switching at both ends of the optical fiber. If we can actually generate signals inside a fiber, a whole range of optoelectronic applications becomes possible."
Some of these applications could include improved telecommunications, laser technology, and remote-sensing devices. It would be interesting to see if the new fiber could be incorporated into the hybrid cable being developed by Sandia National Laboratories, which is capable of transmitting both data and power.
The electronic fiber project was initiated and is being led by Pennsylvania State University, and was funded by the U.S. National Science Foundation and the Engineering and Physical Sciences Research Council of the United Kingdom.
Source: Penn State