Thursday, 31 March 2016

Japan activates underground ice wall to seal away Fukushima's nuclear waste

Among the many problems plaguing the cleanup at Fukushima is the threat of radioactive water spilling from the site. The Japanese government is now ramping up its efforts to contain this problem, by flicking the switch on an underground ice wall that will enclose the failed nuclear facility to slow the spread of contaminated material.
An ice wall might sound like something out of science fiction, but it is actually an engineering technique that has been used for tunnel boring and mining, albeit on a smaller scale. Refrigerated brine cooled to -30 degrees Celsius (-22° F) will be pumped through pipes plunging 30 m (98.5 ft) into the ground, freezing the soil and eventually sealing the four reactors damaged in the 2011 earthquake and tsunami inside a 1,500 m (5,000 ft) barrier.
Scientists have recently detected elevated levels of radiation in seawater samples collected near the reactors, and even as far away as the US west coast, confirming that there is an ongoing release of toxic materials from the plant. Workers at Fukushima have already filled purpose-built steel tanks with tons of toxic water from the reactors, but there are some areas that they simply can't access as radiation levels remain dangerously high, so high in fact that even robots sent in to investigate are having their wiring fried.
So, with 400 tons of groundwater flowing downhill into the reactor basements each day and some of that then spilling into the sea, there is a need for alternative solutions.
Construction on the frozen soil wall began in 2014 and is now complete. Japan's nuclear regulator gave the green light for the wall to be activated on Wednesday, setting in motion a plan to surround the four damaged reactors with an impenetrable barrier. The wall will be turned on in stages, with the first stage activating around 95 percent of the barrier. Plant operator Tepco says that initially leaving a gap in the wall will keep groundwater levels within the perimeter higher than that of the basement water, preventing the latter from spilling over.
Once it has assessed the success of this first stage, which it expects to cut the flow of groundwater into the buildings by 50 percent, it will seek approval to switch on the remaining portion to form a solid boundary around the four reactors. There is no set timeline, but the process is expected to take place over a period of months.

Gravity-measuring smartphone tech might save you from a volcano

The prototype mini gravimeter, known as the Wee-g
You may never have used a gravimeter to detect tiny changes in gravity (or for anything else), but they are commonly used in fields such as oil exploration and environmental surveying. They could have more applications, were it not for the fact that they tend to be relatively large and expensive. Scientists at the University of Glasgow have set about addressing that limitation by creating a compact gravimeter that incorporates smartphone technology.
Called the Wee-g, the prototype device utilizes the same micro-electromechanical systems (MEMS) that are manufactured for use in phones' accelerometers.
Whereas phone systems include "relatively stiff and insensitive" springs for maintaining orientation, the Wee-g incorporates a silicon spring 10 times thinner than a human hair. Combined with a 12-mm-square sensor, that added sensitivity allows the gravimeter to pick up even the most minute changes in the Earth's gravitational field.
To test the device, the researchers placed it in a basement room in the university, then used it to measure "Earth tides" – these are slight expansions and contractions of the Earth's crust, as caused by the gravitational pull of the Sun and Moon. Readings taken over a seven-day period from the Wee-g were consistent with mathematical models, which in turn have been shown to be an accurate measure of the tides.
"There are a lot of potential industrial applications for gravimeters, but their cost and bulkiness have made them impractical in many situations," says researcher Richard Middlemiss. "Wee-g opens up the possibility of making gravity measurement a much more realistic proposition for all kinds of industries: gravity surveys for geophysical exploration could be carried out with drones instead of planes; and networks of MEMS gravimeters could be placed around volcanoes to monitor the intrusion of magma that occurs before an eruption – acting as an early warning system."
The scientists are now working on making the device even smaller, and are pursuing commercialization with industry partners.

Caribbean’s largest solar array goes online

The first phase of the facility comprises 132,000 solar panels
A 33.4 MW photovoltaic solar array in the Dominican Republic has gone live this week. The installation at the Monte Plata solar facility is claimed to be the largest in the Caribbean, and a planned second phase of the project is expected to take it to a capacity of 67 MW.
Developed by Phanes Group in partnership with General Energy Solutions and Soventix, the first phase of the facility comprises 132,000 solar panels, reportedly tripling the number of solar panels in the Dominican Republic. Following the completion of the second phase, there will be 270,000 panels installed, with an expected annual output of over 50,000 MWh.
General Energy Solutions says the Monte Plata facility will provide clean electricity to more than 50,000 households and will save an estimated 70,000 tons of CO2, compared to electricity generated using fossil fuels. In addition, Phanes Group says it expects the plant to help free the country from the high electricity prices caused by fuel imports, which have restricted its economic development.
The first phase of the Monte Plata solar facility was inaugurated on Tuesday, with the second phase expected to be complete by the end of the year.

3D-printed bionic hand could soon be yours – if you need it

Youbionic's muscle-activated prosthetic hand
When we first reported on the relatively cheap 3D-printed robotic hand made by Youbionic back in 2014, we indicated that the device was only a prototype and that the makers were looking for funding to bring it to market. Well, apparently they've gotten the funding, because Youbionic is now taking pre-orders for the device.
Youbionic founder Federico Ciccarese told Gizmag that he hopes to fulfill orders this summer (Northern Hemisphere). He also said that he's used the time since that first prototype to refine the device and test it out both on himself and on people who've lost their hands. The action of the bionic hand is activated through muscular contractions and, Ciccarese says, will work with any muscle in the forearm, even though he uses his hand to actuate the robotic fingers in the video below.
The hand, which is run by the simple open-source Arduino processor, has the ability to perform simple gripping and pointing gestures. Using these gestures, it can pick up objects by applying just enough force to grip the items without crushing them. The pointing gesture can be used for typing or, ostensibly, for working a smartphone.
While Ciccarese's hand isn't as advanced as some other models out there, such as the highly-articulated Steeper bebionic, it also doesn't come with as high a price tag. While Steeper's artificial hand costs US$11,000, Youbionic's version will cost a much more affordable $1,200.

Study suggests massive exoplanet characterized by vast magma pools

If 55 Cancri e is indeed a lava planet as the new study suggests, it will ...
Using data collected by NASA's Spitzer Space Telescope, a team of astronomers has produced the first ever heat map of an Earth-like exoplanet. The alien climate map paints a grim picture of a world scorched by its close proximity to its host star, with extreme temperature variations noted between the star-facing and far sides of the planet.
To describe 55 Cancri e as a "super-Earth" could be considered somewhat misleading, as the two bodies share few common traits. The planet, which packs a mass of around eight times that of Earth into a body roughly twice the size, is incredibly inhospitable compared to the blue marble on which we reside.
55 Cancri e is tidally locked much like our Moon, meaning that the exoplanet only ever displays one face to its parent star. The so-called super-Earth is also known to orbit very close to its parent star, taking only 18 hours to complete a full orbit, resulting in hellish surface temperatures.
The new study drew on data collected by Spitzer over the course of 80 hours as it observed distinct phases of 55 Cancri e as it passed in front of its parent star. These phases when observed from Earth are very similar to the phases of our Moon, and allowed the astronomers to build up a global map of the unusual super-Earth detailing heat distribution and temperature changes across its surface.
The map displayed a surprising disparity in heat levels between the star-facing side of the exoplanet, which experiences a blistering temperature of 4,400° F (2,700 K), and the far side, which is believed to endure around 2,060° F (1,400 K).
The study jars with previous interpretations of data that had led some to believe that 55 Cancri e was something of a water world hosting a dense atmosphere that generated powerful winds responsible for distributing heat.
Instead, the team asserts that the large difference in temperature between the star-facing and far sides of the planet acts as evidence for a lack of such a system. It is possible that the star-facing side of 55 Cancri e is characterized by vast lava flows and prevalent magma pools. On the far side, the temperature drops harshly enough for the flows to solidify, preventing heat from being distributed effectively.
The notion of lava flows and pools of magma existing on the surface of 55 Cancri e is strengthened by an observed shift in the location of the planet's hottest point to a position directly beneath the parent star.

When SRK told Google CEO Sundar Pichai he wanted to be a software engineer

SRK in conversation with Sundar Pichai at Google headquarters in California Oct 2014. PHOTO: NDTV
Bollywood superstar Shah Rukh Khan told Google's new CEO Sundar Pichai last October that he had always wanted to be a software engineer, not an actor, NDTV reported.
The two met at Googleplex where they participated in a 30-minute chat during the promotional tour of Farah Khan’s Happy New Year last year. The star revealed that he was, in fact, very good at numbers. “No, really. I look stupid but I’m not, I’m really intelligent. I did electronics and got 98. Those were the days of diodes and triodes, not chips and things.”
King Khan even danced with the Google flash mob on the song Indiawale from Happy New Year. Interestingly, during the chat, Pichai told SRK to let him know if he ever wanted to switch careers.
Pichai, 43, was named chief executive officer of the internet titan on Monday, as Google unveiled a new corporate structure creating an umbrella company dubbed Alphabet. He will oversee the biggest company under that umbrella, which will still be called Google and will continue to include some of its household products, including its search engine, ads, maps, apps, YouTube and Android system.

Half diamond using a for loop

#include <iostream>
using namespace std;

int main()
{
    int x, y, z;
    for (x = 1; x <= 5; x++)
    {
        // Print the leading spaces: row x gets 5 - x of them.
        for (y = x; y <= 4; y++)
        {
            cout << " ";
        }
        // Print the stars: row x gets 2x - 1 of them (1, 3, 5, 7, 9).
        for (z = 1; z < (2 * x); z++)
        {
            cout << "*";
        }
        cout << endl;
    }
    return 0;
}

Toyota Setsuna Concept: A wooden time machine on wheels

The Setsuna is a concept vehicle designed and built by Kenji Tsuji and his team of ...
Debuting in April at Milan Design Week, the Toyota Setsuna concept is a wooden masterpiece that combines this timeless material with a 100-year chronograph. Made to embody the family affection that is often imbued in our cars, the Setsuna is meant to be passed along as an heirloom.
The Setsuna is a concept vehicle designed and built by Kenji Tsuji and his team of Toyota engineers. The engineers say that they chose wood as their medium for the car's build because wood changes and gains character over time as it's cared for. The idea being that the Setsuna would be passed on from generation to generation and the wood it's made from would change in hue and texture with that passing time.
The team used specific woods for various parts of the working car, including nearly all of the structure. Metal comprises only a very small part of the Setsuna's overall design. The exterior is made of Japanese cedar for its long life and particular hue. Japanese birch makes up the framing and some chassis components because of its rigid nature. Japanese zelkova, known for its durability, was used for the flooring, while castor aralia was used for the seating.
The exterior has two grain patterns that can be swapped out when desired. A straight grain gives a flowing, simple look whereas a cross grain has a more natural, characteristic appeal. To join the woods, traditional Japanese woodworking was called upon, allowing most of the joints to be secured without nails or screws. Okuriari and Kusabi techniques were used. Okuriari is a housed dovetail joint that can be easily slipped free without tools, but which holds its position when under pressure. Kusabi, a type of mortise and tenon joint, is used on framing and other structural components of the Setsuna.
To finish the woods, a wipe-lacquering finish was applied by hand to many of the car's parts, including mirror housings, body banding lines, and the steering wheel. This multi-layer lacquering technique is applied in stages, being wiped on, with the grain, repeatedly. A few aluminum finish pieces band the Setsuna to augment the look of the aluminum steering wheel frame and wheels.
Key to the concept is the Setsuna emblem, made to symbolize the "accumulation of moments." The radial emblem mirrors the radial clock/meter inside the car, which sits on the dashboard and ticks away the time over a 100-year span. Hands on the clock denote the minutes and hours while a rolling meter denotes years.

Facebook-owned smartphone messaging service WhatsApp has hit the billion-user mark

SAN FRANCISCO: Facebook-owned smartphone messaging service WhatsApp has hit the billion-user mark, according to the leading social network's chief and co-founder Mark Zuckerberg.
“One billion people now use WhatsApp,” Zuckerberg said in a post on his Facebook page.
“There are only a few services that connect more than a billion people.”
Google’s free email service, Gmail, is the latest of the Internet giant’s offerings to crest the billion-user mark, chief Sundar Pichai said Monday during an earnings call.
The ranks of people using WhatsApp have more than doubled since California-based Facebook bought the service for $19 billion in late 2014, according to Zuckerberg.
“That’s nearly one-in-seven people on Earth who use WhatsApp each month to stay in touch with their loved ones, their friends and their family,” the WhatsApp team said in a blog post.
After buying WhatsApp, Facebook made the service completely free. The next step, according to Zuckerberg, is to make it easier to use the service to communicate with businesses.
Weaving WhatsApp into exchanges between businesses and customers has the potential to create revenue opportunity for Facebook.
Recent media reports have indicated that Facebook is working behind the scenes to integrate WhatsApp more snugly into the world’s leading social network by providing the ability to share information between the services.

Can you guess how much Google CEO Sundar Pichai took home in 2015

Indian-born Google CEO Sundar Pichai received $100.5 million in total compensation for 2015, a company filing revealed on Tuesday.
Sundar Pichai, who rose through the ranks to become the CEO of Google last year, has a baseline salary of $652,500. However, the vast majority of that total compensation, $99.82 million, will vest fully in 2017.
According to data released by Economic Policy Institute last year, the average compensation for CEOs of the 350 largest firms in the US in 2014 was $16.3 million.
Google has awarded more than $600 million in stock options to Pichai that vest at various intervals over the coming years, according to data crunched by Bloomberg.
Pichai, a longtime Google executive who previously ran the Chrome and Android businesses, has been described by colleagues as “the better day-to-day CEO” compared to his predecessor and Google cofounder Larry Page.
In response to a question about his previously reported lavish stock options, Pichai told BuzzFeed, “I’m very fortunate. I take that as an opportunity to figure out thoughtfully how I give back to the world.”
Pichai, 43, was named chief executive officer of the Internet titan in 2015, as Google unveiled a new corporate structure creating an umbrella company dubbed Alphabet. He will oversee the biggest company under that umbrella, which will still be called Google and will continue to include some of its household products, including its search engine, ads, maps, apps, YouTube and the Android system.
Alphabet will be run by Google chief Larry Page, who showered praise upon Pichai, senior vice president of products. "I feel very fortunate to have someone as talented as he is to run the slightly slimmed down Google and this frees up time for me to continue to scale our aspirations," Page said in a blog post.
Page said he was impressed with Pichai's "progress and dedication to the company" and promised to continue to mentor Pichai, who has been at Google since 2004. "I have been spending quite a bit of time with Sundar, helping him and the company in any way I can, and I will of course continue to do that."

Mujhe Dushman ke Bachon ko Parhana Hai | ISPR New Song | APS Peshawar

Tidal forces pinpointed as catalyst for Enceladus' eruptions

Cassini image of Enceladus' southern pole
Researchers from the University of Chicago and Princeton University have generated a new computer model that successfully simulates the mechanism driving impressive geyser eruptions observed taking place on the Saturnian moon Enceladus. The geysers have been active since Cassini first observed the phenomena in 2005, and were likely erupting long before the probe entered orbit around Saturn.
Enceladus has served as the focal point of repeated observation over the course of Cassini's prolonged mission characterizing Saturn and her moons. Yet, in spite of the generous attention paid to the moon, the mechanism that drives the icy body's impressive geysers has remained a mystery.
These geysers spew forth vast quantities of frost and vapor from large rents, or "tiger stripes," in the moon's south polar region. Furthermore, the subsurface ocean believed to be buried beneath Enceladus' icy exterior is considered one of the most likely places in our solar system to discover the presence of alien life.
In an attempt to gain a better understanding of the materials cast out by Enceladus' impressive cryovolcanoes, the Cassini spacecraft dived through one of the moon's plumes last year in search of clues as to whether the environment beneath the surface of the seemingly barren moon is hospitable to life.
The constant nature of the eruptions has given rise to a number of questions. For example, how have the geysers managed to operate continuously for over a decade without being sealed at least temporarily by a build-up of frost particles choking the entrances of the shafts?
Another anomaly has been observed wherein eruptions fail to achieve their peak activity until roughly five hours after the expected time, based on tidal response models. One theory attempted to explain away the lag by asserting that Enceladus boasted a spongy shell that took longer than anticipated to react to the tidal pressures exerted on the icy body by the nearby gas giant.
The researchers created a computer model of Enceladus, complete with a series of parallel slots located at the observed eruption sites that extended from the surface of the moon down to its underground ocean.
This model was then subjected to the tidal forces generated by Saturn's gravity as it interacts with the moon's interior. Tidal pumping in the slots causes turbulence, heating the water contained within. However, the activity did not occur all at once in the new model, with the key variable in eruption timing proving to be the diameter of the separate parallel shafts.
The influence of the gas giant's gravity on water in narrow shafts led to an eruption up to eight hours after the predicted peak level of activity, while wider shafts respond much faster. The sweet spot in between these two extremes creates an eruption with a five-hour delay, explaining the lag observed by the Cassini spacecraft.
According to the researchers, their model could be tested against the new data collected by Cassini during its recent flyby of Enceladus. If the heat driving the plumes is in fact being generated from deep within the vents by tidal pumping, then the surface of the south polar region between the vents would register as cold in the Cassini data.

Windows 10 review: Microsoft builds an OS for the future

Windows 10: can Microsoft fix the mistakes of Windows 8?
After the underwhelming Windows 8, Windows 10 is Microsoft's second attempt to build an operating system that's ready for the future while staying loyal to the past. The Start menu is back, Cortana makes the jump to the desktop, and Microsoft has put together an OS that it hopes is truly ready for computers, tablets, phones, games consoles and beyond.
Windows 10 dials back some of the drastic changes introduced with Windows 8 without abandoning Microsoft's original goal of an OS that can work on any screen. The Start menu returns, but keeps Live Tiles; universal apps (coded to run on anything from a phone to a laptop) are still here, but they can work like normal desktop programs; there is a tablet mode, but it only appears when you're actually on a tablet; and so on.
It's a long series of compromises between Windows past and Windows future and, on the whole, it works very well.
The Settings app in Windows 10 brings over more of the options in Control Panel, for example, and seems far more fully realized than it was in Windows 8. It's indicative of Windows 10 as a whole, a more polished and well-thought-out version of what its predecessor started. You'll spend less time wondering where settings are, and more time in the new interface, rather than digging back through legacy screens.
If you're using Windows 10 on a laptop or desktop, it's a much more satisfying experience. The confusing "hot corners" of Windows 8 have gone, and the phone and tablet elements of the OS are well hidden in the background. If you're upgrading from Windows 8 you'll be pleasantly surprised, and if you're moving up from Windows 7 you'll feel right at home.

Cortana and the revamped Start menu

On mobile devices we've seen a shift towards a greater use of voice control and intelligent assistant apps like Siri and Google Now. Microsoft has its own horse in this race in the form of Cortana, and the app is now available on your computer too (assuming you're in a country where Cortana is supported).
If you're new to Cortana, it handles everything from web searches to reminders. You can ask for a weather forecast or the number of miles in a kilometer, launch apps and even toggle Windows settings – it feels very much like a voice-controlled, context-aware extension of the Start menu itself, and Microsoft has managed to integrate it in a way that feels intuitive.
And if you don't want to shout instructions at your computer, you don't have to. You can type queries into the search box on the taskbar just as easily, and we found ourselves using the box very often to find apps, files, settings, websites and more besides. It feels like a natural extension of the Start menu.
Customizing said Start menu is simple to do and it would appear Microsoft has finally come up with something to please the majority of its users. On tablets, the full-screen, tile-based Start screen we saw in Windows 8 comes back, as it's much more suitable for tapping at with your fingertips, but if you're on a laptop or desktop you'll never see it.
Elsewhere on the desktop we found two new features very useful indeed: the ability to snap windows to four quarters of the screen (as well as each side) and the virtual desktops, an official feature at last, enabling you to move application windows to several desktop spaces rather than one. Both make it easier to arrange a lot of windows and applications on screen and work particularly well on bigger displays.
The new Task View works well too, showing all of your open windows on one screen so you can jump between them more conveniently (it's similar to Mission Control on a Mac). The desktop improvements are exactly what they should have been in Windows 8 – clever, useful and not completely out of step with everything that has gone before.

Apps and applications

Microsoft's universal app store is still something of a wasteland, no doubt due in part to the woeful take-up of Windows Phone on mobile as well as the old ARM-based Surfaces that didn't run desktop apps. Having apps that jump seamlessly from desktop to mobile is a noble aim but if your Surface Pro 4 can run Photoshop why would you build a cut-down touchscreen version as well?
There are some big names here – official apps for Netflix, Spotify, Dropbox and Evernote, for example – but no compelling reason why you would pick them over the desktop or even web-based equivalents (at least on a desktop or laptop machine). As polished as Windows 10 feels in general, the universal app initiative is still very much a work-in-progress.
For Windows old-schoolers like us, we didn't have much need to delve into the world of universal apps, such as the ones Microsoft has provided for email, contacts, photos and maps. Perhaps these will be more relevant for users of Windows 10 Mobile, but right now Microsoft looks like it's less committed to that particular version of the OS than ever.
There's a new browser in the form of Microsoft Edge, which seems designed to challenge Google Chrome head-on. It's certainly an improvement on the creaking Internet Explorer in terms of speed and looks (IE is still present for legacy purposes), but it doesn't yet feel smooth enough to take on Chrome – there's no extension support here, for example, although it's currently available in Previews and expected soon in public builds.
Xbox integration has been improved and keeps on improving: being able to stream games to your laptop from the console is a real bonus for gamers, as is support for next-gen kit like the HTC Vive and Oculus Rift (though older versions of Windows handle VR just as well) and eventually Microsoft's own HoloLens.

An OS for the future

Taken as a whole, it's difficult to find fault with Windows 10, at least in its desktop and laptop form. It's intuitive, robust and well-designed, reversing some of the mistakes made with Windows 8 and making sure the software is suitable for tablets, 2-in-1s and everything that comes afterwards.
With its shiny new browser, intelligent assistant app and modern-looking UI, not to mention links with kit like the HoloLens, it feels very much like an operating system for the future. After several months of use, it simply blends into the background like any good OS should, giving you instant access to your applications, the web, and any settings you might need to get at along the way.
Windows 7 users can at last upgrade with confidence and even Mac owners may find themselves looking across enviously at the range of third-party hardware the OS can work with (from VR headsets to Android devices). Windows 10 isn't without its minor frustrations, but it gives Microsoft and its users a strong foundation for the next generation of computing.

Wednesday, 30 March 2016

Caterham Cars gives bicycle technology a try

The prototype Caterham chassis, influenced by bicycle technology
Caterham Cars has lasted over six decades by delivering a series of lightweight sports cars known for their unique look and giant fun factor. By sometime next year, buyers of the company's iconic Caterham Seven may have the option of an even lighter-weight model incorporating the butted tube technology used in bicycles.
The bicycle influence came about when Reynolds Technology, a company known for making quality bicycle frame tubes, approached Caterham with the idea of making an ultra-lightweight chassis using the butted tube process it had patented back in 1897. Once Caterham agreed, Simpact Engineering was enlisted for its design expertise.
The trio then had six months to complete the research and design of the chassis based on the parameters set by the UK's Niche Vehicle Network, which supplied the funding.
The final result was a prototype vehicle incorporating a chassis that came in more than 10 percent lighter than the already slimmed-down model found in the current run of Caterham Seven models.
Caterham said the plan is to further refine the chassis with the goal of offering it as an option for models in the 2017 lineup, at an extra cost of between £1,000 and £2,000 (about US$1,438 and $2,876). The company will also continue to develop the prototype vehicle utilizing the butted chassis, with the intent of launching it as a new model at some point in the future.
The automaker additionally stated that the process and technology developed for this project will be available to license to companies that make trucks, cranes or any business in which weight savings would be beneficial. Caterham does, incidentally, make its own line of bikes – without butted frames.

Zeiss telephoto lens used on Apollo 15 up for auction

The 500mm lens was used on the Moon with a Hasselblad lunar camera similar to this ...
In October 2015, Boston-based RR Auction set a new record for astronaut memorabilia when the only privately-owned watch to be worn on the Moon sold for US$1.6 million. This watch, which was flown on Apollo 15 in 1971, is now followed to the auction block by a Zeiss telephoto lens from the same mission. The Zeiss Tele-Tessar 500mm f/8 lens by Carl Zeiss AG was used by Mission Commander David R Scott with a Hasselblad camera body to set a new standard of photography on manned lunar missions, and is expected to fetch around half a million dollars.
Apollo 15 was the fourth lunar landing and the first long-stay mission that focused primarily on science. Flying from July 26 to August 7, 1971, its crew consisted of Commander David Scott, Lunar Module Pilot James Irwin, and Command Module Pilot Alfred Worden, and was regarded by NASA as the most successful manned mission up to that time.
On July 30, Scott and Irwin landed in the rugged Apennine region near Hadley rille in Mare Imbrium. It was the first landing to carry the electrically-powered Lunar Rover, and is remembered for the moment when Scott carried out the famous "Galileo test," where he dropped a feather and a hammer at the same time to prove that they fell at the same speed in a vacuum.
One major mission goal for Apollo 15 was to improve the quantity and quality of photography. Despite returning some of the most dramatic images in history, the Apollo landings were constantly struggling with how to get the best photos with cameras strapped to the front of a spacesuit and operated through thick, pressurized gloves.
One improvement was the inclusion of a bespoke 500mm lens for the Hasselblad Electric Data Camera (HDC) used by Scott, who lobbied hard for the lens during mission planning. The purpose of the lens was to allow the astronauts to take clear, detailed photographs of geological formations that they could see, but not visit. In all, the 500mm took 293 photos.
According to RR Auction, the lens, which is somewhat the worse for wear, is 12 in (30.5 cm) long and painted silver to keep the internal workings cool in the intense sunlight of the lunar surface. NASA part numbers are engraved near the mount, and the adjustment rings have special tabs so they can be operated with spacesuit gloves.
The 500mm lens took a few knocks during the mission – especially when Scott fell down. Because of this, lengths of tape on the lens housing still hold particles of moon dust.
"After our three days on the Moon, [the lens] was returned to the Command Module in lunar orbit where it was used for two more days to photograph the surface of the Moon," said Scott in a letter that is included in the sale. "After the mission, I received the lens from NASA as a memento of the mission and it has been in my personal collection since that time."
The 500mm lens is part of the Space and Aviation Auction, which runs from April 14 to 21 and is expected to sell for $400,000 to $600,000.

$3,500 SolarSkiff electric boat rides to the water on your car roof

The SolarSkiff relies on a Torqeedo Travel 1003 electric motor

Not everyone needs a large, expensive boat to get out on the water. Some folks just want something simple, affordable and easy to float. For those types, Mississippi-based Betta Boats LLC presents the SolarSkiff, a compact, dirt-simple solarized electric boat that doesn't require a trailer. If you want to motor through the water for fishing or leisure without the investment or hassles involved with other boats, the SolarSkiff looks like an intriguing option.
While we enjoy ultra-luxurious, high-tech ships, boats and water toys, too, there's something truly inspiring about simple, efficient vessels built for grabbing, going and getting onto the water. Whether it's a motorized lounge chair, a self-inflating paddleboard or a floating bike, these simple watercraft prove that you don't have to spend tens or hundreds of thousands of dollars to have a little fun on your local lake or river.
The SolarSkiff reminds us a lot of the flat-decked BeachRay we looked at in 2014, only it's even simpler and more compact and uses an electric motor with available solar charging. Each SolarSkiff is powered by a small Torqeedo Travel 1003 motor with a snap-on lithium-ion battery pack. That 3-hp motor and battery together weigh under 30 lb (13.6 kg) and offer a 10- to 20-mile (16- to 32-km) trolling range and a 5-mph (8-km/h) top speed. The battery recharges in about seven hours from either a 120-volt outlet or an available 50 W solar charging set-up that can be mounted on deck and used to charge as you float.
Betta Boats says that it uses a patent-pending hull construction, covering a block of closed-cell polystyrene foam with fiberglass-reinforced mesh and an acrylic coating. The hull is 8 ft (2.4 m) long with a 4-ft (1.2-m) beam, and is designed to ride in the bed of a pickup truck, as well as on a car top. The owner removes the swiveling pedestal seat(s) and motor before securing the hull to the roof rack.
Obviously you're not going to undertake any grand sea crossings with the SolarSkiff, but Betta Boats recommends it for navigating smaller, calmer waterways, such as lakes, canals and bayous.
Betta Boats launched the SolarSkiff at last month's Miami International Boat Show, where it showed a prototype. It advertises one-, two- and four-person models with prices between US$3,500 for the 841 single-person plug-in boat ($4,250 for the solar version) and $5,250 for the 844 four-person with solar charging. The company offers custom hull sizes up to 24 feet (7.3 m), fishing-specific customization and camo wraps. Beyond the United States, it is looking at markets in the Caribbean, Mexico, Central America and Europe.

Lithium-oxygen breakthrough clears the air for boosted batteries

The formation of the lithium superoxide is the result of the spacing of iridium nanoparticles in ...

Boasting an energy density similar to that of gasoline, lithium-air (or lithium-oxygen) batteries may one day prove a panacea for the range anxiety associated with electric vehicles. But first there are a number of challenges that need to be overcome, one of which is the unwanted buildup of lithium peroxide on the electrode, which hampers this type of battery's performance. Scientists have now figured out a way that this mess might be avoided – an advance they say could lead to batteries with five times the energy density of those currently available.
By doing away with clunky internal oxidizers and instead drawing on oxygen from the air to power its chemical reaction, lithium-air batteries could offer energy densities many times that of current lithium-ion batteries.
But an undesirable byproduct of this chemical reaction is the formation of lithium peroxide, which obstructs the electrode's conducting surface. One potential way to overcome this is to alter the electrode and chemical makeup of the electrolyte so that it produces lithium hydroxide instead, as demonstrated by scientists at the University of Cambridge late last year.
But by focusing purely on the electrode, a team at the Argonne National Laboratory has worked out how such a battery could be made to produce lithium superoxide during discharge, rather than lithium peroxide. It says that the lithium superoxide is more easily broken down, dissociating into lithium and oxygen to allow for higher efficiency and an improved cycle life. It could also enable "closed system" lithium-air batteries, which wouldn't require intake of extra oxygen from the environment, making them safer and more efficient.
"The stabilization of the superoxide phase could lead to developing a new closed battery system based on lithium superoxide, which has the potential of offering truly five times the energy density of lithium ion," says Khalil Amine, a member of the research team.
The formation of the lithium superoxide is attributed to the spacing of iridium nanoparticles in the electrode. While lithium superoxide has traditionally been hard to synthesize due to its thermodynamic instability, the researchers say the iridium atoms look to be a good recipe for its growth moving forward.
"This discovery really opens a pathway for the potential development of a new kind of battery," says Larry Curtiss, a battery scientist at Argonne. "Although a lot more research is needed, the cycle life of the battery is what we were looking for."

New catalyst could replace platinum in cheaper fuel cells

This chemical drawing of a nano-island of graphene into which iron-nitrogen complexes have been embedded shows ...

A more cost-effective fuel cell catalyst material, consisting of iron-nitrogen complexes embedded in tiny islands of graphene, could be used in place of costly platinum. Research by teams at Helmholtz Zentrum Berlin and TU Darmstadt has produced the catalyst material and found that its efficiency approaches that of platinum.
To synthesize the mix, the researchers had to devise a way of reducing metal contaminants in the catalyst material to near-zero. Inorganics, usually metals, interfere with a catalyst's efficiency by reducing the oxygen reactions that are at the heart of a fuel cell's catalytic function.
The answer was an iron-nitrogen complex "doped" into graphene islands of just a few nanometers in diameter. This Fe-N-C catalyst has been tested and found capable of activity levels comparable to common – and expensive – platinum (Pt/C) catalyst materials.
This new process builds on a previous one developed by HZB, which held a world record for efficiency. The new approach aims for higher purification of the catalyst material, using a combination of thermal treatment and etching to reduce the amount of foreign material in the catalytic compound.
Junior professor Ulrike Kramm of TU Darmstadt used the process to create a low-cost catalyst that had graphene layers made up exclusively of FeN4 complexes, eliminating the need for iron nanoparticles, as was previously the case, while improving the reactivity of the catalyst greatly. This allows the catalyst's designers to add promoters to improve catalytic production to meet the needs of the intended fuel cell's purpose.
This new catalyst could be used in fuel cells of several types and for many purposes. Fuel cells in the automotive industry, which can include hydrogen, natural gas, and other fuels, would become far less expensive. Fuel cells for scientific and military use would also be improved at reduced cost, while consumer fuel cells for electronics and other devices would likewise see benefit from this new catalytic design.

Smartphone and laser attachment form cheap rangefinder

The prototype sensor used the camera in an ordinary smartphone and a commodity laser emitter that ...

A team of researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) led by Li-Shiuan Peh has come up with a new infrared depth-sensing system. The new system, which works outdoors as well as in, was built by attaching a US$10 laser to a smartphone, with MIT saying the inexpensive approach could be used to convert conventional personal vehicles, such as wheelchairs and golf carts, into autonomous ones.
Inexpensive rangefinding devices, such as the Microsoft Kinect, have been a great help to robotics engineers. Such off-the-shelf products, which rely on an infrared laser to measure distance, allow for rapid prototyping and the creation of robots that can sense and navigate their environments, without engineers having to constantly reinvent the necessary technology.
Unfortunately, Kinect and similar infrared-based systems tend to be a bit fussy when it comes to ambient light conditions. Sunlight, fire, and heat sources can put them off and even indoors subdued light is often required for them to work.
Commercial outdoor rangefinders have been common for over 30 years, but they work by firing high-energy infrared bursts, which are extremely short to minimize the danger of eye damage. In addition, such systems are very expensive – often costing tens of thousands of dollars.
The MIT system gets around the need for high-energy bursts by timing its measurements to the emission of low-energy bursts. It does this by capturing four frames of video, two of which measure reflections of the laser light and two that record only the ambient infrared light, then subtracting the latter from the former to make range measurements.
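The subtraction step can be sketched in a few lines of NumPy. This is a hypothetical illustration of the principle, not the CSAIL code; the function and frame names are invented:

```python
import numpy as np

def laser_only(laser_on, laser_off):
    """Subtract ambient-only frames from laser-illuminated frames.

    laser_on / laser_off: lists of 2D grayscale IR frames. Returns an
    image containing only the laser's contribution, from which the
    pixel position of the reflected stripe can be located.
    """
    on = np.mean(np.stack(laser_on).astype(float), axis=0)
    off = np.mean(np.stack(laser_off).astype(float), axis=0)
    return np.clip(on - off, 0, None)  # ambient light cancels out

# Toy frames: an ambient gradient, plus a bright stripe on row 4 when
# the laser fires.
ambient = np.tile(np.linspace(10, 50, 8), (8, 1))
stripe = np.zeros((8, 8))
stripe[4, :] = 100.0
on_frames = [ambient + stripe, ambient + stripe]
off_frames = [ambient, ambient]

print(np.argmax(laser_only(on_frames, off_frames), axis=0))  # row 4 in every column
```

After subtraction only the stripe survives, so its row can be found reliably even when the ambient infrared level varies across the scene.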
In its current prototype form, the MIT system uses a smartphone with a 30-frame-per-second camera, which produces a delay of about an eighth of a second. This limits the accuracy of the system, though more advanced 240-frame-per-second cameras with a delay of a 60th of a second are available.
With what is called "active triangulation," the MIT system uses the attached laser to emit light in a single plane, which is measured by the camera's 2D sensor. MIT says that at ranges of three to four meters (10 to 13 ft) the device boasts an accuracy within millimeters, while at five meters (16 ft) this is reduced to six centimeters (2.3 in). However, when the team installed the system in the driverless golf cart developed by the Singapore-MIT Alliance for Research and Technology, it could produce depth measurements suitable for travel at 15 km/h (9 mph).
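Active triangulation itself reduces to similar-triangles geometry. A minimal sketch, assuming a hypothetical setup in which the laser plane runs parallel to the camera's optical axis at a known baseline offset (the numbers are invented for illustration):

```python
def depth_from_stripe(row_offset_px, focal_px, baseline_m):
    """Depth from the image-row offset of a reflected laser stripe.

    Assumed geometry: the laser emits a plane parallel to the camera's
    optical axis, offset vertically by baseline_m. A surface at depth z
    images the stripe focal_px * baseline_m / z pixels from the image
    center, so z = focal_px * baseline_m / row_offset_px.
    """
    return focal_px * baseline_m / row_offset_px

# 1,000 px focal length and a 5 cm baseline: a stripe 12.5 px off-center
print(depth_from_stripe(12.5, 1000, 0.05))  # 4.0 m
# Nearer surfaces push the stripe further from center: 25 px -> 2 m
print(depth_from_stripe(25.0, 1000, 0.05))  # 2.0 m
```

The inverse relationship also explains the accuracy figures above: at long range a whole pixel of stripe position covers many centimeters of depth, so precision falls off with distance.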
According to the team, once the technology is mature, it could lead to a plug-in method of creating autonomous golf carts, wheelchairs or other small vehicles, package delivery drones, or expendable robotic vehicles.
"My group has been strongly pushing for a device-centric approach to smarter cities, versus today's largely vehicle-centric or infrastructure-centric approach," says Peh. "This is because phones have a more rapid upgrade-and-replacement cycle than vehicles. Cars are replaced in the timeframe of a decade, while phones are replaced every one or two years. This has led to drivers just using phone GPS today, as it works well, is pervasive, and stays up-to-date. I believe the device industry will increasingly drive the future of transportation."
The MIT team says that as new camera technology becomes available, it will improve the accuracy of the system. At the moment, mobile phone cameras use a rolling shutter, which creates an image by scanning across the surface of the sensor in a 30th of a second. Newer phones will use a global shutter, in which all the photodetectors are read at once, allowing the MIT system to use shorter, higher-energy bursts for longer-range measurements.

Dramatic 3D images reveal super-small motors that drive bacteria

Three of the motors that drive the locomotion of different bacteria

When you want to get together with friends or family, chances are you employ a motor. That is to say, you likely get into a car or on some form of public transport to arrive at a meeting point. Bacteria really aren't very different. They have various means of getting around, but they all involve some kind of biological motor — and those motors have just been imaged in dramatic and colorful 3D by researchers at the California Institute of Technology (Caltech).
To image the micromotors, the team employed a technique known as electron cryotomography. This involves freezing bacterial cells so quickly that the water molecules they contain don't have time to arrange themselves into ice crystals. With the cells locked in their original structure this way, an electron microscope takes a series of 2D images that are then digitally assembled into 3D images of the motors. The technique was groundbreaking, with Caltech reporting that it was the first time bacteria's biological locomotion machinery had ever been imaged in 3D.
"Bacteria are widely considered to be 'simple' cells; however, this assumption is a reflection of our limitations, not theirs," says Grant Jensen, a Caltech professor of biophysics and biology. "In the past, we simply didn't have technology that could reveal the full glory of the nanomachines – huge complexes comprising many copies of a dozen or more unique proteins – that carry out sophisticated functions."
Working with colleagues in the US, UK and Germany, Jensen and his team imaged two different kinds of bacterial motors.
The first, reported in the March 11 issue of Science, was from a soil bacterium known as Myxococcus xanthus and is called the type IVa pilus machine (T4PM). This mechanism lets the bacterium move by sending out a long fiber called a pilus, which attaches to a surface so the bacterium can reel itself forward along the tether.
To unravel the fine details of this mechanism, the researchers created a series of mutant cells, each lacking a different component of the T4PM, which they then compared to the intact bacteria so they could map the mechanism. In their observations, they found that the T4PM consists of four interconnected rings. They also found that it's quite powerful.

"In this study, we revealed the beautiful complexity of this machine that may be the strongest motor known in nature. The machine lets M. xanthus, a predatory bacterium, move across a field to form a 'wolf pack' with other M. xanthus cells, and hunt together for other bacteria on which to prey," Jensen says.
The second biological motor imaged by the Caltech team was the one that drives the flagellum — a tiny whip-like propeller — which they observed in several different bacteria.
They discovered that there are motors inside the bacteria made from proteins that turn the flagellum. What's more, these protein structures were often found quite far from the flagellum, which means they could generate significant torque. It's kind of like a small rotor on a fishing boat, versus a large one on a yacht. Their work with the flagellum motors was published in the March 29 issue of the journal PNAS.
"These two studies establish a technique for solving the complete structures of large macromolecular complexes in situ, or inside intact cells," Jensen says. "Our electron cryotomography technique is a good solution because it can be used to look at the whole cell, providing a complete picture of the architecture and location of these structures."

Airbnb offers a night of sleeping with the sharks

The underwater bedroom is located at France's Aquarium de Paris

We've all heard of monsters under the bed, but what about sharks? As its latest stunt, the likes of which have previously seen it offer a night in a floating house, Airbnb is offering the chance for three people, each with a friend or partner, to spend a night submerged in a shark tank.

Though this is said to be the first time an underwater bedroom has been listed (of sorts) on Airbnb, we've seen underwater rooms make a splash elsewhere. The Manta Resort off the coast of Tanzania offers underwater sleeping quarters, for example, while the Maldives' Hurawalhi Island Resort and Spa will soon open with the "world's largest" underwater restaurant, which will double as a honeymoon suite.
Located at France's Aquarium de Paris, the underwater bedroom is submerged 10 m (33 ft) deep in 3 million liters (660,000 gal) of water. It has a circular design with floor-to-ceiling windows held in place by a frame, effectively creating a 360-degree transparent wall, which will be all that separates guests from the 35 sharks in the tank.
The room was designed specifically for the aquarium and construction of it began around a year ago. The architect and builders worked closely with aquarium staff, including the head of animal welfare, to ensure its safety for both the guests and the sharks. Airbnb says the room was "tested extensively" in the Mediterranean Sea before being installed in the shark tank.
Airbnb does offer up some stipulations for winners. Firstly, each winner and their guest must weigh no more than 190 kg (418 lb) combined, and they must not take photos after dark due to sharks being sensitive to light. Among the rest of the guidance provided is advice not to "eat the chum," to keep heads and feet in the bedroom at all times and not to sleepwalk, go night swimming or dive in.
Already in place, allowing the sharks time to acclimatize to it, the bedroom will remain after the guests have left. It will serve as a place from which the sharks can be studied and will allow for observation of more natural shark behavior by reducing the number of observational dives required.
For the chance to spend a night in the underwater bedroom, individuals must enter a competition, telling Airbnb a little about themselves and why they "belong with the sharks for a night." The winners will be welcomed by world record-breaking freediver, underwater photographer and shark conservationist Fred Buyle, provided with a guided tour of the aquarium, and treated to a tank-side meal for two.

Flying electric scooter aces 46-minute maiden test flight

The German physicist behind the Evolo manned multicopter and the Volocopter 2-seater has just taken his first flight aboard another remarkable aircraft: a flying electric scooter. Thomas Senkel flew his Skyrider One prototype for some 46 minutes in the idyllic surroundings of the Canary Islands, marking what he believes is the first electric, road-registerable two-wheeler to take to the sky.
If flying car proponent Dezso Molnar is on the money, we should be thinking less about flying cars, and more about roadable aircraft. Simple, single-seat designs that can straddle the gap between the road and the sky to achieve multi-mode transport in the most efficient way possible.
On that axis, Thomas Senkel's Skyrider One scores very highly as a practical, simple and elegant design. It's a simple two-wheel electric scooter, with a 6-kW (8-hp) hub motor to drive the rear wheel, and a 13-kW (17-hp) motor driving a large rear-mounted propeller. A regular tandem paraglider canopy can be unfurled when you want to fly, and then it's a matter of gaining enough speed in scooter mode to fill up the 'chute, lifting off, then engaging the propeller drive to give you power in the air.

Flying a prototype aircraft – especially a hybrid design like this one – must be a nerve-wracking experience. Indeed, as Senkel told us: "I was very nervous in the beginning and at the landing. I have some experience with powered paragliders, but the behavior of the Skyrider One was unknown. After landing, I was relieved that everything went really fine. The next flight would be a lot easier."
Senkel sees simple designs like the Skyrider One as the quickest and easiest way to achieve flying car-like capabilities.
"You can drive to your airstrip, fly to somewhere, and drive home after landing," he says. "With all-electric drive, it's quiet and doesn't make any pollution. It can be used in areas where combustion engines are not allowed. And two wheels are enough, no need for more. Take off and landing is easy with some help from your feet."
Skyrider One can take off on any flat terrain or airstrip. The rider needs to face into a slight headwind; crosswinds aren't suitable. Once in the air, it's possible to switch the motor off altogether and ride thermals to keep yourself aloft for potentially hours at a time without draining the battery.
The prototype carries just two small 3-kWh lithium polymer batteries, giving it a total range of up to 120 km (75 mi) on the road at a maximum speed of around 60 km/h (37 mph), or 30 minutes of powered flight if you run the propeller constantly.
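A quick sanity check on those endurance figures, assuming (for illustration) that the 13-kW propeller motor draws its full rated power for the whole flight:

```python
battery_kwh = 2 * 3.0     # two 3-kWh lithium polymer packs
motor_kw = 13.0           # the 13-kW (17-hp) propeller motor at full power
flight_min = battery_kwh / motor_kw * 60

print(round(flight_min))  # ~28 minutes, in line with the quoted 30
```

In practice the motor can be throttled back, or switched off entirely while riding thermals, so real flight times could be considerably longer.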
Senkel believes it's the world's first flying electric two-wheeler: "All other powered paragliders I know come with three or four wheels and a combustion engine," he tells us. It's also extremely light, weighing in at just 108 kg (238 lb).

Senkel is now looking for production and marketing partners to take Skyrider One to the market. The production version will use a folding prop with no surrounding cage in order to make it easier to ride on the road, and Senkel's already thinking about what other improvements can be made between now and then.
Even though we're just at the dawn of the electric aviation age, Thomas Senkel has already built himself a pretty astounding CV. He's on the bleeding edge of the manned multirotor movement with the Evolo and Volocopter projects, and now with this small, practical electric flying scooter he's broken new ground in the multi-mode transport segment. Not to mention his work on the Hendo hoverboard and anti-gravity devices. We're officially putting him on our list of inventors to watch out for!

Brain-like supercomputing platform to explore new frontiers

The 16-chip IBM TrueNorth platform Lawrence Livermore will receive later this week

In the old days, it was common to hear a computer chip referred to as an "electronic brain." Modern chip designs are now making such labels even more apt. Lawrence Livermore National Laboratory (LLNL) is set to take receipt of a brain-inspired supercomputing platform developed by IBM Research. The first-of-a-kind system is based on a neurosynaptic computer chip known as IBM TrueNorth, and can process the equivalent of 16 million neurons and 4 billion synapses while consuming just 2.5 watts of power.
The TrueNorth system is billed as a fundamental departure from the way computers have been designed for over 70 years, using digital neurons and synapses that process information in a manner similar to that of the living brain – specifically, the right hemisphere of the human cerebral cortex. This isn't the first time that such a computer has been attempted, but according to IBM, TrueNorth is so advanced that it not only overcomes certain critical bottlenecks of the conventional von Neumann architecture, but requires new ways of thinking to exploit the new hardware.
IBM says that the TrueNorth technology is capable of creating computers operating at exascale speeds – a billion billion calculations per second. This is fifty times faster than current petaflop computers, yet the system is much smaller and uses much less power.
Originally developed under DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program with the help of Cornell University, the computer being delivered to LLNL is made of 16 TrueNorth chips – each of which contains 5.4 billion transistors that together create one million digital neurons connected by 256 million electrical synapses. In contrast, there are 100 billion neurons in the human brain.
Despite carrying out 46 billion synaptic operations per second, IBM says that a TrueNorth processor uses only 70 milliwatts of power at 0.8 volts. The 16 chips working together are the equivalent of 16 million neurons and four billion synapses, while consuming little more electricity than a tablet computer.
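Those platform totals follow directly from the per-chip figures. A quick back-of-the-envelope check (note that the 2.5 W quoted for the full platform presumably covers supporting hardware beyond the chips themselves):

```python
chips = 16
neurons = chips * 1_000_000       # one million digital neurons per chip
synapses = chips * 256_000_000    # 256 million electrical synapses per chip
chip_power_w = chips * 0.070      # 70 mW per chip at 0.8 V

print(f"{neurons:,} neurons, {synapses:,} synapses")
print(round(chip_power_w, 2), "W for the chips alone")
```

That works out to 16 million neurons and roughly 4.1 billion synapses on just over a watt of chip power, consistent with the figures IBM quotes.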
Part of the reason for this performance is that by imitating the human brain, the TrueNorth neuromorphic processor overcomes some of the limitations of the conventional von Neumann architecture. For example, program instructions and operation data can pass along the same route at the same time, which isn't possible in conventional processors. In addition, the TrueNorth processor doesn't need to be turned on all the time, only when needed, which produces considerable power savings.
Another advantage, according to IBM, is that where standard computers focus on language and analytical thinking like the left hemisphere of the human brain, TrueNorth is more like the right side, with an emphasis on pattern recognition and integrated sensory processing, as well as the ability to infer complex cognitive tasks.
LLNL's TrueNorth system will be part of the National Nuclear Security Administration's Advanced Simulation and Computing (ASC) program, where it will be used to study machine learning applications and deep learning algorithms and architectures, and to conduct general computing feasibility studies. The end game is to find ways to improve cyber security – especially in regard to protecting US nuclear weapons and ensuring their reliability without underground test explosions.
Under its contract with IBM Research, LLNL will receive the 16-chip TrueNorth system along with an "end-to-end ecosystem" to produce machines that can imitate the brain's capabilities for perception, action, and cognition. This will include a simulator; a programming language; an integrated programming environment; a library of algorithms, applications, and firmware; tools for making neural networks for deep learning; a teaching curriculum; and cloud enablement.
"Neuromorphic computing opens very exciting new possibilities and is consistent with what we see as the future of the high performance computing and simulation at the heart of our national security missions," says Jim Brase, LLNL deputy associate director for Data Science. "The potential capabilities neuromorphic computing represents and the machine intelligence that these will enable will change how we do science."

Reaching for the stars: How lasers could propel spacecraft to relativistic speeds

How do you send man-made probes to a nearby star? According to NASA-funded research at the University of California, Santa Barbara (UCSB), the answer is simple: assemble a laser array the size of Manhattan in low Earth orbit, and use it to push tiny probes to 26 percent the speed of light. Though the endeavour may raise a few eyebrows, it relies on well-established science – and recent technological breakthroughs have put it within our reach.

The problem with "Bring Your Own Fuel"

It took a short 66 years for humanity to go from the first powered flight to landing a man on the Moon, and according to NASA's tentative schedule it will be another 66 (in 2035) before the first human steps are taken on Mars. However, going from flags and footprints on the Red Planet to sending man or machine all the way to a nearby star would require a complete rethink of how rockets and probes travel through space.
The major issue with today's space propulsion technology is that it scales far too poorly to achieve anything close to interstellar speeds.
In space, fuel is used as reaction mass: in other words, the only way rockets and spacecraft can accelerate forward is by ejecting fuel backward as fast as possible. Unfortunately, this means that carrying more and more fuel along for the ride has quickly diminishing returns. The best example of this is on the launchpad, where fuel makes up well over 90 percent of the mass of a rocket. This is far from optimal as it means most of the thrust generated by the rocket goes to lifting fuel, not payload, off the ground.
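The diminishing returns fall out of the Tsiolkovsky rocket equation, which relates required propellant to the velocity change sought. A rough illustration, using an assumed ~4.5 km/s exhaust velocity typical of good chemical rockets:

```python
import math

def mass_ratio(delta_v_ms, exhaust_v_ms):
    """Tsiolkovsky rocket equation: liftoff mass over final (dry) mass."""
    return math.exp(delta_v_ms / exhaust_v_ms)

# Reaching low Earth orbit (~9.4 km/s of delta-v) with chemical propulsion:
print(round(mass_ratio(9_400, 4_500), 1))   # ~8.1, i.e. ~88% of liftoff mass is fuel

# Chasing 2 percent of light speed the same way: the required mass ratio
# overflows a float, so look at its number of digits instead.
digits = (6e6 / 4_500) / math.log(10)
print(round(digits))                        # a mass ratio ~579 digits long
```

Because the propellant requirement grows exponentially with the target speed, no amount of extra fuel makes chemical rockets interstellar; the exhaust velocity itself is the bottleneck.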
Today, Voyager 1 is the spacecraft farthest from us and the only one to have reached interstellar space. Nearly four decades after it left Earth as the vanguard of human exploration, the probe remains less than a light-day away from us and, were it pointed in the right direction, would take some 40 millennia to reach the closest star.
A study at the Keck Institute for Space Studies (KISS) found that if a deep space exploration probe were built today, it could only reach speeds three to four times faster than Voyager's. Newer technologies like efficient ion engines might fare somewhat better, but there is no indication that they will ever make the cut for interstellar travel.
In short, it seems the current approach to space propulsion has hit a wall. If humanity ever finds a way to reach another star, current signs indicate it's unlikely it will be by burning fuel to get there.

The case for giant lasers

If bringing fuel along is a no-go for interstellar exploration, the natural alternative could be to provide thrust from an external source.
Solar sails are a great example of external propulsion. They are, essentially, large and lightweight mirrors that generate thrust whenever photons coming from the Sun bounce off them. Over months and years, this minuscule force can slowly build up and accelerate a probe to high speeds.
Laser sails would work on the same principle, except they would receive photons from a powerful laser array (on the ground or in Earth orbit) rather than the Sun. Because laser beams are highly focused and perfectly synchronized, laser sails could receive an irradiation 100,000 times greater than the Sun's and reach astonishing speeds. But building a laser large enough – particularly in orbit – has long been thought to be a near-impossible task.
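The thrust itself is straightforward photon-momentum physics: a perfectly reflective sail feels a force F = 2P/c for incident beam power P. A naive illustration, ignoring beam divergence, sail absorption and relativistic effects (the 10 g wafer-scale probe mass is an assumption for illustration):

```python
c = 299_792_458.0  # speed of light (m/s)

def photon_thrust(power_w, reflectivity=1.0):
    """Radiation-pressure thrust on a sail: F = (1 + r) * P / c."""
    return (1 + reflectivity) * power_w / c

force_n = photon_thrust(70e9)   # a full 70 GW DE-STAR-4-class beam
print(round(force_n))           # ~467 N

# Non-relativistic time for an assumed 10 g probe to reach 26% of
# light speed under constant full thrust:
t_s = 0.26 * c * 0.010 / force_n
print(round(t_s / 60))          # on the order of half an hour
```

A few hundred newtons sounds tiny for a 70 GW beam, but applied to a gram-scale craft it amounts to tens of thousands of g of acceleration, which is why laser sails favor extremely light probes.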
Now, however, the team led by Professor Philip Lubin at UCSB has concluded that recent developments may have made this technology – and, in turn, interstellar travel – achievable over the next few decades.
"While a decade ago what we propose would have been pure fantasy, recent dramatic and poorly-appreciated technological advancements in directed energy have made what we propose possible, though difficult," says Lubin.

From fiction to reality

That somewhat obscure but key breakthrough was the development of modular arrays of synchronized high-power lasers, fed by a common "seed laser." The modularity removes the need for building powerful lasers as a single device, splitting them instead into manageable parts and powering the seed laser with relatively little energy.
Lockheed Martin has recently exploited this advance to manufacture powerful new weapons for the US Army. In March last year, the aerospace and defense giant demonstrated a 30 kW laser weapon (and its devastating effect on a truck). By October, the laser's power had already doubled to 60 kW and offered the option to reach 120 kW by linking two modules using off-the-shelf components.
The UCSB researchers refer to their own planned arrays as DE-STAR (Directed Energy System for Targeting of Asteroids and exploRation), with a trailing number to denote their size. A DE-STAR-1 would be a square array 10 meters (33 ft) per side and about as powerful as Lockheed's latest; at the other end of the spectrum, a DE-STAR-4 would be a 70 GW array covering a massive area of 100 square kilometers (39 square miles).
"The size scale is set by the basic physics if we are at a wavelength of 1 micron and the goal is to propel small spacecraft to relativistic speeds," Lubin told Gizmag. "If we get to shorter wavelengths with the laser then we will be able to build a smaller array. The baseline is 1 micron and the needed array size is 1-10 km depending on the performance desired."
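The "basic physics" Lubin refers to is diffraction: a transmitting aperture of size D at wavelength λ produces a beam that spreads with a half-angle of roughly λ/D, which caps how far away the array can keep the whole beam on a small sail. A rough sketch, where the 1 m sail diameter is an assumed value for illustration:

```python
AU = 1.495978707e11  # meters per astronomical unit

def max_focus_range_m(aperture_m, wavelength_m, sail_diameter_m):
    # Diffraction-limited divergence ~ wavelength / aperture (radians);
    # the beam spills past the sail once range * divergence exceeds the sail radius.
    divergence = wavelength_m / aperture_m
    return (sail_diameter_m / 2.0) / divergence

r = max_focus_range_m(10_000.0, 1e-6, 1.0)  # 10 km array, 1 micron light, 1 m sail
print(f"{r / AU:.3f} AU")  # prints: 0.033 AU
```

Even the largest array can only push a small sail over a few percent of the Earth-Sun distance, which is why shorter wavelengths allow a smaller array, and why the acceleration must be intense and brief.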
Because the atmosphere would interfere with the laser signal, the arrays would be best assembled in low Earth orbit rather than on the ground. Lubin stresses that even a relatively modest orbital array could offer interesting propulsion capabilities to CubeSats and nanosatellites headed beyond Earth orbit, and that useful initial tests would still be conducted on the ground first on one-meter (3-ft) arrays, gradually ramping up toward assembling small arrays in orbit.
While even a small laser array could accelerate probes of all sizes, the larger 70-GW system would of course be the most powerful, capable of generating enough thrust to send a CubeSat probe to Mars in eight hours – or a much larger 10,000-kg (22,000-lb) craft to the same destination in a single month, down from the typical six to eight months.
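A quick sanity check on the Mars figure: coasting at two percent of light speed for eight hours covers a distance that sits comfortably within the typical Earth-Mars range (roughly 0.5 to 2.5 AU depending on orbital alignment).

```python
C = 299_792_458.0           # speed of light, m/s
AU = 1.495978707e11         # meters per astronomical unit

dist = 0.02 * C * 8 * 3600  # eight hours of coasting at 2% of c
print(f"{dist / AU:.2f} AU")  # prints: 1.15 AU
```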
"There's nothing that prevents us from doing this, it's just a matter of will," says Lubin. "The technology looks like it's in place, but launching enough elements in space is a problem. The mass in orbit is 100 times the ISS [International Space Station] mass, so it's significant but not completely crazy over a 50-year timescale."

Approaching light speed

Sending a CubeSat to Mars in only eight hours would mean reaching two percent of the speed of light. That is already far beyond our current capabilities, yet such a probe would still take about two centuries to reach Alpha Centauri. To reach another star in years rather than centuries, spacecraft would need to be designed from the ground up to shed as much mass as possible.
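The two-century figure follows directly from the distances involved: Alpha Centauri lies about 4.37 light-years away, and a probe coasting at a fixed fraction of c simply takes that many years divided by the fraction (acceleration time is negligible by comparison).

```python
def cruise_years(distance_ly, fraction_of_c):
    # Light covers one light-year per year, so cruise time
    # scales inversely with the fraction of light speed achieved.
    return distance_ly / fraction_of_c

print(round(cruise_years(4.37, 0.02)))  # prints: 218
```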
To that end, a long-term objective of Lubin and his team is to develop "wafer-scale spacecraft" weighing just a few grams each, complete with a small laser sail for propulsion and long-distance communication.
"Photonic propulsion can be used at any mass scale, but lower mass systems are faster," Lubin tells Gizmag. "Wafer scale spacecraft is just one extremely low mass case. This is a new area and one with a tremendous amount of potential, but it is in its nascent phase. The core technologies already exist for the relevant miniaturization to proceed for some types of spacecraft."
Such probes would combine nanophotonics, a miniaturized radioisotope thermoelectric generator supplying 1 W of power, nanothrusters for attitude adjustment, thin-film supercapacitors for energy storage, and even a small camera.
Equipped with a laser sail just under one meter (3 ft) in diameter, such a spacecraft could be propelled by a 70 GW laser array to about 26 percent of the speed of light in about 10 minutes and reach Alpha Centauri in only 15 years.
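Reaching that speed in ten minutes implies a brutal acceleration – survivable only because a gram-scale wafer has no moving parts or fragile payload. A rough constant-acceleration estimate:

```python
C = 299_792_458.0   # speed of light, m/s
G0 = 9.80665        # standard gravity, m/s^2

v = 0.26 * C        # target speed
t = 10 * 60.0       # acceleration window, seconds
a = v / t           # assumes constant acceleration for simplicity
print(f"{a / G0:,.0f} g")  # prints: 13,247 g
```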
[Image: The relationship between the size of the array, the mass of the spacecraft and the achievable speed]
With the orbiting laser array acting as a giant receiver, and using its mirror as a transmitter, the tiny spacecraft could even periodically send data and low-resolution pictures back to Earth.
"The laser would operate in a burst mode where energy is stored on board and the laser is turned on periodically at mission critical times (such as picture taking)," says Lubin. "The laser is nominally a 1 watt system with a burst data rate of about 1 kbs at Alpha Centauri when only the 10 cm wafer optics is used, or about 100 kbs if we use the 1 m reflector as a part of the laser communications system."

Intermediate targets

Part of the advantage with the modular approach to building powerful lasers is that even smaller, cheaper arrays built along the way can prove useful. Luckily, there's no dearth of interesting and unexplored territories within our solar system – destinations that should keep us engaged and motivated to ramp up the size of the laser system so we can gradually unlock more and more capabilities.
"We will have many targets, including the Solar System plasma and magnetic fields and its interface with the ISM [interstellar medium], the heliopause and heliosheath, asteroids, the Oort cloud and the Kuiper belt," Lubin notes.
Among the many, one target jumps out as perhaps the most worthwhile – the spot known as the solar gravitational lens focus. This is the region, between 500 and 700 AU (Sun-Earth distances) from the Sun, where a telescope could use the Sun itself as a gravitational lens to image distant exoplanets in unprecedented detail. While exoplanets have so far only ever been seen as single pixels, from this vantage point an exoplanet 100 light-years away could be imaged at a resolution of one pixel per square kilometer.
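To see why only a gravitational lens could manage this, consider the angular resolution required to resolve a 1 km feature at 100 light-years:

```python
LY = 9.4607e15                   # meters per light-year

theta = 1_000.0 / (100 * LY)     # radians subtended by 1 km at 100 ly
print(f"{theta:.1e} rad")        # prints: 1.1e-15 rad
# For comparison, Hubble resolves roughly 2e-7 rad -- about
# eight orders of magnitude coarser than what's needed here.
```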
If, however, the goal is to stop at a specific destination (say, Mars), we'll have to resort to hybrid probes that are accelerated by the laser but also carry their own fuel to slow down on arrival – because the alternative could simply be too challenging and expensive.
"A second phased laser array at the destination could be used in a 'ping-pong' arrangement to allow acceleration then deceleration, then the opposite to come back," Lubin tells us. "For Mars this makes sense in the long run, but even Mars would represent a significant challenge due to the difficulty of construction."

Reaching for the stars

For those probes light enough to accelerate to relativistic speeds, once past the Solar System there will be no obvious way for the spacecraft to slow down again and enter the orbit of another star. For that reason, the first interstellar missions would most likely be simple fly-bys.
Future options that might require a further upgrade of the laser array could involve sending a lightweight "mothership" that, upon approaching the target star, would eject hundreds of wafer-class probes in a grid layout for a thorough exploration of the system.
Building a gigawatt-grade laser array and gram-scale spacecraft would require a gargantuan economic and engineering effort. The saving grace is that the roadmap is incremental and sets clear intermediate objectives along the way.
"We are continuing lab-based experiments and have proposals in to expand to the next level," Lubin tells us. "We want to start the roadmap by building a class 0 [1-meter, 1 kW array] and then a class 1 [10-meter, 100 kW] system in the next five years."
Perhaps, in the end, this project will simply prove too taxing to ever see the light of day, and the costs and technological barriers too high to surmount. Still, the very notion that interstellar travel is now credibly achievable by relying on well-established science is food for thought. Challenging as it may be, reaching a foreign star before the end of the century is now a legitimate notion for scientists and engineers – not just science fiction fans.
Within a 20 light-year radius of Earth there are over 150 stars and 17 known planetary systems, 14 of which appear capable of hosting planets in the habitable zone. Perhaps, just as the images of the Moon landing inspired a new generation of scientists and engineers, the knowledge that all those foreign worlds could be within reach will inspire humanity's push to reach for the stars.

Tuesday, 29 March 2016

Faster, cheaper Chromebook Pixel unveiled

Two years ago, Google launched the Chromebook Pixel, a premium Chrome OS notebook designed to show off its cloud-based operating system. Now the second-generation Pixel is here, bringing better performance at a lower cost.
Physically, the 2015 Chromebook Pixel is a lot like the 2013 edition. The basic dimensions, the 12.85-inch, 2,560 x 1,700 touchscreen and most of the hardware design are the same. The new laptop is a touch lighter, however, and comes with two of the new USB-C ports (most recently seen on the latest Apple MacBook).
Unlike Apple's super-slim new laptop, the Pixel also gets two USB 3.0 ports and an SD card reader thrown into the mix. Google is promising better battery life this time around as well, quoting 12 hours of normal use, but we'll have to wait until we get our hands on one to see if that claim stands up.
The biggest changes are under the hood, with a choice of a 2.2 GHz Intel i5 processor and 8 GB of RAM, or a 2.4 GHz i7 processor and 16 GB of RAM. The graphics get a bump up to Intel HD 5500 and you can choose from 32 GB or 64 GB of internal storage. LTE connectivity has been dropped, but Wi-Fi and Bluetooth are still present and correct.
With the cheapest configuration retailing at US$999 and the top-end specification costing $1,299, this is still an expensive laptop, albeit a step down from the original Pixel (which started at $1,299). And though it's come a long way in the last couple of years, there's still the Chrome OS to consider.

Living on the web


We've been following Chrome OS since the first Chromebooks started appearing. Back in 2011, the idea of doing everything online with a minimum of local storage – from music listening to photo editing – felt like a concept the world wasn't quite ready for. Today that idea feels a lot more palatable, as Wi-Fi access becomes more ubiquitous, and online software (from Google Drive to Spotify) becomes more advanced. Some apps, like Drive and Gmail, now offer limited offline functionality too.
Chromebooks (and Chromeboxes) have also proved popular with students and those wanting a cheap, lightweight second computer. That's why the Pixel feels like something of an anomaly. It ignores one of the main advantages of Chromebooks and goes for a price level more in line with what you'd expect to pay for a quality Windows laptop or a MacBook.
As in 2013 though, Google is positioning the Pixel as a reference device that shows the Chrome OS in the best light possible. It's not so much a notebook for the masses as a flagship device only a few Chrome OS enthusiasts are likely to consider. For everyone else there are plenty of capable yet inexpensive Chromebooks to choose from.
Those internal specs are likely to be overkill for an operating system that's essentially a browser with a few add-ons, but you're unlikely to run into difficulties opening dozens of tabs or streaming HD videos from YouTube. Of course, one of the plus sides of having little or no software installed locally is that most of the processing and updating is done in the cloud, keeping your laptop streamlined and speedy.