Boeing-Safran joint venture for auxiliary power units has a name: Initium Aerospace

12:24pm, 13th February, 2019
The tail section of a FedEx 777 Freighter ecoDemonstrator flight-test airplane has been opened to reveal its auxiliary power unit, which contains a 3-D-printed titanium part. (Boeing Photo / Paul McElroy)

The 50-50 joint venture created by Boeing and Safran to build auxiliary power units for airplanes now has a name: Initium Aerospace.

Auxiliary power units, or APUs, are onboard engines that are used primarily to start an aircraft’s main engines. They also power aircraft systems on the ground when the main engines aren’t running, and can boost onboard power during flight if necessary.

Boeing’s APUs are currently built by Honeywell and Pratt & Whitney, but Safran — which is headquartered in France — is raising its profile in the market. Initium’s rise is also part of Boeing’s drive to have a more vertically integrated supply chain and boost its services business.

“Initium” comes from the Latin word for “beginning” or “start,” which refers to an APU’s function as well as the thrust of the Boeing-Safran initiative.

“This is an exciting milestone as we bring together the best of both companies to design and build an advanced APU that will create more lifecycle value for our customers,” said Stan Deal, president and CEO of Boeing Global Services. “This is further proof that Boeing is making strategic investments that strengthen our vertical capabilities and continue to expand our services portfolio.”

The creation of Initium Aerospace follows the announcement of the joint venture last June and the subsequent regulatory review.

“I would like to congratulate everybody at Boeing and Safran who contributed to the creation of this new joint venture,” Safran CEO Philippe Petitcolin said. “Initium Aerospace is swiftly capitalizing on the vast expertise of both partners to provide state-of-the-art APUs and innovative solutions to customers. … We look forward to presenting the first demonstrator engine to the market.”

The initial team consists of employees from the two parent companies, led by CEO Etienne Boisseau. Initial design and engineering work is being done in San Diego.

Safran currently supplies a wide range of components to Boeing. It’s a partner with GE in the CFM International joint venture that produces LEAP-1B engines for the 737 MAX. Boeing and Safran also are partners in MATIS, a joint venture in Morocco that produces wiring products for several airframe and engine companies.
Xnor shrinks AI to fit on a solar-powered chip, opening up big frontiers on the edge

9:50am, 13th February, 2019
Xnor.ai machine learning engineer Hessam Bagherinezhad, hardware engineer Saman Naderiparizi and co-founder Ali Farhadi show off a chip that uses solar-powered AI. (GeekWire Photo / Alan Boyle)

It was a big deal two and a half years ago when researchers squeezed Xnor.ai’s artificial intelligence software onto a Raspberry Pi device the size of a candy bar — and now it’s an even bigger deal for Xnor.ai to re-engineer its artificial intelligence software to fit onto a solar-powered computer chip.

“To us, this is as big as when somebody invented a light bulb,” Xnor.ai’s co-founder, Ali Farhadi, said at the company’s Seattle headquarters.

Like the candy-bar-sized, Raspberry Pi-powered contraption, the camera-equipped chip flashes a signal when it sees a person standing in front of it. But the chip itself isn’t the point. The point is that Xnor.ai has figured out how to blend stand-alone, solar-powered hardware and edge-based AI to turn its vision of “artificial intelligence at your fingertips” into a reality.

“This is a key technology milestone, not a product,” Farhadi explained.

Shrinking the hardware and power requirements for AI software should greatly expand the range of potential applications, Farhadi said.

“Our homes can be way smarter than they are today. Why? Because now we can have many of these devices deployed in our houses,” he said. “It doesn’t need to be a camera. We picked a camera because we wanted to show that the most expensive algorithms can run on this device. It might be audio. … It might be a way smarter smoke detector.”

Outside the home, Farhadi can imagine putting AI chips on stoplights, to detect how busy an intersection is at a given time and direct the traffic flow accordingly. AI chips could be tethered to balloons or scattered in forests, to monitor wildlife or serve as an early warning system for wildfires.

Xnor’s solar-powered AI chip is light enough to be lofted into the air on a balloon for aerial monitoring. In this image, the chip is highlighted by the lamp in the background. (Xnor.ai Photo)

Sophie Lebrecht, Xnor.ai’s senior vice president of strategy and operations, said the chips might even be cheap enough, and smart enough, to drop into a wildfire or disaster zone and sense where there are people who need to be rescued. “That way, you’re only deploying resources in unsafe areas if you really have to,” she said.

The key to the technology is reducing the required power so that it can be supplied by a solar cell that’s no bigger than a cocktail cracker. That required innovations in software as well as hardware. “We had to basically redo a lot of things,” machine learning engineer Hessam Bagherinezhad said.

Xnor.ai’s head of hardware engineering, Saman Naderiparizi, worked with his colleagues to figure out a way to fit the software onto an FPGA chip that costs a mere $2, and he says it’s possible to drive the cost down to less than a dollar by going to ASIC chips. It takes only on the order of milliwatts of power to run the chip and its mini-camera, he told GeekWire.

“With technology this low power, a device running on only a coin-cell battery could be always on, detecting things every second, running for 32 years,” Naderiparizi said in a news release.

That means there’d be no need to connect AI chips to a power source, replace their batteries or recharge them. And the chips would be capable of running AI algorithms on standalone devices, rather than having to communicate constantly with giant data servers via the cloud. If the devices need to pass along bits of data, they could transmit just those bits, rather than streaming raw data to the cloud.
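How can a milliwatt-class chip run for decades on a coin cell? The arithmetic below is a rough sketch of the duty-cycling logic behind such claims; the peak power, inference time and sleep current are illustrative assumptions, not Xnor.ai’s published specifications.

```python
# Back-of-envelope check on duty-cycled, battery-powered inference.
# Every constant here is an assumption chosen for illustration.

CR2032_CAPACITY_WH = 0.675    # typical coin cell: ~225 mAh at 3.0 V
PEAK_POWER_W = 0.002          # assumed draw while an inference runs (~2 mW)
INFERENCE_TIME_S = 0.001      # assumed time per detection (1 ms)
SLEEP_POWER_W = 10e-9         # assumed deep-sleep draw between detections
PERIOD_S = 1.0                # "detecting things every second"

# Average power is energy per cycle divided by cycle length.
energy_per_cycle_j = (PEAK_POWER_W * INFERENCE_TIME_S
                      + SLEEP_POWER_W * (PERIOD_S - INFERENCE_TIME_S))
avg_power_w = energy_per_cycle_j / PERIOD_S

battery_energy_j = CR2032_CAPACITY_WH * 3600
lifetime_years = battery_energy_j / avg_power_w / (3600 * 24 * 365)

print(f"average draw: {avg_power_w * 1e6:.1f} microwatts")   # ~2.0
print(f"coin-cell lifetime: {lifetime_years:.0f} years")     # ~38
```

The point isn’t the exact numbers: it’s that a chip drawing milliwatts for a millisecond each second averages out to microwatts, which is what puts multi-decade battery life, or a cracker-sized solar cell, within reach for an always-on device.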
That edge-computing approach is likely to reduce the strain of what could turn out to be billions of AI-enabled devices.

“The carbon footprint of data centers running all of those algorithms is a key issue,” Farhadi said. “And with the way AI is progressing, it will be a disastrous issue pretty soon, if we don’t think about how we’re going to power our AI algorithms. Data centers, cloud-based solutions for edge-use cases are not actually efficient ways, but other than efficiency, it’s harming our planet in a dangerous way.”

Farhadi argues that cloud-based AI can’t scale as easily as edge-based AI. “Imagine when I put a camera or sensor at every intersection of this city. There is no cloud that is going to handle all that bandwidth,” he said. “Even if there were, back-of-the-envelope calculations would show that my business will go bankrupt before it sees the light of day.” (A rough version of that calculation appears at the end of this story.)

The edge approach also addresses what many might see as the biggest bugaboo about having billions of AI bugs out in the world: data privacy. “I don’t want to put a camera in my daughter’s bedroom if I know that the picture’s going to end up in the cloud,” Farhadi said.

Xnor.ai was spun out of the Allen Institute for Artificial Intelligence, or AI2, only a couple of years ago, and the venture is moving ahead with millions of dollars of financial backing from Madrona Venture Group, AI2 and other investors.

Farhadi has faith that the technology Xnor.ai is currently calling “solar-powered AI” will unlock still more commercial frontiers, but he can’t predict whether the first applications will pop up in the home, on the street or off the beaten track.

“It will open up so many different things, the exact same thing when the light bulb was invented: No one knew what to do with it,” he said. “The technology’s out there, and we’ll figure out the exact products.”
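Farhadi’s bandwidth quip is easy to sanity-check. Here is a rough version of that back-of-the-envelope calculation, with every input an assumption made up for illustration:

```python
# Compare streaming raw video to the cloud vs. sending edge-detected events.
# All figures are illustrative assumptions, not measurements.

INTERSECTIONS = 1_000            # order of magnitude for a mid-sized city
VIDEO_BITRATE_BPS = 2_000_000    # ~2 Mbps per compressed camera stream
EVENT_BYTES = 100                # a tiny "person detected" message
EVENTS_PER_HOUR = 60             # assumed event rate per intersection

cloud_bps = INTERSECTIONS * VIDEO_BITRATE_BPS
edge_bps = INTERSECTIONS * EVENT_BYTES * 8 * EVENTS_PER_HOUR / 3600

print(f"stream everything: {cloud_bps / 1e9:.1f} Gbps, around the clock")
print(f"edge events only:  {edge_bps / 1e3:.1f} kbps")
print(f"reduction: ~{cloud_bps / edge_bps:,.0f}x")
```

Under these assumptions, on-device detection cuts the backhaul by roughly five orders of magnitude, which is the scaling argument in a nutshell.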
Paul Allen’s Petrel research vessel finds the USS Hornet, 76 years after sinking

11:41am, 12th February, 2019
This 5-inch gun is part of the wreckage from the historic USS Hornet. (Photo courtesy of Paul G. Allen’s Vulcan Inc.)

Chalk up another historic shipwreck discovery for the R/V Petrel, the research vessel funded by the late Seattle billionaire Paul Allen: This time it’s the USS Hornet, the World War II aircraft carrier that was sunk by Japanese forces in 1942.

The Hornet is best-known as the launching point for the Doolittle Raid, the first airborne attack on the Japanese home islands after Pearl Harbor and the United States’ entry into the war. Led by U.S. Army Lt. Col. Jimmy Doolittle, the raid in April 1942 provided a boost to American morale and put Japan on alert about our covert air capabilities. Two months later, the Hornet was one of three U.S. carriers that surprised and sank four Japanese carriers during the tide-turning Battle of Midway.

The Hornet was lost near the Solomon Islands in the South Pacific on Oct. 26, 1942, during the Battle of Santa Cruz. The carrier weathered a withering barrage from Japanese dive bombers and torpedo planes — but the crew eventually had to abandon ship, leaving the Hornet to its sinking. About 140 of the Hornet’s nearly 2,200 sailors and air crew members were lost.

“With the loss of Hornet and serious damage to Enterprise, the Battle of Santa Cruz was a Japanese victory, but at an extremely high cost,” said retired Rear Adm. Samuel Cox, director of the U.S. Navy’s Naval History and Heritage Command. “About half the Japanese aircraft engaged were shot down by greatly improved U.S. Navy anti-aircraft defenses. As a result, the Japanese carriers did not engage again in battle for almost another two years.”

The Petrel took on the search for the Hornet as part of its mission to investigate scientific phenomena and historical mysteries in the South Pacific. The 250-foot research vessel’s previous shipwreck finds include the USS Lexington, the USS Indianapolis, the USS Juneau and other lost warships. The ship’s latest expedition took place in January.

“We had Hornet on our list of WWII warships that we wanted to locate because of its place in history as an aircraft carrier that saw many pivotal moments in naval battles,” said Robert Kraft, who heads the Petrel project as director of subsea operations for Vulcan. “Paul Allen was particularly interested in historically significant and capital ships, so this mission and discovery honor his legacy.”

The Petrel’s 10-person expedition team zeroed in on the Hornet’s position by piecing together data from national and naval archives that included official deck logs and action reports from other ships engaged in the battle. Positions and sightings from nine other U.S. warships in the area were plotted on a chart to generate the starting point for the search grid.

The discovery of the Hornet was made during the first dive mission of the Petrel’s autonomous underwater vehicle, at a depth of nearly 17,500 feet, and confirmed by video footage from the research ship’s remotely operated vehicle.

CBS News caught up with Richard Nowatzki, a 95-year-old California resident who was a gunner on the Hornet, and showed him video of the aft gun that he operated. “I used to stand on the right side of that gun, and that’s where my equipment was,” Nowatzki told CBS. “If you go down to my locker, there’s 40 bucks in it. You can have it.”

That might be tough: The precise location of the wreck is not being disclosed, to protect the underwater gravesite from being disturbed any further.
White House initiative will boost artificial intelligence research and data-sharing

12:09am, 11th February, 2019
Artificial intelligence could open the door to applications in a variety of technological fields. (NIST Illustration / N. Hanacek)

The White House is moving forward with the American AI Initiative, a set of policies aimed at focusing the full resources of the federal government on the frontiers of artificial intelligence. President Donald Trump is due to sign an executive order launching the initiative on Monday.

Among its provisions is a call for federal agencies to prioritize AI in their research and development missions, and to prioritize fellowship and training programs to help American workers gain AI-relevant skills. The initiative also directs agencies to make federal data, models and computing resources more available to academic and industry researchers, “while maintaining the security and confidentiality protections we all expect.”

“This action will drive our top-notch AI research toward new technological breakthroughs and promote scientific discovery, economic competitiveness and national security,” the White House said in a statement.

As a trust-building measure, federal agencies are being asked to establish regulatory guidelines for AI development and use across different types of technology and industrial sectors. The National Institute of Standards and Technology is being given the lead role in the development of technical standards for reliable, trustworthy, secure and interoperable AI systems. The White House says an action plan will be developed “to preserve America’s advantage in collaboration with our international partners and allies.”

“In his State of the Union address, President Trump committed to investing in cutting-edge industries of the future,” Michael Kratsios, deputy assistant to the president for technology policy, said in a prepared statement. “The American AI Initiative follows up on that promise with decisive action to ensure AI is developed and applied for the benefit of the American people.”

This week’s action comes amid rising concern about American competitiveness in artificial intelligence research and development. China and the European Union are both pushing ahead with multibillion-dollar AI research and development programs. In response, the White House has convened an AI summit and set up a select committee on AI, and a national security commission on artificial intelligence has been created with Amazon’s Andy Jassy and Microsoft’s Eric Horvitz among its members.

Amazon and Microsoft are among the hundreds of companies that are making AI a high priority in R&D, resulting in well-known products such as Amazon’s Alexa and Microsoft’s Cortana AI voice assistants (as well as similar AI agents offered by Apple and Google). AI capabilities such as machine learning and computer vision are also key to the development of autonomous vehicles and other emerging technologies.

Stacey Dixon, director of the Intelligence Advanced Research Projects Activity, or IARPA, said AI applications are also highly relevant to national security. “Understanding imagery is one of the most evident opportunities for us to use AI, due to the sheer quantity of data to be analyzed and AI’s demonstrated effectiveness at image categorization,” she said. “However, IARPA also develops AI to address other intelligence challenges, including human language transcription and translation, facial recognition in real-world environments, sifting through videos to find nefarious activities, and increasing AI’s resilience to many kinds of attacks by adversaries.”

Those AI tools could be used for nefarious purposes as well, however. In a report issued last year, a consortium of research groups including OpenAI and the University of Oxford’s Future of Humanity Institute called on policymakers to collaborate closely with researchers to investigate, prevent and mitigate potentially malicious uses of AI.
Paul Allen’s deepest legacy: Expedition surveys wreck of Japanese battleship Hiei

9:34am, 7th February, 2019
Guns of the Japanese battleship Hiei lie on the bottom of the Pacific. (© Navigea Ltd. / R/V Petrel via Vulcan)

A wide-ranging shipwreck survey funded by Microsoft co-founder Paul Allen is continuing after his death, and the latest discovery focuses on the Japanese battleship Hiei, which sank in the South Pacific during the Battle of Guadalcanal in 1942.

Japanese researchers recently reported the likely location of the wreck, sparking a voyage by the Research Vessel (R/V) Petrel to check out the site and get the first on-the-scene underwater views.

R/V Petrel and its crew, led by Robert Kraft, have been locating and documenting historic shipwrecks for years as part of a scientific initiative funded by Allen, who died last October. Among the best-known finds are the USS Lexington, the USS Indianapolis and the USS Juneau. Camera-equipped underwater robots have documented the wrecks of naval vessels fielded by Japan, Italy and Australia as well.

The Petrel explored the battleship Hiei’s remains on the Pacific seabed on Jan. 31. The Hiei’s significance stems in part from its status as the first Japanese battleship to be sunk by enemy forces during World War II, on Nov. 14, 1942. Five months earlier, Japan’s imperial navy lost the heavy cruiser Mikuma and four fleet carriers during the Battle of Midway, but no battleships.

“Hiei was crippled by a shell from the USS San Francisco on the 13th, which disabled the steering gear,” the Petrel team reported. “For the next 24 hours it was attacked by multiple sorties of torpedo, dive and B-17 bombers. Hiei sank sometime in the evening with a loss of 188 of her crew. Hiei now lies upside down in 900 plus meters [3,200 feet] of water Northwest of Savo Island.”

Photos from the expedition show the Hiei’s 127mm guns strewn in the debris field, a crate of 25mm anti-aircraft shells lying on the capsized hull, a breach that was apparently ripped in the hull during the naval battle, and an eerie view of portholes peeking out from the rust-encrusted remains.

The visual evidence provides new clues to the scenario for the Hiei’s demise. Japanese broadcaster NHK quoted Kazushige Todaka, director of the maritime museum in the Japanese coastal city of Kure, as saying that about a third of the ship’s hull appears to be missing. The imagery suggests that an onboard explosion caused the sinking, Todaka said.

“The discovery shows a tragedy of the war, and I believe it also serves to remind people that the history of the war is real, not a story,” Todaka told NHK.
Boston Dynamics gears up to sell robot dogs – and improves android’s running game

3:45pm, 12th May, 2018
Boston Dynamics founder Marc Raibert points out the cameras on his company’s SpotMini robotic dog, including a “butt-cam.” (TechCrunch via YouTube)

Cue the ominous music: Boston Dynamics says it’s putting its scary SpotMini robotic dog on sale next year. The company’s founder, Marc Raibert, made the announcement at a TechCrunch robotics event at the University of California at Berkeley.

“SpotMini is in pre-production now. We’ve built 10 units that’s a design that’s close to a manufacturable design. We built them in-house, but with help from contract, manufacturing-type people,” Raibert said.

“We have a plan later this year to build 100 with contract manufacturers,” he said, “and that’s the prelude to getting them in a higher-rate production which we hope to start in the middle of next year.”

Raibert declined to say what the price will be. Potential applications could range from surveillance to office deliveries to home chores.

The SpotMini sales announcement came a day after Boston Dynamics released a video that shows the four-legged robot taking a six-minute jog through a real-world office and lab facility (condensed into a three-minute clip). The real trick to SpotMini’s trek, including its travel up and down some sketchy-looking staircases, is that it’s done autonomously.

“Before the test, the robot is manually driven through the space so it can build a map of the space using visual data from cameras mounted on the front, back and sides of the robot,” Boston Dynamics explained. “During the autonomous run, SpotMini uses data from the cameras to localize itself in the map and to detect and avoid obstacles. Once the operator presses ‘Go’ at the beginning of the video, the robot is on its own.”

Another video that was released on Thursday shows Boston Dynamics’ Atlas android jogging through a field and jumping over a log. The robot demonstrates a sense of balance on a par with its back-flipping performance from last November.

When November’s video hit YouTube, Tesla CEO Elon Musk sounded a warning about where this all is going:

This is nothing. In a few years, that bot will move so fast you’ll need a strobe light to see it. Sweet dreams…

— Elon Musk (@elonmusk)

It may be mere coincidence that Boston Dynamics released its latest videos while government officials, industry executives and researchers gathered at the White House for a summit on artificial intelligence. But this week’s developments seem well-suited for the opening scenes of a sci-fi thriller.

Sweet dreams, Elon …

This is an updated version of a report that was first published at 4:59 p.m. PT May 10.
Boston Dynamics’ Atlas robot learns to run and jump, while robot dog gets smarter

7:00pm, 10th May, 2018
Boston Dynamics’ Atlas robot runs through a field. (Boston Dynamics via YouTube)

Cue the ominous music … again: Boston Dynamics’ latest videos are likely to spark more nightmares for tech billionaire Elon Musk and others worried about the rise of the robots.

One new video shows Boston Dynamics’ Atlas android robot jogging through a field and jumping over a log. The other shows the doglike SpotMini robot taking a six-minute jog through a real-world office and lab facility (condensed into a three-minute clip).

The real trick to SpotMini’s trek, including its travel up and down some sketchy-looking staircases, is that it’s done autonomously.

“Before the test, the robot is manually driven through the space so it can build a map of the space using visual data from cameras mounted on the front, back and sides of the robot,” Boston Dynamics explained. “During the autonomous run, SpotMini uses data from the cameras to localize itself in the map and to detect and avoid obstacles. Once the operator presses ‘Go’ at the beginning of the video, the robot is on its own.”

Atlas, meanwhile, demonstrates a sense of balance on a par with its back-flipping performance from last November. When that video hit YouTube, Tesla CEO Elon Musk sounded a warning about where this all is going:

This is nothing. In a few years, that bot will move so fast you’ll need a strobe light to see it. Sweet dreams…

— Elon Musk (@elonmusk)

It may be mere coincidence that Boston Dynamics released its latest videos while government officials, industry executives and researchers gathered at the White House for a summit on artificial intelligence. But today’s developments seem perfectly suited for the opening scenes of a sci-fi thriller.
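Boston Dynamics hasn’t published details beyond the map-then-localize description quoted above, but that workflow is a classic “teach and repeat” pattern in robotics. The sketch below is a generic, hypothetical Python/OpenCV illustration of the core idea (store visual features during a manually driven teach pass, then match live frames against the stored keyframes), not the company’s actual system.

```python
# Generic "teach and repeat" visual localization sketch (illustrative only).
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def build_map(teach_frames):
    """Teach pass: store ORB descriptors for each keyframe along the route."""
    keyframes = []
    for idx, frame in enumerate(teach_frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, descriptors = orb.detectAndCompute(gray, None)
        if descriptors is not None:
            keyframes.append((idx, descriptors))
    return keyframes

def localize(frame, keyframes):
    """Repeat pass: return the keyframe index that best matches the live view."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return None
    best_idx, best_score = None, 0
    for idx, map_descriptors in keyframes:
        matches = matcher.match(descriptors, map_descriptors)
        score = sum(1 for m in matches if m.distance < 40)  # strong matches only
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx  # approximate position along the taught route
```

A real robot would fuse this kind of place recognition with odometry, depth sensing and footstep planning to detect and avoid obstacles; the sketch shows only the map-and-match core.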
White House convenes AI summit and sets up advisory panel on artificial intelligence

4:30pm, 10th May, 2018
Michael Kratsios, who’s currently in charge of the White House’s Office of Science and Technology Policy, addresses scores of executives, experts and officials at a White House summit focusing on artificial intelligence. (OSTP via Twitter)

The White House brought together scores of industry representatives for a summit focusing on artificial intelligence and its policy implications today — including representatives from Amazon, Microsoft, Google and Facebook — and set up an advisory panel of government officials to assess AI’s impact.

The Select Committee on Artificial Intelligence will advise the White House on AI research and development priorities, and will help forge partnerships involving government agencies, researchers and the private sector.

“In the summer of 1956, a dozen American scientists gathered on Dartmouth’s campus with the goal to ‘find out how to make machines solve the kinds of problems now reserved for humans.’ Now, nearly 62 years later, the age of artificial intelligence is here, and with it the hope of better lives for the American people,” Michael Kratsios, deputy assistant to the president for technology policy and acting head of the Office of Science and Technology Policy, said in prepared remarks provided to Nextgov.

The committee is to be housed within the National Science and Technology Council and chaired by OSTP leadership. According to Nextgov and other news outlets, committee members also include:

Walter Copan, undersecretary of commerce for standards and technology, and director of the National Institute of Standards and Technology.

Mike Griffin, undersecretary of defense for research and engineering.

Paul Dabbar, undersecretary for science at the U.S. Department of Energy.

France Cordova, director of the National Science Foundation.

Peter Highnam, director of the Defense Advanced Research Projects Agency, or DARPA.

Jason Matheny, director of the Intelligence Advanced Research Projects Activity.

There’ll also be representatives from the National Security Council, the Office of the Federal Chief Information Officer and the Office of Management and Budget. The makeup reflects the model set by the White House’s National Space Council, which was re-established by the Trump administration last year.

In a fact sheet, the White House pointed to a variety of AI-related initiatives conducted over the past year, including international talks on AI innovation and programs to boost STEM education and apprenticeships.

Last year, Treasury Secretary Steven Mnuchin downplayed the potential impact of AI in an interview. He said it would be “50 or 100 more years” before the U.S. had to worry about the impact of automation on jobs. Kratsios took the concern more seriously in today’s prepared remarks. “To a certain degree, job displacement is inevitable,” he said. “But we can’t sit idle, hoping eventually the market will sort it out. We must do what Americans have always done: adapt.”

In 2016, the Obama White House convened a series of workshops on AI and its implications, starting with a workshop in Seattle. That process resulted in a report calling for better information sharing, improved training to understand AI, targeted support of AI research and other policy initiatives.

During today’s summit, which was closed to the media, more than 100 researchers, government officials and industry executives met to discuss what they’re doing in the fields of artificial intelligence, robotics and automation.
In addition to the usual tech titans, the guest list included representatives from Boeing, Mastercard, Ford, Land O’Lakes and United Airlines. Amazon said it would be represented by Rohit Prasad, vice president and head scientist for the Alexa AI program.

Among the topics on the agenda are:

Concerns about privacy, sparked by the recent scandal focusing on Cambridge Analytica’s use of Facebook data for political purposes.

Concerns about America’s international competitiveness in AI research and development, stoked by multibillion-dollar initiatives in China and Europe.

Concerns about regulatory requirements for AI.

Bloomberg News reported that Kratsios promised a hands-off regulatory approach in his prepared remarks. “We didn’t cut the lines before Alexander Graham Bell made the first telephone call,” Kratsios said. “We didn’t regulate flight before the Wright Brothers took off at Kitty Hawk.”
Fossil hunters show off Triassic treasures from Antarctica at the Burke Museum

1:00pm, 10th May, 2018
Christian Sidor, Burke Museum curator of vertebrate paleontology and a biology professor at the University of Washington, recounts the discovery of Triassic fossils. (GeekWire Photo / Alan Boyle)

More than 100 fossil specimens at Seattle’s Burke Museum provide a fresh window into how life thrived in Antarctica about 250 million years ago, thanks to global warming.

The slabs of rock document a time in the early Triassic Period when temperatures got so warm that Earth’s tropics were a dead zone. The flip side of that climate equation is that Antarctica, which was still connected to what’s now Africa back then, was temperate enough to support amphibians, reptiles and other forms of life.

“Nothing lives at these high latitudes today,” said Christian Sidor, a University of Washington paleontologist who’s also the Burke Museum’s curator of vertebrate paleontology. “But in the Triassic, we have good evidence that these animals were probably not only living in these areas, but breeding there. So they were there year-round. … How did these animals live in environments that we don’t have a good analog for today?”

Sidor and his colleagues collected the specimens during a three-month expedition to the Fremouw Formation in the Transantarctic Mountains. The November-to-February trek required setting up a base camp on Shackleton Glacier, shipping in supplies months in advance, and using helicopters like Uber cars to get around the rocky terrain amid bone-chilling temperatures.

“It’s the most logistically intensive research you could ever do,” Sidor said on Wednesday during an informal news conference at the museum.

Expeditions like this don’t happen very often: There have been only three previous paleontological excavations in the Shackleton Glacier area, in 1970-71, 1977-78 and 1995-96. The 2017-2018 trek brought together about 10 researchers from UW and other institutions, and involved six weeks of field work.

The prize finds include fossil traces of salamander-like amphibians known as temnospondyls, pre-dinosaur reptiles such as Prolacerta and Procolophon, and distant mammal relatives including Lystrosaurus and Thrinaxodon. Such species are of interest in part because they’re found in Antarctica as well as Africa, demonstrating that the two continents were once joined together in a supercontinent known as Pangea.

The fossils also help document which types of creatures survived the Permian-Triassic extinction event, known as “the Great Dying,” which some scientists have linked to massive volcanic eruptions in what’s now Siberia.

The amphibian fossils could be particularly telling. “In the past, we’ve known which families of amphibians have been there, but not which species,” Sidor said in a news release. “Because we found so many, and they’re so well-preserved, we’ll be able to tackle that question.”

One fossil clearly shows the outlines of a Procolophon skeleton, part of a rock that was cleaved in two by natural processes over millions of years. Sidor said another paleontologist on the team, Roger Smith of the Iziko South African Museum, picked up the rock at a place that previous expeditions had explored and spotted the fossil imprint on the underside. “We were the ones who were lucky enough to lift up the right rock,” Sidor said.

Other fossils document the traces of Antarctica’s Triassic environment, frozen in time: the imprint of raindrops on mud, the footprints of a not-yet-identified animal, and the scrape marks that were made by a creature as it burrowed through the soil.
Some of the specimens are on display in the Burke Museum’s exhibit of scientific work in progress, but most of them will remain boxed up until they’re moved into the New Burke building next door. The New Burke opens to the public in the fall of 2019.

The material brought back from Antarctica should keep paleontologists busy for years to come. UW graduate student Meg Whitney, for example, is studying the structure of fossil bones and teeth at a microscopic scale to understand how Triassic creatures were affected by extreme seasonality at polar latitudes.

The recent expedition marked Whitney’s first trip to Antarctica, but the experience left an impression at least as deep as the imprint of a Triassic fossil. “It’s unlike anything else,” Whitney told GeekWire. “It’s pretty hard to go back to commuting on a bus after commuting on a helicopter every day to work.”
10 localities win federal approval to push the limits with drones, but Amazon’s left out

8:00pm, 9th May, 2018
Pilot projects will send drones where no drones have gone before. (Aerix Photo)

The U.S. Department of Transportation has selected 10 state, local and tribal governments to oversee pilot projects that will go where no drones have gone before. But this time around, Amazon has been grounded.

The projects are meant to help set a course for ever-expanding drone operations over the next three years. “Data gathered from these pilot projects will form the basis of a new regulatory framework to safely integrate drones into our national airspace,” Transportation Secretary Elaine Chao said.

Under the experimental program — known as the Unmanned Aircraft Systems Integration Pilot Program, or UAS IPP — officials at the Federal Aviation Administration and other agencies spent months reviewing 149 proposals submitted in response to the program’s call for applications. The process required governmental agencies to assemble teams and seek the federal government’s go-ahead to try out modes of operation that are usually off-limits to small drones, such as flying beyond an operator’s line of sight, operating after dark or flying over large groups of uninvolved people. Such modes are seen as essential for wide-scale commercial applications such as the package delivery systems that Amazon, Walmart and other retailers are working on.

In an emailed statement, Amazon said it’s not working with any of the 10 teams that were selected in the first round for the UAS IPP program.

“While it’s unfortunate the applications we were involved with were not selected, we support the administration’s efforts to create a pilot program aimed at keeping America at the forefront of aviation and drone innovation,” said Brian Huseman, vice president of Amazon public policy. “At Amazon Prime Air, we’re focused on developing a safe operating model for drones in the airspace, and we will continue our work to make this a reality.”

Amazon has been conducting its testing program under other regulatory frameworks. It has drone development centers and test sites in a variety of countries, including the U.S. as well as Britain, Austria, France and Israel.

Here are the 10 selected teams, and the focus of each project:

Choctaw Nation of Oklahoma, Durant, Okla.: Test extended visual line-of-sight operations. Team partners include CNN and Green Valley Farms, which operates in Oklahoma.

City of San Diego: Test drone operations for border protection and package delivery of food, with a secondary focus on international commerce, surveillance and interoperability with autonomous vehicles and smart-city systems. Partners include Uber, Qualcomm, Matternet and the University of California at San Diego’s hospital system.

Virginia Tech – Center for Innovative Technology, Herndon, Va.: Facilitate package delivery in rural and urban settings, and test technologies including detect-and-avoid, identification and tracking, radar systems and mapping tools. Partners include NASA, the Virginia Tech Mid-Atlantic Aviation Partnership, Intel, AT&T, Airbus Aerial, State Farm, Dominion Energy, Sinclair Broadcast Group and Alphabet’s Project Wing, which got its start from Google.

Kansas Department of Transportation, Topeka, Kan.: Test operations beyond visual line of sight, and leverage existing infrastructure to facilitate precision agriculture operations. Partners include local agencies and universities.

Lee County Mosquito Control District, Fort Myers, Fla.: Test low-altitude aerial applications to monitor and control the district’s mosquito population.

Memphis-Shelby County Airport Authority, Memphis, Tenn.: Test techniques to inspect FedEx aircraft.
Conduct autonomous flights to support airport operations such as perimeter security surveillance and delivery of packages, including airplane parts. Partners include FedEx, Intel and units of General Electric.

North Carolina Department of Transportation, Raleigh, N.C.: Test localized package delivery, including drone flights over people, beyond visual line of sight and at night. Partners include several drone-delivery companies.

North Dakota Department of Transportation, Bismarck, N.D.: Test technologies to expand drone operations at night and beyond visual line of sight.

City of Reno, Nev.: Focus on the time-sensitive delivery of lifesaving medical equipment, such as medical defibrillators, in urban and rural environments. Partners include FedEx and Flirtey, which has previously conducted drone delivery experiments in the Reno area with 7-Eleven and Pizza Hut.

University of Alaska at Fairbanks: Test drone operations for inspections, remote surveying and public safety under harsh conditions.

The Transportation Department says more demonstration projects may be given the go-ahead in future rounds.
Biologists use artificial intelligence to flesh out 3-D views of a cell’s inner workings

2:15am, 9th May, 2018
A 3-D visualization of human cells is color-coded to highlight substructures. (Allen Institute for Cell Science)

What happens when you cross cell biology with artificial intelligence? At the Allen Institute for Cell Science, the answer isn’t super-brainy microbes, but new computer models that can turn simple black-and-white pictures of live human cells into color-coded, 3-D visualizations filled with detail.

The online database, known as the Allen Integrated Cell, is now being made publicly available — and its creators say it could open up new windows into the workings of our cells.

“From a single, simple microscopy image, you could get this very high-contrast, integrated 3-D image where it’s very easy to see where all the separate structures are,” Molly Maleckar, director of modeling at the Seattle-based Allen Institute, told GeekWire.

“You can actually look at the relationships between them, and eventually apply a time series, so you can see dynamically how those change as well,” she said. “That’s something that’s totally new.”

Molly Maleckar and Graham Johnson work on the Allen Integrated Cell project. (Allen Institute Photos)

Eventually, the database could make it easier to monitor how stem cells transform themselves into the different types of cells in our bodies, see how diseases affect cellular processes, and check the effects that drugs have on individual cells.

“These methods are allowing us to see multiple structures where they are in the cell, relative to one another, reliably, at the same time, while perturbing the cell as little as possible,” said Graham Johnson, director of the Allen Institute’s Animated Cell project. “Our goal is to get them as close to their native, happy state as possible, without hurting them with light, without messing up their function.”

The effort began with the institute’s collection of gene-edited human induced pluripotent stem cell lines, or hiPSC lines. These special cells have been engineered to add fluorescent labels, making it possible for researchers to pinpoint the substructures inside them. Examples of such substructures include the nucleus, the energy-producing mitochondria and the microtubules that serve as cellular scaffolds.

Researchers trained an artificial intelligence program to recognize the glow-in-the-dark substructures in thousands of cells. Then they applied that deep-learning model to simpler black-and-white images of cells that didn’t have fluorescent labels. The resulting “label-free model” makes it possible to generate highly detailed 3-D visualizations from the kinds of views you get from a standard high-school microscope. The method is described in depth in a research paper.

Another model developed at the institute can accurately predict the most probable shape and location of structures in any pluripotent stem cell, based solely on the shape of the cell membrane and the nucleus. The models could help researchers develop detailed information about cellular interactions over time without having to use chemical dyes, laser scans or other methods that disrupt the cells being studied.

Maleckar said the technique could be used for drug discovery, but she’s just as excited about the potential applications for regenerative medicine. “A really interesting thing there is, how do we engineer heart muscle cells so we can grow them, and they become a functional cell and eventually functional tissue?” she said. “One way we can improve that process is by learning how that process occurs.”

For now, the Allen Institute is working to fine-tune the computer modeling tools rather than moving on to the clinical applications.
“We’re really excited about the downstream applications, but that’s not our major focus right now,” Maleckar said. “We’re really trying to probe the limits of the technology.”

Today, the tools can turn high-school-level microscopy into visualizations for professional researchers — but someday, even high-school students could benefit. Johnson recalled how some of the cells he saw through the microscope during his high-school years just looked like blobs with a couple of spots on them.

“To be able to take that same high-school scope, run the software on it one day, and be able to see six or eight different things inside that cell, and understand how those different components are connected, and why the pieces are moving the way they are — that would be so exciting,” he said.

Johnson said he was so intrigued by the idea that he wrote himself a reminder to try out the system on microscope images from an actual high school. “That’d be really cool,” he said.
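To make the “label-free” approach described above concrete, here is a minimal training sketch in PyTorch: a network learns to regress a fluorescence channel from an unlabeled transmitted-light stack, using paired images as supervision. The tiny architecture, tensor shapes and channel choices are illustrative assumptions, not the Allen Institute’s published model.

```python
# Toy version of label-free prediction: brightfield stack in, fluorescence out.
import torch
import torch.nn as nn

class LabelFreeNet(nn.Module):
    """Maps a 3-D transmitted-light stack to one predicted fluorescence channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),  # one target structure
        )

    def forward(self, x):
        return self.net(x)

model = LabelFreeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # pixel-wise regression against real fluorescence

# One training step on dummy data shaped (batch, channel, depth, height, width).
brightfield = torch.randn(2, 1, 16, 64, 64)   # unlabeled input stack
fluorescence = torch.randn(2, 1, 16, 64, 64)  # paired labeled target

optimizer.zero_grad()
loss = loss_fn(model(brightfield), fluorescence)
loss.backward()
optimizer.step()
```

Once trained on enough paired examples, a model along these lines can be run on unlabeled images alone, which is what lets researchers skip the fluorescent tagging that perturbs living cells.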
‘Eggs’ for alien Earths? At 94, physicist Freeman Dyson’s brain is still going strong

8:00pm, 8th May, 2018
Physicist Freeman Dyson’s latest book is “Maker of Patterns: An Autobiography Through Letters.” (Dan Komoda / Institute for Advanced Study, Princeton, NJ USA)

Alien megaspheres … rockets powered by nuclear bombs … freeze-dried life in outer space: These are just some of the ideas that have flowered in the brain of physicist Freeman Dyson, and he’s not done yet.

Dyson, who turned 94 last December, has spent most of his career at the Institute for Advanced Study in Princeton, N.J., and he still hangs his hat there as a professor emeritus. But he also has a connection to the Pacific Northwest: His son, tech historian George Dyson, lives in Bellingham, Wash.

The elder Dyson renews his Northwest connections on Wednesday at an event that’s framed as a conversation with Seattle science-fiction author Neal Stephenson and Robbert Dijkgraaf, director of the Institute for Advanced Study. Topic A is sure to be Freeman Dyson’s newly published autobiography, which is based on letters he sent to his family between 1941 and 1978.

During those decades, Dyson rubbed elbows with many of the great minds in physics — including Richard Feynman, Niels Bohr, Robert Oppenheimer and Stephen Hawking. He even crossed paths, literally, with Albert Einstein, who spent the latter years of his life at the Institute for Advanced Study. “We saw him each morning walk from his home to the institute, and each afternoon walk back, but we never spoke to him,” Dyson writes.

Dyson himself made his mark in fields ranging from quantum electrodynamics to nuclear engineering. “Maker of Patterns” sheds light on that scientific work as well as his involvement in the civil rights movement, the anti-war movement and efforts to reduce nuclear arms.

Ironically, the Partial Nuclear Test Ban Treaty killed off one of Dyson’s best-known ideas, Project Orion, which called for propelling spaceships with nuclear bomb blasts on interplanetary voyages. “I am really sorry about this,” Dyson wrote in 1963. “But I had to admit in my own mind that no single project of that sort could be allowed to stand in the way of a treaty.”

Another idea attached to Dyson’s name is a concept that came to be known as the Dyson sphere: In 1960, he suggested that an extraterrestrial civilization could be detected by its infrared signature around its parent star. That suggestion came back into the spotlight a couple of years ago when astronomers speculated that an alien megastructure might exist around a weirdly behaving star. The astronomers eventually backed away from that explanation, but Dyson said he was amused by the controversy while it lasted.

“Of course that’s just all rubbish,” he told GeekWire in a telephone interview. “This was basically a misunderstanding of the word ‘biosphere.’ I used the word ‘biosphere’ to mean the habitat in which creatures would be living, and you’d see the warmth outside this habitat. The science-fiction people misunderstood ‘biosphere’ as meaning a big round ball. And of course it doesn’t have to be a ball.”

Dyson is also on record as suggesting that if scientists wanted to look for life on Europa, a moon of Jupiter thought to harbor an ice-covered ocean, the easiest way would be to look for organisms that are splashed into space as the result of cosmic impacts. The idea of freeze-dried life in space may sound fanciful, but it helped inspire strategies for detecting organic materials within the plumes of water spraying up from Europa or from Enceladus, a similarly ice-covered moon of Saturn.

Dyson warned against overemphasizing the astrobiology angle for space missions. “I think it’s a big mistake to announce beforehand that a mission is going to search for life,” he said. “That’s almost certainly going to fail.
But if you send out a mission to explore what’s there, whether it’s alive or not, it begins to make sense.”

That being said, Dyson’s “latest brainwave” has everything to do with life in space — as in sending out earthly life to take root in outer space. The concept would make use of what he calls the “Noah’s Ark Egg.” Here’s how he described the idea during our interview:

“The Noah’s Ark Egg is a way of making space colonies highly cost-effective. They’re very cheap, and also very powerful. They’re using miniaturization to spread life in the universe, not just for exploring.

“The Noah’s Ark Egg is an object looking like an ostrich egg, a few kilograms in weight. But instead of having a single bird inside, it has embryos — a whole planet’s worth of species of microbes and animals and plants, each represented by one embryo.

“It’s programmed then to grow into a complete planet’s worth of life. So it will cost only a few million dollars for the egg and the launch, but you could have about 1,000 human beings and all the life support, and all the different kinds of plants and animals for surviving. The cost per person is only a few thousand dollars, and it could enlarge the role of life in the universe at an amazingly fast speed. So you could imagine doing this in 100 years or so.

“We’d need to know more about embryology before we could do it. We’d want to know how to design embryos, and design robot nannies to take care of them until they’re grown up. But all that could be done.”

The idea sounds as if it’d require interstellar travel on the scale seen in science-fiction sagas, but Dyson insists that the scheme could work in our own solar system. “There’s lots of real estate in the solar system — of course, most of it is small objects, but there’s plenty of sunlight,” he said.

Dyson is skeptical about the prospects of sending humans beyond the solar system. However, he’s supportive of projects such as the Breakthrough Starshot initiative, which aims to send miniaturized probes flying past the Alpha Centauri star system. “Getting to Alpha Centauri is really hard, and it’s something we really don’t know how to do,” he said. “But on the way, we’ll get to a huge number of other places which are also interesting.”

Dyson has a lot more to say about different approaches to exploring outer space, which we share in the audio clip below. The way he sees it, the important thing is to keep exploring. “Space is not empty,” he said. “It’s full of all kinds of interesting stuff.”

The event takes place at Meydenbauer Center Theater in Bellevue, Wash., at 6:30 p.m. PT on Wednesday, May 9. The event is being presented by Town Hall Seattle and Meydenbauer Center, and moderated by Robbert Dijkgraaf, director of the Institute for Advanced Study. Doors open at 5:30 p.m., and tickets are $5.
U.S. pullout from Iran nuclear deal boosts most aerospace stocks, but not Boeing’s

3:30pm, 8th May, 2018
Iran Air already operates Boeing planes, including this 747-200 jet. (Press TV Photo)

A broad range of aerospace and defense stocks moved higher today as markets put a martial spin on President Donald Trump’s decision to make a “hard exit” from an international nuclear non-proliferation deal with Iran. That’s not surprising — nor is it surprising that Boeing’s share price fell instead.

Trump’s intention to reimpose trade sanctions almost certainly dooms Boeing’s multibillion-dollar agreement to sell 80 jetliners to Iran Air. That sale agreement covered 50 single-aisle 737 MAX 8 jets and 30 wide-body 777s.

The loss is tempered by the fact that Boeing wasn’t counting on the deal going through. During a teleconference with analysts last month, Boeing CEO Dennis Muilenburg said 777 jet production rates were “not dependent” on the Iran deal. The company also has a healthy backlog of 737 jet orders to fall back on.

Today Boeing issued a statement saying that the company “will consult with the U.S. government on next steps” regarding the Iran deal. “As we have throughout this process, we’ll continue to follow the U.S. government’s lead,” Boeing said.

Airbus struck a similar deal to sell 98 planes to Iran Air, for a list-price total amounting to $20.9 billion. Three planes have been delivered so far, but the fact that Airbus’ planes include U.S.-made parts could throw a roadblock in the way of further deliveries. “We’re carefully analyzing the announcement and will be evaluating next steps consistent with our internal policies and in full compliance with sanctions and export control regulations,” Airbus said in a statement. “This will take some time.”

At today’s market close, Boeing’s share price was down 0.63 percent at $338.37. Other leading U.S. aerospace firms posted gains — partly due to expectations for increased defense spending, and partly as a reaction to rising geopolitical tensions. It’s not just about Iran: Late last week, for example, the Air Force moved to begin work on a new constellation of missile-warning satellites.
Uber CEO Dara Khosrowshahi touts flying cars and rideshare company’s turnaround

1:00pm, 8th May, 2018
This artist’s conception shows the reference model for Uber’s future air taxis. (Uber via YouTube) Uber executives are providing an update on their plans to put flying cars in the air by 2020, with commercial rides beginning in 2023, but the most pointed comments from CEO Dara Khosrowshahi address the rideshare company’s present challenges. Khosrowshahi’s interview with CBS News came in conjunction with today’s kickoff of the second annual Uber Elevate summit in Los Angeles, which focuses on Uber’s plans to operate fleets of electric-powered, vertical-takeoff-and-landing air taxis. “We want to create the network around those vehicles so that regular people can take these taxis in the air for longer distances when they want to avoid traffic at affordable prices,” Khosrowshahi told CBS. The company is focusing on a reference model for the eVTOL aircraft: Multiple electric-driven rotors provide vertical helicopter-style lift, and after rising in the air, two of the rotors flip to a horizontal position to push the winged hybrid craft forward. The cabin will accommodate four people, which Khosrowshahi said is “one of the key tenets for this technology.” To make Uber’s cost formula work, the company will use cloud-based trip management tools to maximize the passenger load for each flight. Over the next few years, Uber will calibrate the price for its air taxi service to match what could eventually amount to several million rides a day. Eric Allison, the company’s head of aviation programs, showed a chart indicating that UberAir could conceivably charge $90 for a 29-minute ride between locations that would cost $60 and take 69 minutes using the UberX car service. Uber is working with a range of aerospace partners to develop the eVTOL aircraft, including Boeing subsidiary Aurora Flight Sciences, Embraer and Bell. The company is also working with NASA on an air traffic management system that will accommodate air taxi trips. The current plan calls for demonstration flights to begin in Dallas-Fort Worth and Los Angeles as well as Dubai in 2020, with regular commercial service starting up by 2023. By then, Uber hopes to ramp up to “automotive-scale manufacturing” of air taxis, said Jeff Holden, Uber’s chief product officer. Khosrowshahi said the aircraft will be human-piloted at first, but will eventually be operated autonomously. The issue of self-driving vehicles is a sensitive one for Uber, due to an accident involving an Uber autonomous car that killed a pedestrian in Arizona in March. In the wake of that fatality, Uber suspended its self-driving tests nationwide. In the CBS interview, Khosrowshahi said Uber was conducting a “top-to-bottom audit of our procedures, training, software, hardware, what our practices are.” Khosrowshahi said there was no question that Uber would resume its autonomous vehicle program, “but we want to be safe when we get back on the road.” He also addressed Uber’s sexual-harassment issues, which led to last year’s ouster of Uber’s previous CEO, Travis Kalanick. Khosrowshahi was brought over from his post as the CEO of Bellevue, Wash.-based Expedia to take Kalanick’s place. When CBS News’ Bianna Golodryga asked Khosrowshahi when he expected to turn around Uber’s corporate culture, he replied: “If it’s not changed right now, then I failed.” “I will tell you that the company took it upon itself to change,” Khosrowshahi said. 
“The change didn’t start with me.” He said “what happened in the past was deeply unpleasant and wrong, but the company from a bottoms-up standpoint started changing, and I think it continues apace.” Golodryga then asked whether Uber is providing a workplace where women employees can feel safe. “It’s ‘game over’ if we don’t,” Khosrowshahi replied.
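Allison’s chart invites a quick back-of-the-envelope comparison. Here’s a minimal sketch of that arithmetic, using only the fares and trip times cited above (the derived per-minute figures are ours, not Uber’s):

```python
# Comparison based on the figures Eric Allison showed:
# UberAir: $90 for a 29-minute trip; UberX: $60 for 69 minutes.
uberair_price, uberair_minutes = 90, 29
uberx_price, uberx_minutes = 60, 69

premium = uberair_price - uberx_price            # $30 extra to fly
minutes_saved = uberx_minutes - uberair_minutes  # 40 minutes saved

# Implied price of each minute saved by taking the air taxi:
print(f"Premium per minute saved: ${premium / minutes_saved:.2f}")  # ~$0.75

# Per-minute fares for each mode:
print(f"UberAir: ${uberair_price / uberair_minutes:.2f}/min")  # ~$3.10
print(f"UberX:   ${uberx_price / uberx_minutes:.2f}/min")      # ~$0.87
```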
Tesla stock goes through twists and turns as analysts focus on Model 3 car outlook

6:15am, 8th May, 2018
Tesla CEO Elon Musk presides over the handover of the first Model 3 cars in August 2017. (Tesla via YouTube) Tesla’s share price took a weird turn today after the company reported its first-quarter financial results and billionaire CEO Elon Musk dissed analysts’ concerns about the Tesla Model 3 mass-market electric car. The raw numbers reflected Tesla’s efforts to ramp up production over the quarter: Net loss widened to a record $784.6 million for the quarter, but revenue rose to $3.41 billion, outdoing analysts’ estimates. The key questions have to do with the Model 3, which Musk is counting on to bring the company to profitability by the latter half of this year. “It’s high time we became profitable,” he said during today’s teleconference for analysts. “The reality is, you’re not a real company until you are.” Model 3 production surpassed 2,000 cars per week only in the past month or so, which is far behind Musk’s initial timeline. Tesla said it was targeting a 5,000-a-week rate by the end of June, and anticipated turning the gross margin on the Model 3 from slightly negative to break-even by then. Net reservations for the Model 3, including configured orders that had not yet been delivered, exceeded 450,000 at the end of the quarter, Tesla said. Musk said Tesla’s operations would undergo some restructuring this month, in part to reduce the number of third-party contractors. “We’re going to scrub the barnacles on that front,” he said. “It’s pretty crazy.” But when analysts wanted to delve into the details about gross margin, Musk brushed off the questions as “not cool” and “boring.” “These questions are so dry, they’re killing me,” he said. Instead, he turned to a YouTube channel operator and retail investor named Gali Russell, who asked a series of questions about Tesla’s plan to phase in fully autonomous driving. In the course of the conversation, Musk repeated his complaints about reports focusing on fatalities involving Tesla cars. “It’s really incredibly irresponsible of any journalist with integrity to write an article that would lead people to believe that Tesla autonomy is less safe, because people might actually turn it off, and then die,” he said. Around that same time, Tesla’s share price slumped in after-hours trading by as much as 5.7 percent. Musk eventually returned to analysts’ questions, touching on such subjects as ride-hailing services (which he said could begin “as soon as the end of next year”), plans for factory expansion (including the fact that every Gigafactory added from now on would build cars as well as batteries) and the Tesla Semi (a project that Musk said was currently getting a lower priority due to the focus on the Model 3). He also provided some stock advice for traders who speculate on a company whose share price Musk himself has acknowledged can swing wildly: “Do not buy if volatility is scary,” Musk said.
After years of study, radar scans rule out hidden rooms in King Tut’s burial chamber

6:15am, 8th May, 2018
Experts scan the walls of King Tutankhamun’s tomb with ground-penetrating radar. (Egypt Ministry of Antiquities Photo via Facebook) Ground-penetrating radar scans have failed to confirm any hints that King Tutankhamun’s tomb in Egypt’s Valley of the Kings contains a hidden chamber. The announcement from Egypt’s Ministry of Antiquities brought a disappointing end to a scientific investigation that began more than two years ago, after British archaeologist Nicholas Reeves put forth the claim. Reeves said he saw hints of covered-over doorways in high-resolution images of the 3,300-year-old tomb’s main chamber. He suggested that the chamber’s walls concealed a blocked-up entryway to the tomb of Queen Nefertiti, who is thought to have been Tutankhamun’s stepmother. Preliminary rounds of radar scanning seemed to confirm that there were anomalies behind the wall. But in the end, the ministry said more detailed scans provided “conclusive evidence on the non-existence of hidden chambers adjacent to or inside Tutankhamun’s tomb.” An intensive survey was conducted in February, using ground-penetrating radar. The readings were analyzed by experts from the University of Turin and two Italian companies, Geostudi Astier and 3DGeoimaging. No marked discontinuities or telltale reflections were found, according to the science team’s leader, Francesco Porcelli. “It is maybe a little bit disappointing that there is nothing behind the walls of Tutankhamun’s tomb, but I think on the other hand that this is good science,” Porcelli said. King Tut’s treasures are destined for a new museum that’s being built on the outskirts of Cairo, near the Giza Pyramids. Parts of the $795 million Grand Egyptian Museum — including exhibits focusing on Tutankhamun and his time — are due to open this year, with a grand opening planned in 2022.
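The article doesn’t spell out how radar readings translate into “no hidden chambers,” but the standard ground-penetrating radar relationship is simple: an echo returning after a two-way travel time t through rock with wave speed v implies a reflecting boundary at a depth of roughly v·t/2, and a void behind a wall would show up as just such a reflection. Here’s a minimal sketch of that conversion, using a textbook wave speed for dry limestone as an assumed value (not a figure from Porcelli’s team):

```python
# How ground-penetrating radar would flag a hidden void, in outline:
# a reflection at two-way travel time t (ns) through rock with wave
# speed v (m/ns) implies an interface at depth ~ v * t / 2.
# The survey found no such telltale reflections behind the tomb walls.
V_LIMESTONE_M_PER_NS = 0.10  # assumed typical value for dry limestone

def reflector_depth_m(two_way_time_ns: float,
                      velocity_m_per_ns: float = V_LIMESTONE_M_PER_NS) -> float:
    """Convert a two-way radar travel time (ns) to reflector depth (m)."""
    return velocity_m_per_ns * two_way_time_ns / 2

# An echo arriving 20 ns after the pulse would suggest a boundary ~1 m deep:
print(f"{reflector_depth_m(20):.2f} m")  # -> 1.00 m
```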
FAIR competition? Facebook creates official AI labs in Seattle and Pittsburgh, vying for top talent

6:15am, 8th May, 2018
Facebook is upgrading the status of its Seattle AI research operation. (GeekWire Photo / Kevin Lisota) After months of work to beef up its artificial intelligence research teams in Seattle and Pittsburgh, Facebook is acknowledging that those two cities are getting official status as AI labs in their own right. “Facebook AI Research is opening two new labs in Seattle and Pittsburgh, which will join the existing sites in Menlo Park, New York, Paris, Montreal and Tel Aviv,” Yann LeCun, Facebook’s chief AI scientist, said in a Facebook post. LeCun’s statement confirms what sources told GeekWire in March about Facebook’s AI expansion in Seattle, as well as rumors we heard back then about the social-media giant’s plans for Pittsburgh. As we reported in March, University of Washington computer science professor Luke Zettlemoyer is a key hire for Seattle’s newly designated FAIR lab. At the time, Facebook spokesman Ari Entin said Zettlemoyer would report to Menlo Park, but LeCun’s statement suggests that the Seattle operation will have more autonomy going forward. Two professors from Carnegie Mellon University, Abhinav Gupta and Jessica Hodgins, will be part of the Pittsburgh lab. Gupta specializes in computer vision. Hodgins focuses on computer graphics, animation and robotics, with an emphasis on analyzing human motion. All three professors will retain part-time positions at their universities, LeCun said. Back in March, Entin said Facebook plans to expand its AI research staff even further in Seattle. Referring to Zettlemoyer, he said, “Luke isn’t a single hire and we’re done.” Some worry that Facebook’s recruitment campaign will strain what’s already a highly competitive market for AI experts in Seattle — particularly when it comes to training the next generation of researchers. Zettlemoyer, for example, was recruited from the Allen Institute for Artificial Intelligence, or AI2, where he led research on natural language processing. After his departure, AI2 has continued to hire high-profile AI specialists, including UW professors Noah Smith and Yejin Choi — who, like Zettlemoyer, are experts in natural language processing, or NLP. AI2’s creator, Microsoft co-founder Paul Allen, recently committed an additional $125 million to the institute, boosting projects such as its effort to develop AI agents with more common sense. As an added inducement, AI2 is making allowances for researchers to keep their university posts and collaborate with commercial AI ventures. Despite the added resources and wide leeway, it’s getting tougher to hold onto AI talent — due in part to Facebook’s recruitment drive. “What are the ethics of a major corporation suddenly going after the entire NLP faculty in a computer science department? I believe their original offers had the faculty members spending 80 percent of their time at Facebook, which would not allow them time to carry out their educational responsibilities at UW,” AI2’s chief executive officer, Oren Etzioni, told GeekWire in an email. “Has Facebook’s motto evolved into: ‘Move fast, and break academia’?” he asked. A New York Times report quoted UW computer science professor Dan Weld as also voicing concern about Facebook’s hiring drive. “It is worrisome that they are eating the seed corn,” Weld said. “If we lose all our faculty, it will be hard to keep preparing the next generation of researchers.” In his Facebook post, LeCun took issue with the criticism. He noted that many FAIR researchers spend some of their time at universities, and that FAIR labs host resident graduate students as well. “This new modus operandi is redefining the relationship between academic research and industry research,” LeCun said. 
He said The New York Times’ report “erroneously qualifies this evolution as a ‘brain drain’ from academia.” “Facebook is careful not to deplete universities from their best faculty, by making it easy to maintain sizeable research and teaching activities in their academic labs,” LeCun wrote. “In fact, making these part-time splits possible is precisely the reason why we have been establishing labs in New York, Paris, Montreal, Tel Aviv, and now Seattle and Pittsburgh. It is the proximity to leading universities with talented faculty and the existence of a local talent pool that attract us.”