How Microsoft is opening AI’s algorithmic ‘black box’ for greater transparency

8:57pm, 23rd April, 2019
Erez Barak, senior director of product for Microsoft’s AI Division, speaks at the Global Artificial Intelligence Conference in Seattle. (GeekWire Photo / Alan Boyle)

Artificial intelligence can work wonders, but it often works in mysterious ways. Machine learning is based on the principle that a software program can analyze a huge set of data and fine-tune its algorithms to detect patterns and come up with solutions that humans may miss. That’s how Google DeepMind’s AlphaGo agent learned to play Go (and other games) well enough to beat expert players.

But if programmers and users can’t figure out how AI algorithms came up with their results, that black-box behavior can be a cause for concern. It may become impossible to judge whether AI agents have picked up hidden biases. That’s why terms such as transparency, explainability and interpretability are playing an increasing role in the AI ethics debate.

The European Commission includes transparency and traceability among its ethics guidelines for trustworthy AI, in line with the principles laid out in data-protection laws. The French government has pledged to open up the code that powers the algorithms it uses. In the United States, the Federal Trade Commission’s Office of Technology Research and Investigation has been charged with providing guidance on algorithmic transparency.

Transparency figures in Microsoft CEO Satya Nadella’s principles for AI as well, and Erez Barak, senior director of product for Microsoft’s AI Division, addressed the issue head-on today at the Global Artificial Intelligence Conference in Seattle.

“We believe that transparency is a key,” he said. “How many features did we consider? Did we consider just these five? Or did we consider 5,000 and choose these five?”

Barak noted that a model-explanation capability is built right into Microsoft’s Azure Machine Learning service. “What it does is that it takes the model as an input and starts breaking it down,” he said. The model explanation can show which factors went into the computer model, and how they were weighted by the AI system’s algorithms.
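To make the idea concrete, here is a minimal sketch of one common way to "break a model down": permutation feature importance, which measures how much accuracy drops when each feature's values are shuffled. This is a generic illustration of explainability, not Microsoft's actual Azure Machine Learning implementation; the data and the stand-in model are invented for the example.

```python
import numpy as np

def permutation_importance(predict, X, y, rng):
    """Importance of each feature = drop in accuracy when that column is shuffled."""
    base = (predict(X) == y).mean()
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])               # destroy this feature's signal
        scores.append(base - (predict(Xp) == y).mean())
    return scores

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                   # five candidate features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # only features 0 and 1 matter

# A stand-in "trained model" that happens to mirror the data-generating rule.
predict = lambda X: (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

imps = permutation_importance(predict, X, y, rng)
for j, s in enumerate(imps):
    print(f"feature {j}: importance {s:.3f}")
```

The output answers exactly the question Barak poses: of the five features considered, which ones actually drive the model's decisions, and by how much.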
As a result, customers can better understand why, for instance, they were turned down for a mortgage, passed over for a job opening, or denied parole. AI developers can also use the model explanations to make their algorithms more “human.” For instance, it may be preferable to go with an algorithm that doesn’t fit a training set of data quite as well, but is more likely to promote fairness and avoid gender or racial bias.

As AI applications become more pervasive, calls for transparency, perhaps enforced through government regulation, could well become stronger. And that runs the risk of exposing trade secrets hidden within a company’s intricately formulated algorithms, said Castillo, a partner at Seattle’s Perkins Coie law firm who specializes in trade regulations.

“Algorithms tend to be things that are closely guarded. … That’s not something that you necessarily want to be transparent with the public or with your competitors about, so there is that fundamental tension,” Castillo said. “That’s more at issue in Europe than in the U.S., which has much, much, much stronger and aggressive enforcement.”

Microsoft has already taken a strong stance on responsible AI. After his talk, Barak told GeekWire that Azure Machine Learning’s explainability feature could be used as an open-source tool to look inside the black box and verify that an AI algorithm doesn’t perpetuate all-too-human injustices.

Over time, will the software industry or other stakeholders develop a set of standards or a “seal of approval” for AI algorithms? “We’ve seen that in things like security. Those are the kinds of thresholds that have been set. I’m pretty sure we’re heading in that direction as well,” Barak said. “The idea is to give everyone the visibility and capability to do that, and those standards will develop, absolutely.”
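The accuracy-versus-fairness trade-off described above can be sketched as a simple selection rule: among candidate models whose accuracy is within some tolerance of the best, prefer the one with the smallest disparity between groups. The function, the candidate names, and all the numbers here are hypothetical, invented purely to illustrate the idea.

```python
def choose_model(candidates, acc_tolerance=0.02):
    """Each candidate is (name, accuracy, parity_gap), where parity_gap is the
    absolute difference in positive-prediction rates between two groups.
    Among models within acc_tolerance of the best accuracy, pick the fairest."""
    best_acc = max(acc for _, acc, _ in candidates)
    eligible = [c for c in candidates if best_acc - c[1] <= acc_tolerance]
    return min(eligible, key=lambda c: c[2])

candidates = [
    ("model_a", 0.91, 0.18),   # fits the training data best, large group gap
    ("model_b", 0.90, 0.03),   # slightly worse fit, far smaller gap
]
print(choose_model(candidates)[0])  # -> model_b
```

Here model_b wins despite the lower accuracy, which is precisely the choice the article describes: give up a little fit to the training data in exchange for fairness.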
Xnor shrinks AI to fit on a solar-powered chip, opening up big frontiers on the edge

9:50am, 13th February, 2019
Xnor.ai machine learning engineer Hessam Bagherinezhad, hardware engineer Saman Naderiparizi and co-founder Ali Farhadi show off a chip that uses solar-powered AI. (GeekWire Photo / Alan Boyle)

It was a big deal two and a half years ago when researchers fit person-detecting AI onto a device the size of a candy bar, and now it’s an even bigger deal for Xnor.ai to re-engineer its artificial intelligence software to fit onto a solar-powered computer chip.

“To us, this is as big as when somebody invented a light bulb,” Xnor.ai’s co-founder, Ali Farhadi, said at the company’s Seattle headquarters.

Like the candy-bar-sized, Raspberry Pi-powered contraption, the camera-equipped chip flashes a signal when it sees a person standing in front of it. But the chip itself isn’t the point. The point is that Xnor.ai has figured out how to blend stand-alone, solar-powered hardware and edge-based AI to turn its vision of “artificial intelligence at your fingertips” into a reality. “This is a key technology milestone, not a product,” Farhadi explained.

Shrinking the hardware and power requirements for AI software should greatly expand the range of potential applications, Farhadi said. “Our homes can be way smarter than they are today. Why? Because now we can have many of these devices deployed in our houses,” he said. “It doesn’t need to be a camera. We picked a camera because we wanted to show that the most expensive algorithms can run on this device. It might be audio. … It might be a way smarter smoke detector.”

Outside the home, Farhadi can imagine putting AI chips on stoplights, to detect how busy an intersection is at a given time and direct the traffic flow accordingly. AI chips could be tethered to balloons or scattered in forests, to monitor wildlife or serve as an early warning system for wildfires.

Xnor’s solar-powered AI chip is light enough to be lofted into the air on a balloon for aerial monitoring. In this image, the chip is highlighted by the lamp in the background. (Xnor.ai Photo)

Sophie Lebrecht, Xnor.ai’s senior vice president of strategy and operations, said the chips might even be cheap enough, and smart enough, to drop into a wildfire or disaster zone and sense where there are people who need to be rescued. “That way, you’re only deploying resources in unsafe areas if you really have to,” she said.

The key to the technology is reducing the required power so that it can be supplied by a solar cell no bigger than a cocktail cracker. That required innovations in software as well as hardware. “We had to basically redo a lot of things,” machine learning engineer Hessam Bagherinezhad said.

Xnor.ai’s head of hardware engineering, Saman Naderiparizi, worked with his colleagues to figure out how to fit the software onto an FPGA chip that costs a mere $2, and he says it’s possible to drive the cost below a dollar by going to ASIC chips. It takes only on the order of milliwatts of power to run the chip and its mini-camera, he told GeekWire. “With technology this low power, a device running on only a coin-cell battery could be always on, detecting things every second, running for 32 years,” Naderiparizi said in a news release.

That means there’d be no need to connect AI chips to a power source, replace their batteries or recharge them. And the chips would be capable of running AI algorithms on standalone devices, rather than having to communicate constantly with giant data servers via the cloud. If the devices need to pass along bits of data, they could. That edge-computing approach is likely to reduce the strain of what could turn out to be billions of AI-enabled devices.

“The carbon footprint of data centers running all of those algorithms is a key issue,” Farhadi said. “And with the way AI is progressing, it will be a disastrous issue pretty soon, if we don’t think about how we’re going to power our AI algorithms.
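A quick back-of-the-envelope check shows what the 32-year coin-cell claim implies about power draw. Assuming a common CR2032 cell (roughly 225 mAh at 3.0 V, a figure assumed here, not taken from the article), the average power budget works out to a few microwatts, so a chip with milliwatt-scale peak draw would have to spend almost all of its time asleep, waking only briefly each second to run a detection.

```python
# Energy in a typical CR2032 coin cell (assumed spec: 225 mAh at 3.0 V).
capacity_wh = 0.225 * 3.0                 # ~0.675 Wh
seconds = 32 * 365.25 * 24 * 3600         # 32 years, in seconds

# Average power that budget allows, in microwatts.
avg_power_uw = capacity_wh * 3600 / seconds * 1e6
print(f"average power budget: {avg_power_uw:.1f} microwatts")

# At a 1 mW active draw, that average implies a duty cycle of well under 1%.
duty_cycle = avg_power_uw / 1000 / 1e3
print(f"max duty cycle at 1 mW active: {duty_cycle:.2%}")
```

The arithmetic gives a budget of roughly 2.4 µW on average, which is why the combination of milliwatt-level active power and aggressive duty-cycling is the enabling trick.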
Data centers, cloud-based solutions for edge-use cases are not actually efficient ways, but other than efficiency, it’s harming our planet in a dangerous way.”

Farhadi argues that cloud-based AI can’t scale as easily as edge-based AI. “Imagine when I put a camera or sensor at every intersection of this city. There is no cloud that is going to handle all that bandwidth,” he said. “Even if there were, back-of-the-envelope calculations would show that my business will go bankrupt before it sees the light of day.”

The edge approach also addresses what many might see as the biggest bugaboo about having billions of AI bugs out in the world: data privacy. “I don’t want to put a camera in my daughter’s bedroom if I know that the picture’s going to end up in the cloud,” Farhadi said.

Xnor.ai was spun out of the Allen Institute for Artificial Intelligence, or AI2, only a couple of years ago, and the venture has millions of dollars of financial backing from Madrona Venture Group, AI2 and other investors.

Farhadi has faith that the technology Xnor.ai is currently calling “solar-powered AI” will unlock still more commercial frontiers, but he can’t predict whether the first applications will pop up in the home, on the street or off the beaten track. “It will open up so many different things, the exact same thing when the light bulb was invented: No one knew what to do with it,” he said. “The technology’s out there, and we’ll figure out the exact products.”
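Farhadi’s back-of-the-envelope scaling argument is easy to reproduce. The sketch below compares a city’s worth of cameras streaming compressed video to the cloud against the same cameras sending only tiny detection events from the edge; every number (camera count, bitrate, event size and rate) is an illustrative assumption, not a figure from the article.

```python
cameras = 1000                    # assumed intersections in a mid-sized city
stream_mbps = 2.0                 # assumed per-camera compressed video stream
edge_event_bytes = 100            # assumed size of one "person detected" message
events_per_hour = 60              # assumed event rate per camera

# Cloud approach: every camera streams continuously.
cloud_gb_per_day = cameras * stream_mbps / 8 * 86400 / 1e3   # Mbit/s -> GB/day

# Edge approach: only detection events leave the device.
edge_gb_per_day = cameras * edge_event_bytes * events_per_hour * 24 / 1e9

print(f"cloud streaming: {cloud_gb_per_day:,.0f} GB/day")
print(f"edge events:     {edge_gb_per_day:.3f} GB/day")
```

Under these assumptions the cloud approach moves tens of thousands of gigabytes a day while the edge approach moves a fraction of one, a gap of roughly five orders of magnitude, which is the bandwidth (and bandwidth-bill) point Farhadi is making.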