How Microsoft is opening AI’s algorithmic ‘black box’ for greater transparency

8:57pm, 23rd April, 2019
Erez Barak, senior director of product for Microsoft’s AI Division, speaks at the Global Artificial Intelligence Conference in Seattle. (GeekWire Photo / Alan Boyle)

Artificial intelligence can work wonders, but it often works in mysterious ways. Machine learning is based on the principle that a software program can analyze a huge set of data and fine-tune its algorithms to detect patterns and come up with solutions that humans might miss. That’s how Google DeepMind’s AlphaGo AI agent learned to play the game of Go (and other games) well enough to beat expert players.

But if programmers and users can’t figure out how AI algorithms came up with their results, that black-box behavior can be a cause for concern. It may become impossible to judge whether AI agents have picked up all-too-human biases from their training data.

That’s why terms such as transparency, explainability and interpretability are playing an increasing role in the AI ethics debate. The European Commission includes transparency and traceability among its guidelines for trustworthy AI, in line with the principles laid out in data-protection laws. The French government has pledged to open up the code that powers the algorithms it uses. In the United States, the Federal Trade Commission has been charged with providing guidance on algorithmic transparency.

Transparency figures in Microsoft CEO Satya Nadella’s principles for AI as well — and Erez Barak, senior director of product for Microsoft’s AI Division, addressed the issue head-on today at the Global Artificial Intelligence Conference in Seattle.

“We believe that transparency is a key,” he said. “How many features did we consider? Did we consider just these five? Or did we consider 5,000 and choose these five?”

Barak noted that a model explanation capability is built right into Microsoft’s Azure Machine Learning service. “What it does is that it takes the model as an input and starts breaking it down,” he said.

The model explanation can show which factors went into the computer model, and how they were weighted by the AI system’s algorithms. As a result, customers can better understand why, for instance, they were turned down for a mortgage, passed over for a job opening, or denied parole.

AI developers can also use the model explanations to make their algorithms more “human.” For instance, it may be preferable to go with an algorithm that doesn’t fit a training set of data quite as well, but is more likely to promote fairness and avoid gender or racial bias.

As AI applications become more pervasive, calls for transparency — perhaps enforced through government regulation — could well become stronger. And that runs the risk of exposing trade secrets hidden within a company’s intricately formulated algorithms, said Castillo, a partner at Seattle’s Perkins Coie law firm who specializes in trade regulation.

“Algorithms tend to be things that are closely guarded. … That’s not something that you necessarily want to be transparent with the public or with your competitors about, so there is that fundamental tension,” Castillo said. “That’s more at issue in Europe than in the U.S., which has much, much, much stronger and aggressive enforcement.”

Microsoft has already taken a strong stance on responsible AI — to the point that the company has called for government regulation of facial recognition technology. After his talk, Barak told GeekWire that Azure Machine Learning’s explainability feature could be used as an open-source tool to look inside the black box and verify that an AI algorithm doesn’t perpetuate all-too-human injustices.

Over time, will the software industry or other stakeholders develop a set of standards or a “seal of approval” for AI algorithms?

“We’ve seen that in things like security. Those are the kinds of thresholds that have been set.
I’m pretty sure we’re heading in that direction as well,” Barak said. “The idea is to give everyone the visibility and capability to do that, and those standards will develop, absolutely.”
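
To make the idea concrete, here is a minimal sketch of the kind of feature-importance analysis Barak described. It uses scikit-learn’s permutation importance on synthetic data as a stand-in, not the Azure Machine Learning interpretability SDK itself, and the feature names are hypothetical.

```python
# A rough stand-in for the kind of "model explanation" Barak describes:
# rank the features a trained model actually relied on. This sketch uses
# scikit-learn's permutation importance, not the Azure Machine Learning
# interpretability SDK, and the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "application" data: 20 candidate features, only 5 of which
# carry real signal, echoing the "did we consider 5,000 and choose
# these five?" question.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's score
# drops: a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(enumerate(result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for idx, score in ranked[:5]:
    print(f"feature_{idx}: importance {score:.3f}")
```

The output is a ranked list of the handful of inputs the model actually leaned on, which is the kind of visibility a loan applicant or regulator would want when asking why a decision came out the way it did.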
Who’ll serve as AI’s watchdog? Experts trade suggestions at AI2 policy workshop

8:40pm, 7th March, 2019
Seattle University’s Tracy Kosa, the University of Maryland’s Ben Shneiderman and Rice University’s Moshe Vardi take questions during an AI policy workshop at the Allen Institute for Artificial Intelligence, moderated by AI2 CEO Oren Etzioni. (GeekWire Photo / Alan Boyle)

Do we need a National Algorithm Safety Board? How about licensing the software developers who work on critical artificial intelligence platforms? Who should take the lead when it comes to regulating AI? Or does AI need regulation at all?

The future of AI and automation, and the policies governing how far those technologies go, took center stage today during a policy workshop presented by Seattle’s Allen Institute for Artificial Intelligence, or AI2. And the experts who spoke agreed on at least one thing: Something needs to be done, policy-wise.

“Technology is driving the future — the question is, who is doing the steering?” said Moshe Vardi, a Rice University professor who focuses on computational engineering and the social impact of automation.

Artificial intelligence is already sparking paradigm shifts in the regulatory sphere: For example, when a Tesla car owner was killed in a 2016 highway collision, the National Transportation Safety Board took a close look at the company’s self-driving software. (And there have been more such incidents for the NTSB to investigate since then.)

The NTSB, which is an independent federal agency, may be a useful model for a future federal AI watchdog, said Ben Shneiderman, a computer science professor at the University of Maryland at College Park. Just as the NTSB determines where things go wrong in the nation’s transportation system, independent safety experts operating under a federal mandate could analyze algorithmic failures and recommend remedies.

One of the prerequisites for such a system would be the ability to follow an audit trail. “A flight data recorder for every robot, a flight data recorder for every algorithm,” Shneiderman said.

He acknowledged that a National Algorithm Safety Board may not work exactly like the NTSB. It may take the form of a “SWAT team” that’s savvy about algorithms and joins in investigations conducted by other agencies, in sectors ranging from health care to highway safety to financial markets and consumer protection.

Ben Shneiderman, a computer science professor at the University of Maryland at College Park, says the National Transportation Safety Board could provide a model for regulatory oversight of algorithms that have significant societal impact. (GeekWire Photo / Alan Boyle)

What about the flood of disinformation and fakery that AI could enable? That might conceivably fall under the purview of the Federal Communications Commission — if it weren’t for the fact that a provision in the 1996 Communications Decency Act, known as Section 230, absolves platforms like Facebook (and, say, your internet service provider) from responsibility for the content that’s transmitted. “Maybe we need a way to just change [Section] 230, or maybe we need a fresh interpretation,” Shneiderman said.

Ryan Calo, a law professor at the University of Washington who focuses on AI policy, noted that the Trump administration isn’t likely to go along with increased oversight of the tech industry. But he said state and local governments could play a key role in overseeing potentially controversial uses of AI. Seattle, for example, has passed an ordinance that requires agencies to take a hard look at surveillance technologies before they’re approved for use. Another leader in the field is New York City, which has set up a task force to monitor how algorithms are being used.
Determining the lines of responsibility, accountability and liability will be essential. Seattle University law professor Tracy Kosa went so far as to suggest that software developers should be subject to professional licensing, just like doctors and lawyers. “The goal isn’t to change what’s happening with technology, it’s about changing the people who are building it, the same way that the Hippocratic Oath changed the way medicine was practiced,” she said.

The issues laid out today sparked a lot of buzz among the software developers and researchers at the workshop, but Shneiderman bemoaned the fact that such issues haven’t yet gained a lot of traction in D.C. policy circles. That may soon change, however, due to AI’s rapid rise. “It’s time to grow up and say who does what by when,” Shneiderman said.

Odds and ends from the workshop:

Vardi noted that there’s been a lot of talk about ethical practices in AI, but he worried that focusing on ethics was “almost a ruse” on the part of the tech industry. “If we talk about ethics, we don’t have to talk about regulation,” he explained.

Calo worried about references to an “AI race,” including by the White House. “This is not only poisonous and factually ridiculous … it leads to bad policy choices,” Calo said. Such rhetoric fails to recognize the international character of the AI research community, he said.

Speaking of words, Shneiderman said the way that AI is described can make a big difference in public acceptance. For example, terms such as “Autopilot” and “self-driving cars” may raise unrealistic expectations, while terms such as “adaptive cruise control” and “active parking assist” make it clear that human drivers are still in charge.

Over the course of the day, the speakers provided a mini-reading list on AI policy issues: “The Age of Surveillance Capitalism” by Shoshana Zuboff; “Weapons of Math Destruction” by Cathy O’Neil; a white paper distributed by IEEE; and an oldie but goodie, “Normal Accidents,” by Charles Perrow.
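
As a rough illustration of Shneiderman’s “flight data recorder for every algorithm” idea, the sketch below logs each automated decision to an append-only file so it could be reviewed after the fact. The record fields, file name and the loan-screening example are hypothetical, not drawn from any existing standard or agency requirement.

```python
# A minimal sketch of an algorithmic "flight data recorder": an append-only
# JSON-lines log that records what an automated system decided, when, and
# from which inputs, so an outside reviewer can reconstruct the decision
# later. The record layout here is hypothetical.
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "decision_audit.jsonl"

def log_decision(model_version, inputs, output, log_path=AUDIT_LOG):
    """Append one decision record; hash the inputs so the log can be
    checked for consistency without relying solely on the raw values."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record one (hypothetical) loan-screening decision.
log_decision(
    model_version="credit-screen-1.4",
    inputs={"income": 52000, "debt_ratio": 0.31, "years_employed": 4},
    output={"approved": False, "score": 0.42},
)
```

Something this simple obviously falls short of an NTSB-style investigation regime, but it shows the kind of audit trail that would have to exist before any algorithm safety board could do its work.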
AI2’s Oren Etzioni to entrepreneurs: It’s not too late to ride the machine-learning wave

2:08pm, 27th February, 2019
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, answers questions during a chat moderated by Mike Grabham, director of the Seattle chapter of Startup Grind. (GeekWire Photo / Alan Boyle)

It may seem as if everyone’s already on the bandwagon for artificial intelligence and machine learning, with players ranging from tech giants to startups — but the head of Seattle’s Allen Institute for Artificial Intelligence, or AI2, says there’s still plenty of room to climb aboard.

“Let me assure you, if you have a machine learning-based startup in mind … you’re not late to the party,” AI2’s CEO, Oren Etzioni, told more than 70 people who gathered Tuesday evening at Create33 in downtown Seattle for a Startup Grind event.

Etzioni had a hand in getting the party started back in 2004, with the launch of a startup called Farecast that used artificial intelligence to predict whether airline fares would rise or fall. The company was acquired by Microsoft and has since faded into the ether. But Etzioni said the basic approach, which involves analyzing huge amounts of data to identify patterns and solve problems, is just hitting its stride. The potential applications range from spam detection and voice recognition to health care, construction and self-driving cars.

“It’s really a versatile technology, and we’re going to see more and more startups based on machine learning,” Etzioni said.

He demonstrated one of the applications for the Startup Grind crowd. First, Etzioni played a series of short, narrated video clips advertising vacations, fashions and home loans. Then he asked the audience to guess what innovation was reflected in the clips. Several attendees guessed that the images were assembled by an AI agent, but Etzioni said AI produced the voice rather than the pictures. The videos actually served as a sneak peek at the next-generation text-to-speech conversion program produced by one of the stealthy startups working with AI2.

“The goal isn’t to create commercials,” Etzioni said. “But think about somebody who can’t speak. All they can do is type, but they don’t want to sound like ‘Ste-phen Haw-king’ … with apologies to the late Stephen Hawking. This is really quite natural, and all it requires is to type, and you can get a variety of different voices.”