Harry Shum is Microsoft’s executive vice president for AI and research. (GeekWire Photo)

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said at the EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up biases from the data on which they’re trained and to intrude on personal privacy through deep data analysis. Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

In addition to pre-release audits, Microsoft is addressing AI’s ethical concerns by improving its facial recognition tools and adding altered versions of photos to its training databases to show people with a wider variety of skin colors, other physical traits and lighting conditions.

Shum and other Microsoft executives have discussed the ethics of AI numerous times before today:

Back in 2016, Microsoft CEO Satya Nadella laid out principles for AI research and development, including the need to guard against algorithmic bias and ensure that humans are accountable for computer-generated actions.

In a book titled “The Future Computed,” Shum and Microsoft President Brad Smith argued for an ethical approach to AI development, supported by industry guidelines as well as government oversight. They wrote that “a Hippocratic Oath for coders … could make sense.”

Shum and Smith also co-chair Microsoft’s internal advisory committee on AI and Ethics in Engineering and Research, or Aether. Last year, Microsoft Research’s Eric Horvitz said the company had turned down some sales of its technology due to the Aether group’s recommendations.
In some cases, he said, specific limitations have been written into product usage agreements, such as a ban on facial-recognition applications.

Shum told GeekWire almost a year ago that he hoped the Aether group would develop exactly the kind of pre-release checklist for AI products that he mentioned today.

Microsoft has been delving into the societal issues raised by AI with other tech industry leaders such as Apple, Amazon, Google and Facebook through a nonprofit group called the Partnership on AI. But during his EmTech Digital talk, Shum acknowledged that governments will have to play a role as well.

The nonprofit AI Now Foundation, for example, has called for tighter government regulation of AI, with special emphasis on applications such as facial recognition and affect recognition. Some researchers have called for creating a corps of technical experts who can assist other watchdog agencies with technical issues, perhaps modeled after the National Transportation Safety Board. Others argue that entire classes of AI applications should be outlawed.

In commentaries published in a British medical journal, experts called on the medical community and the tech community to support efforts to ban fully autonomous lethal weapons. The issue is the subject of an international meeting this week.