Keywords: deep learning, algorithm, artificial intelligence, solution, IBM, neural network
Transparency and ethical concerns around artificial intelligence are attracting growing attention, prompting cloud computing providers to introduce new tools that explain the decision-making process behind AI algorithms.
Executives in heavily regulated industries such as accounting and finance say it is critical that both data scientists and non-technical business managers understand the processes behind algorithmic decisions. Such understanding can go a long way toward preventing potential ethical and regulatory violations, especially as enterprise-level AI algorithms become more common.
Vinodh Swaminathan, executive director of intelligent automation, cognitive and artificial intelligence at KPMG's Innovation and Enterprise Solutions division, said: "I don't think AI in the enterprise can scale beyond hundreds of pilot applications unless it has this ability to explain itself."
The push for AI interpretability has prompted companies such as IBM and Google to add transparency and ethics tools to their cloud AI service offerings. For example, a recent study by the IBM Institute for Business Value surveyed 5,000 corporate executives; about 60% of respondents said they were concerned about being able to explain how AI uses data to make decisions in order to meet regulatory and compliance standards, up sharply from 29% in 2016.
Even to data scientists and the corporate executives involved, the decision-making process of an AI system can be a "black box." This is especially true of deep learning tools such as the neural networks used for pattern recognition, which attempt to mimic how the human brain works. Although these systems can reach conclusions with unprecedented accuracy and speed, it is not always clear how the network arrives at a specific decision.
KPMG's in-house data scientists are developing their own interpretability tools, and the firm is also leveraging IBM's new transparency tools. Swaminathan said the goal is to ensure that both technical and business employees can "open the black box" and understand exactly how an AI algorithm reaches its conclusions.
Last week IBM released new cloud-based AI tools that show users which major factors influenced a recommendation made by an AI system. The tools also analyze AI decisions in real time to identify inherent bias and recommend data and methods for addressing it. David Kenny, IBM's senior vice president of cognitive solutions, said the tools can be used with IBM's AI services as well as with tools from other cloud providers such as Google.
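The idea of surfacing the "major factors" behind a decision can be illustrated with a simple, model-agnostic attribution sketch. The following Python example is a rough illustration under stated assumptions, not IBM's actual tooling: it treats a small neural network as the black box, perturbs one input feature at a time, and measures how much the predicted probability moves. All names here (`factor_importance`, the toy dataset) are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy setup: a small neural network acting as the "black box".
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)

def factor_importance(model, x, background, n_samples=200, rng=None):
    """Rough per-feature attribution for one decision: replace each feature
    with values drawn from background data and measure how much the
    predicted probability shifts. Larger shifts suggest a bigger influence."""
    rng = rng or np.random.default_rng(0)
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(x.size):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] = rng.choice(background[:, j], size=n_samples)
        shift = np.abs(model.predict_proba(perturbed)[:, 1] - base).mean()
        scores.append(shift)
    return np.array(scores)

sample = X[0]
print("approximate factor influence:", factor_importance(model, sample, X).round(3))
```

Production explainability tools go much further (real-time monitoring, bias detection, remediation suggestions), but the underlying question is the same: how much does each input factor move the model's output for a given decision?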
According to a blog post by Google research scientists and software engineers, Google began releasing new tools for its open-source machine learning code last year "to meet policy goals such as interpretability as part of broader research." Another AI tool released by Google earlier this month lets non-programmers inspect and debug machine learning systems and assess the fairness of an algorithm.
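As a generic illustration of what such a fairness check might examine (a minimal sketch, not Google's actual tool), one common starting point is to compare a model's positive-prediction rates across groups. The data and function name below are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups (0 and 1).
    A large gap is a signal to investigate, not proof of bias on its own."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1), rate_0, rate_1

# Hypothetical model outputs and a sensitive attribute for each case.
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap, r0, r1 = demographic_parity_gap(preds, groups)
print(f"group 0 positive rate: {r0:.2f}, group 1: {r1:.2f}, gap: {gap:.2f}")
```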
Microsoft provides AI services through Azure. A company spokesperson said AI systems must be designed with fairness, transparency, and security safeguards to earn trust, and that Microsoft is making ongoing efforts in this area. Amazon did not respond to requests for comment.
In the highly regulated financial industry, Capital One Financial Corp. and Bank of America are studying how to explain the rationale behind the answers that AI algorithms produce. Both companies say their goal is to use AI to improve fraud monitoring, but they first need to understand how these algorithms work.