This article is produced by NetEase Smart Studio (public account: smartman163). Focus on AI and read about the next big era!
[NetEase Smart News, December 9] People have all kinds of worries about the "black box" of deep learning, up to the terrifying prediction that artificial intelligence (AI) will rise up and destroy humanity. How do you provide oversight for a system you cannot understand? Yet, as with any other tool, whether black box AI poses a danger depends entirely on how we choose to use it.
News media claim that this AI cannot be trusted, and tell us that experts are seeking to end the government's use of "black box AI." But what exactly is a "black box AI"? When our understanding of a computer system is limited to its inputs and outputs, with no knowledge of how the machine arrives at those outputs, that mysterious middle part lives in a "black box" we cannot see into.
This may sound scary, but as we have explained before, it is not. Some people can perform long numerical calculations in their heads without being able to show how they did it; in that respect they are no different from a "black box AI." Of course, just as a high school algebra teacher asks you to show your work, there are many valid reasons for wanting to know how an AI reaches its output.
The main reason is avoiding bias, which has become the rallying cry of those who believe the government should regulate AI. In its award-winning reporting on biased AI, ProPublica exposed a pernicious, algorithm-driven form of racial discrimination: black men being treated unfairly by the US criminal justice system.
If getting rid of "black box AI" could end racial discrimination, the discussion would be over and we would say: scrap it! But if you want to end "black box AI bias," the simplest way is to follow the advice of experts at the AI Now Institute and stop using it in government functions that require accountability.
Advocating a total ban on "black box AI," however, is like asking researchers to halt cancer-detection research until we fully understand how it all works. If the broader technology community keeps allowing fear-mongering claims that "black box AI" is dangerous to spread, a worrying controversy will follow.
If policy makers believe this and place restrictions on deep learning systems, they must fully recognize that those restrictions may stifle life-saving research. As far as "black box AI" is concerned, the current situation does not call for restricting the technology's development. What we need instead is an understanding of the ethics of using "black box AI."
Here are a few questions that can help you decide whether ignorance is bliss:
1. Some "black box AI" may diagnose cancer more accurately than humans. Would you accept this technological advance even if you could never know how the computer achieves it?
2. You are about to be sentenced, and a computer will decide whether you are sent to prison. It concludes that you should serve 20 years for a first misdemeanor, but no one knows why. Each situation calls for a different approach.
When people are harmed by our lack of understanding, we should avoid using "black box" systems unless we can better eliminate their biases. But when people already face deadly threats, such as the millions of car accidents or cancers that go undiagnosed in time, refusing to use "black box AI" simply because it cannot show how it works would seem even more unethical.
You can conjure a doomsday scenario for any "out-of-control" use of AI, but "black box deep learning" is not some magic that humans are too stupid to understand. It is the result of people taking mathematical shortcuts because we do not have the time to sit down and work through an exponential number of calculations.
Now, before I spoil everyone's mood, I will admit that AI may well rise one day and kill us all in cold blood. But that will not be because of "black box deep learning." What matters is that we do not use AI to make the hard decisions people do not want to make themselves. In dispensing "blind" justice, there should be no room for a computer that can take race into account.
We should continue to develop cutting-edge AI technology, even if that means creating systems that can be misused by evil or foolish people. For the same reason we invented sharper knives and stronger ropes: not so that they could be put to worse use by the wicked or the ignorant, but because of their potential to benefit all of humanity.
The problem with "black box AI" is not how it solves problems, but how we choose to use it. On the other hand, if "black box AI" does end up directly responsible for sending robots to kill us all, it will be our own fault. (Source: TheNextWeb; compiled by NetEase)