We have all seen movies in which machines take control of the world and humanity is destroyed. Fortunately, those films are only entertainment, and most people assume such scenarios will never happen. A real problem that deserves far more concern, however, is algorithmic bias.
Algorithmic bias
"Algorithmic bias" refers to the way a seemingly innocuous program can embed the prejudices of its creators, or of the data it is fed. The results, of course, are problems of every kind: skewed Google search results, qualified candidates barred from medical school, and chatbots spreading racist and sexist messages on Twitter.
What makes algorithmic bias especially intractable is that engineers can introduce it even when they harbor no conscious racism, sexism, or ageism. Artificial intelligence (AI) is essentially designed to learn on its own, and sometimes it does go wrong. People can of course make corrections after the fact, but the best solution is to prevent bias from arising in the first place. So how can we keep artificial intelligence free of prejudice?
Ironically, one of the most exciting possibilities of artificial intelligence is a world without human bias. In hiring, for example, an algorithm could give men and women applying for the same job equal consideration, or help prevent racial prejudice in police work.
Whether or not people realize it, the machines humans create reflect how their creators see the world, and they inherit the same stereotypes and worldviews. As artificial intelligence reaches deeper into everyday life, humans must take this seriously.
Types of bias
Another challenge for artificial intelligence is that bias does not come in a single form but in several varieties, including interaction bias, subconscious bias, selection bias, data-driven bias, and confirmation bias.
"Interaction bias" refers to the bias that the user generates due to the way the user interacts with the algorithm. When machines are set to learn from their surroundings, they cannot decide which data to keep or discard, what is right, and what is wrong. Instead, they can only use the data provided to them—whether good, bad, or ugly—and make decisions based on them. The aforementioned chatter Tay is an example of this type of bias. It has become racist by the influence of a web chat community.
"Subconscious bias" means that the algorithm incorrectly links ideas to factors such as race and gender. For example, when searching for an image of a doctor, artificial intelligence presents the image of the male doctor to a woman, or vice versa when searching for a nurse.
"Selective bias" means that the data used to train the algorithm is used to represent a group or group, thereby making the algorithm beneficial to these groups at the expense of other groups. In the case of recruitment, if artificial intelligence is trained to recognize only male resumes, female job seekers will be difficult to succeed in the application process.
"Data-driven bias" means that the raw data used to train the algorithm is already biased. Machines are like children: they don't question the data given, they just look for patterns. If the data is misinterpreted at the beginning, the output will also reflect this.
The last category is "confirmation bias," which is similar to data-driven bias in that it favors preconceived ideas. It affects both how people gather information and how they interpret it. For example, if you believe that people born in August are more creative than those born in other months, you will tend to seek out data that reinforces that idea.
Learning how many kinds of bias can creep into artificial intelligence systems may seem alarming. But it is important to face the facts and remember that the world itself is biased, so in some cases the results we get from AI should not surprise us. Still, it does not have to be this way: we need a process for testing and validating AI algorithms and systems, so that bias is detected early in development and before deployment.
Testing and validating AI algorithms and systems
Unlike humans, algorithms cannot lie, so if a result is biased there must be a reason: the data the algorithm received. A human can invent an excuse for why someone was not hired; an artificial intelligence cannot. With algorithms, it is possible to detect when bias occurs and to adjust for it so that it is overcome in the future.
Artificial intelligence learns, and it makes mistakes. Often an inherent bias is discovered only once the algorithm is used in a real environment, where the bias gets amplified in practice. At that point the algorithm should be seen not as a threat but as an opportunity: a chance to uncover any prejudice and correct it where necessary.
Systems can be developed to identify biased decisions and act on them promptly. Compared with humans, artificial intelligence is particularly well suited to applying Bayesian methods to estimate the probability of a hypothesis, stripping human bias out of the estimate. This is complicated but feasible, especially given how important artificial intelligence already is (and it will only grow more important in the coming years).
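As a worked illustration of the Bayesian idea (all the probabilities below are invented for the example), this snippet updates the probability of a hypothesis such as "this candidate is qualified" from a prior and one observed piece of evidence, using Bayes' theorem.

```python
# Worked Bayes' theorem example (all probabilities invented):
# P(H | E) = P(E | H) * P(H) / P(E), with P(E) expanded by total probability.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # P(E) = P(E|H) * P(H) + P(E|~H) * P(~H)
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# Hypothesis H: "the candidate is qualified". Evidence E: "passed the screen".
prior = 0.30               # assumed base rate of qualified candidates
p_pass_if_qualified = 0.90
p_pass_if_not = 0.20
print(posterior(prior, p_pass_if_qualified, p_pass_if_not))  # ~0.66
```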
As artificial intelligence systems are built and deployed, it is essential to understand how they work; only then can they be deliberately designed to avoid bias. Remember, too, that although artificial intelligence is developing very rapidly, it is still in its infancy, with much left to learn and improve. The adjustment will take a while, and during that time AI will grow smarter and more and more ways of overcoming bias and related problems will emerge.
The technology industry constantly asks how machines work and why. Since most artificial intelligence operates as a black box whose decision-making process is hidden, transparency is the key to building trust in AI and avoiding misunderstanding.
A number of studies aimed at identifying bias are currently under way, such as the work at the Fraunhofer Heinrich Hertz Institute. Researchers there are working to identify different types of bias, including the kinds discussed above as well as more "low-level" ones, together with problems that can arise during AI training and development.
Unsupervised training also deserves consideration. Most current artificial intelligence models are produced by supervised training, on collections of labeled data that humans have explicitly selected. In unsupervised training, the data carries no labels, and the algorithm must classify, identify, and cluster it on its own. Although this approach is often orders of magnitude slower than supervised learning, it limits human involvement and therefore keeps conscious or unconscious human biases out of the data.
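A minimal sketch of the unsupervised idea, using k-means from scikit-learn on synthetic, unlabeled points: no human ever assigns a label, so no labeling bias enters, although the choice of features and of the cluster count still reflects human decisions.

```python
# Sketch: unsupervised clustering of unlabeled data with k-means.
# No human-provided labels are involved; groupings emerge from the data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic, unlabeled 2-D points drawn from two blobs.
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3, 3], scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # cluster ids found by the algorithm
print(kmeans.cluster_centers_)                  # centers near (0, 0) and (3, 3)
```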
Much can also be improved at the human level. When developing new products, websites, or features, technology companies need people from all backgrounds. A diverse team brings a wider variety of data and perspectives to the algorithm, and with more kinds of people able to scrutinize the output, the chances of spotting a bias are higher.
Algorithmic audits can also play a role. In 2016, a Carnegie Mellon research team uncovered algorithmic bias in online job advertising: among users browsing job listings, Google showed ads for high-paying jobs to men nearly six times as often as to women. The team concluded that internal audits would help reduce this kind of bias.
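A hedged sketch of what such an audit might compute (the log format and numbers below are invented, not the Carnegie Mellon team's method): compare the rate at which an ad is shown to each group and flag large disparities.

```python
# Sketch (invented log data): a minimal ad-delivery audit comparing how
# often a high-paying job ad is shown to each gender group.
impressions = [
    {"gender": "male", "ad": "high_paying_job"},
    {"gender": "male", "ad": "high_paying_job"},
    {"gender": "male", "ad": "other"},
    {"gender": "female", "ad": "other"},
    {"gender": "female", "ad": "other"},
    {"gender": "female", "ad": "high_paying_job"},
]

def ad_rate(group, ad):
    shown = [r for r in impressions if r["gender"] == group]
    return sum(r["ad"] == ad for r in shown) / len(shown)

male_rate = ad_rate("male", "high_paying_job")
female_rate = ad_rate("female", "high_paying_job")
print(f"disparity ratio: {male_rate / female_rate:.2f}")  # flag if far from 1.0
```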
Conclusion
In short, machine bias comes from human bias. The bias of artificial intelligence can show itself in many ways, but in reality it has only one source: human beings themselves.
The key to addressing the problem lies with technology companies, engineers, and developers, who should take concrete steps to avoid inadvertently creating biased algorithms. By auditing algorithms and maintaining transparency throughout, we can be confident of keeping artificial intelligence free of prejudice.