What are the nine misconceptions about machine learning? How many do you know?

When a technology is as hyped as machine learning, misconceptions are bound to arise. What follows is a clear-eyed look at what machine learning can and can't do.

Machine learning has proven very useful, and it is tempting to assume it can solve every problem and apply in every setting. Like any other tool, however, machine learning is useful only in certain areas: especially for problems that have long bothered us but that we could never solve by hiring enough people, and for problems with clear goals but no clear path to a solution.

Each company can exploit the advantages of machine learning in different ways. In a recent survey by management consulting firm Accenture, 42% of business executives said they believe that by 2021 all of their innovation activities will be backed by artificial intelligence. But we benefit greatly if we can see through the hype and avoid the myths born of misunderstanding what machine learning can do.

Myth #1: Machine learning is artificial intelligence

Machine learning and artificial intelligence are often used as synonyms. But while machine learning is the technique that has most successfully moved from the laboratory into the real world, artificial intelligence is a broader field that covers computer vision, robotics, natural language processing, and approaches that do not involve machine learning at all, such as constraint solving. Think of it as anything that makes a machine look smart. None of it is the science-fiction "artificial intelligence" that some people worry will compete with, or even attack, humans.

We should keep a clear and accurate understanding of these buzzwords. Machine learning means learning patterns from large data sets and using them to predict results. The conclusions may look "intelligent," but in reality they are just applied statistics computed at unprecedented speed and scale.

Myth #2: All data is useful

Machine learning needs data, but not all data is useful for it. To train these systems, we need representative data that covers the patterns and results the system will have to handle. The data must be free of incidental correlations (such as photos in which all the boys are standing and all the girls are sitting, or all the cars are in garages while all the bicycles are on muddy trails), because the model will faithfully pick up those overly specific patterns and look for them in new data. All training data must also be clearly labeled, with the labeled characteristics matching the questions the machine learning system will be asked. All of this takes a great deal of work.

Don't take it for granted that the data we have is clean, clear, representative, or easy to label.

Myth #3: We always need a lot of data

Thanks to better tools, computing hardware such as GPUs that can process massive amounts of data in parallel, and large labeled data sets (such as ImageNet and Stanford's question-answering data set), machine learning has made significant progress in image recognition, machine reading comprehension, language translation, and more. And with the technique known as "transfer learning", we do not need a huge data set in our specific domain to achieve excellent results. Instead, we teach a machine learning system how to learn using one large data set, then let it apply that ability to a much smaller training set of our own. This is how the custom vision APIs from Salesforce and Microsoft Azure work: just 30-50 images illustrating each category we want to classify can produce excellent results.

Transfer learning can customize a pre-trained system to our own problem with relatively little data.
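The transfer-learning workflow above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: the "pretrained" feature extractor below is a made-up stand-in for a deep network trained on a large data set, and the classifier simply matches new inputs to the nearest class centroid in feature space, so a handful of labeled examples per class is enough.

```python
import math

# Hypothetical "pretrained" feature extractor. In a real system this would be
# a deep network trained on a large data set (e.g. ImageNet); here we fake it
# with a fixed mapping from raw inputs to feature vectors.
def pretrained_features(x):
    return [math.sin(x), math.cos(x), x % 3.0]

class FewShotClassifier:
    """Nearest-centroid classifier over pretrained features: with good
    features, a few labeled examples per class give reasonable results."""

    def fit(self, xs, labels):
        sums, counts = {}, {}
        for x, y in zip(xs, labels):
            f = pretrained_features(x)
            s = sums.setdefault(y, [0.0] * len(f))
            for i, v in enumerate(f):
                s[i] += v
            counts[y] = counts.get(y, 0) + 1
        # One centroid (mean feature vector) per class.
        self.centroids = {y: [v / counts[y] for v in s] for y, s in sums.items()}
        return self

    def predict(self, x):
        f = pretrained_features(x)
        # Pick the class whose centroid is closest in feature space.
        return min(self.centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(f, self.centroids[y])))

# "Fine-tune" on just two examples per class (inputs near 0 vs. near 10).
clf = FewShotClassifier().fit([0.1, 0.2, 10.1, 10.2], ["a", "a", "b", "b"])
print(clf.predict(0.15))   # a nearby point lands in class "a"
```

The heavy lifting happens in `pretrained_features`; the part we train ourselves is deliberately tiny, which is exactly why so little of our own data is needed.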

Myth #4: Anyone can create a machine learning system

There are many open source tools and frameworks for machine learning, and plenty of training courses that teach how to use them. But machine learning is still a highly specialized skill. We need to know how to prepare data and split it for training and testing; how to choose the best algorithm and which heuristics to apply; and how to turn the result into a reliable production system. We also need to monitor the system to make sure its results stay relevant over time: whether the market has shifted or the system is now serving different kinds of customers, we must keep checking that the model still matches our problem.

Getting machine learning right takes extensive experience. If you are just starting out, consider calling pre-trained models through APIs from your own code while you hire data science and machine learning experts to build custom systems.
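One of the basic skills mentioned above, splitting data for training and testing, can be sketched as follows. This is a generic stdlib-only sketch (the function name and fractions are illustrative, not from any particular library): the point is that the model must be evaluated on examples it never saw during training.

```python
import random

def train_test_split(data, test_fraction=0.2, seed=42):
    """Shuffle a data set and split it so the model can be evaluated on
    held-out examples it never saw during training."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = data[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))   # 80 20
```

Shuffling before splitting matters: if the data arrived sorted (say, by date or by customer), a naive head/tail split would train and test on systematically different populations.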

Myth #5: All patterns in the data are useful

People with asthma, chest pain, or heart disease, and anyone over 100 years old, have a higher survival rate after contracting pneumonia than we would expect. So high, in fact, that a simple machine learning system built to automate hospital admissions might send them home without admitting them (a rule-based system trained on the same data as a neural network did exactly this). The reason these patients survive so often is that pneumonia is so dangerous for them that they are always hospitalized immediately.

The system saw valid patterns in the data, but some of them (though they might help an insurer predict treatment costs) are not useful for deciding who should be hospitalized. More dangerous still, we won't know such useless anti-patterns are lurking in our data set unless we already know about them.

In other cases, a system learns patterns that are valid but still not useful (for example, a controversial facial recognition system that appeared to predict sexual orientation from selfies), because it comes with no clear and explicit explanation (in that case, the photos apparently captured social cues such as pose rather than innate features).

The "black box" model is valid, but we do not know what model they learned. More transparent and easy-to-understand algorithms, such as generalized additive models, make the model more aware of what has been learned, so we can decide if these patterns are useful for deployment.

Myth #6: Reinforcement learning is ready to use

Virtually all machine learning systems in use today rely on supervised learning: they are trained on data sets that have been explicitly labeled, with humans taking part in preparing those data sets. Organizing and curating such data sets is time-consuming and labor-intensive, so there is growing interest in unsupervised learning, and especially in reinforcement learning (RL), in which an agent learns by trial and error, interacting with its environment and receiving rewards for correct behavior. DeepMind's AlphaGo combined reinforcement learning with supervised learning to defeat the Go masters it played; Carnegie Mellon University's Libratus combined reinforcement learning with two other AI techniques to beat top players at heads-up no-limit Texas hold'em. Researchers are now testing reinforcement learning widely, from robotics to security software testing.

Reinforcement learning is still uncommon outside research. Google has saved power by letting DeepMind learn how to cool its data centers more efficiently, and Microsoft uses a contextual-bandits variant of reinforcement learning to personalize headlines for MSN.com visitors. The problem is that real-world environments rarely offer easily discovered rewards and immediate feedback; in particular, an agent may have to take many actions before any reward appears.
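The reward loop at the heart of reinforcement learning can be sketched with an epsilon-greedy bandit, a deliberately simplified stand-in for the contextual-bandit approach mentioned above (no context features, made-up payoff rates): the agent learns which arm, say which headline, pays off best purely from rewards.

```python
import random

def epsilon_greedy_bandit(true_rates, steps=5000, epsilon=0.1, seed=0):
    """Learn which arm yields the most reward by trial and error:
    explore a random arm with probability epsilon, otherwise exploit
    the arm with the best reward estimate so far."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)
    values = [0.0] * len(true_rates)   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))                       # explore
        else:
            arm = max(range(len(true_rates)), key=values.__getitem__)  # exploit
        # Simulated environment: arm pays 1 with its (hidden) true rate.
        reward = 1 if rng.random() < true_rates[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]            # update estimate
    return values

# Arm 2 pays off most often; the agent discovers this from rewards alone.
values = epsilon_greedy_bandit([0.1, 0.3, 0.7])
print(max(range(3), key=values.__getitem__))
```

Note how much this toy setting assumes: a reward arrives immediately after every single action. The article's point is precisely that real environments rarely cooperate like this, which is why RL remains hard to deploy.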

Myth #7: Machine learning is unbiased

Because machine learning systems learn from data, they replicate every bias in the data set. An image search for "CEO" may return mostly pictures of white male CEOs, because more CEOs are white males, and machine learning can even amplify that bias.

The COCO data set, often used to train image recognition systems, contains photos of both men and women; but more of the women's photos have kitchen equipment in the background, while more of the men's photos feature computer keyboards and mice, tennis rackets, or snowboards. A system trained on COCO will therefore link men with computer hardware even more strongly than the data warrants.

One machine learning system can also impose its bias on another. Train a system with a popular framework to represent the relationships between words as vectors, and it may learn stereotypes such as "man is to computer programmer as woman is to homemaker" or "doctor is to boss as nurse is to receptionist". Use such a system for translation, say from a gender-neutral language like Finnish or Turkish into a gendered language like English, and "ta is a doctor" becomes "he is a doctor" while "ta is a nurse" becomes "she is a nurse".

Recommending similar items on a shopping site is useful, but in sensitive areas recommendations can create a feedback loop, and then the trouble starts. Join an anti-vaccination group on Facebook, and Facebook's recommendation engine will suggest groups devoted to other conspiracy theories or to the belief that the earth is flat.

It is important to recognize the problem of bias in machine learning. If we cannot remove the bias from the training data itself, we can use techniques that adjust gendered associations between word pairs to reduce bias, or add unrelated items to recommendations to avoid "filter bubbles".
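The word-pair adjustment mentioned above is often done by vector projection: remove the component of a word's vector that lies along a learned "gender direction". The 3-dimensional vectors below are made up purely for illustration; real embeddings have hundreds of dimensions and the direction is estimated from many gendered word pairs.

```python
# Toy illustration of one debiasing technique: neutralize a word vector
# along a bias direction by subtracting its projection onto that direction.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def remove_component(word_vec, direction):
    """Subtract the projection of word_vec onto direction, leaving a
    vector orthogonal to (i.e. neutral along) that direction."""
    scale = dot(word_vec, direction) / dot(direction, direction)
    return [w - scale * d for w, d in zip(word_vec, direction)]

# Made-up toy vectors: "doctor" starts out leaning toward "he".
he, she = [1.0, 0.0, 0.2], [-1.0, 0.0, 0.2]
gender = [h - s for h, s in zip(he, she)]     # crude gender direction
doctor = [0.4, 0.9, 0.1]
doctor_neutral = remove_component(doctor, gender)
print(round(dot(doctor_neutral, gender), 10))   # → 0.0, no gender lean left
```

After neutralization, "doctor" has zero component along the gender direction, so a downstream system (such as a translator) no longer inherits that particular lean, while the rest of the vector's meaning is untouched.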

Myth #8: Machine learning is only used for good

Machine learning strengthens anti-virus tools, helping them spot new attack behavior as soon as it appears. But attackers use machine learning too: to probe the defenses of anti-virus tools, and to analyze large amounts of public data or previously successful phishing attempts in order to mount large-scale, targeted phishing attacks.

Myth #9: Machine learning will replace humans

It is a common worry that artificial intelligence will take our jobs. It will certainly change what we do and how we do it: machine learning improves efficiency and compliance while reducing costs. In the long run, it will create new jobs while eliminating some current ones. Many tasks now automated with machine learning were previously unthinkable to automate because of their complexity or scale. For example, we could never hire enough people to look at every photo posted to social media and check whether it features our company's brand.

Machine learning is already creating new kinds of work, such as improving the customer experience through predictive maintenance and informing and supporting business decisions. As with earlier waves of automation, machine learning frees employees to apply their expertise and creativity.
