An Overview of Classification in Machine Learning

In the general sense, classification is the process of grouping. In data science, classification means grouping data into several categories so that it is easier to process and analyze.

Examples of classification are actually very close to everyday life. One of them appears whenever you use email: every email service has a category called spam.

It holds messages that are considered unimportant or unwanted. Sorting out which messages are spam and which are not is exactly a classification task.

In this example, the classifier in the email system is trained to recognize the variables that mark an email as spam. That way, the system can automatically separate spam from legitimate mail.

Machine Learning Algorithm

A machine learning algorithm is a method by which an artificial intelligence system does its job automatically. Generally, a machine learning algorithm is used to predict an output value from a given input.

The two main tasks machine learning algorithms perform are classification and regression. Machine learning algorithms themselves are divided into two types, namely supervised and unsupervised learning.

Supervised learning requires labeled input and output data: the desired output (label) is provided for each input. Unsupervised learning, by contrast, works with unclassified or unlabeled data.

A typical example of unsupervised learning is clustering: grouping unlabeled data based on similarities and differences.
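To make the contrast concrete, here is a minimal sketch in Python with scikit-learn; the toy data points are invented purely for illustration.

```python
# Contrasting supervised vs. unsupervised learning (toy data).
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]]
y = [0, 0, 1, 1]  # labels exist only in the supervised case

# Supervised: the model learns from input-output pairs (X, y).
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.2, 1.9]]))  # -> [0]

# Unsupervised: the model sees only X and finds groups on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```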

Machine Learning Algorithm Classification

1. Naive Bayes Classifier

A Naive Bayes classifier is a straightforward classification algorithm based on Bayes' theorem. Its defining assumption is that the features describing each data point are independent of one another given the class, which is why the algorithm is called "naive".

That is, the value of one feature is assumed to have no impact on any other. Although the algorithm is relatively simple, Naive Bayes can beat some more sophisticated classification methods.

This algorithm is commonly used for spam detection and classification of text documents.

The advantages of this algorithm are that it is simple and easy to implement, not sensitive to irrelevant features, fast, requires only a small amount of training data, and can be used for both binary and multi-class classification problems.
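As a rough sketch of how this looks in practice, here is a tiny spam classifier built with scikit-learn; the miniature corpus below is invented purely for illustration.

```python
# Naive Bayes spam detection on a tiny invented corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",        # spam
    "limited offer click here",    # spam
    "meeting agenda for monday",   # not spam
    "project report attached",     # not spam
]
labels = ["spam", "ham", ][:1] * 2 + ["ham"] * 2

# Word counts as features; MultinomialNB treats each word as
# conditionally independent given the class.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize offer"]))        # -> ['spam']
print(model.predict(["monday project meeting"]))  # -> ['ham']
```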

2. Logistic Regression

This algorithm is commonly used to estimate probabilities, so the resulting output is always between 0 and 1. An example of its use is assessing credit applications at a bank.

Usually, the bank asks prospective borrowers a series of questions via a questionnaire to assess their eligibility. From the answers, the bank calculates the probability that the applicant will repay the loan.
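A minimal sketch of the credit example with scikit-learn, assuming two made-up applicant features (monthly income in thousands and years employed) and invented repayment labels:

```python
# Logistic regression: estimating repayment probability (toy data).
from sklearn.linear_model import LogisticRegression

X = [[3.0, 1], [4.5, 3], [6.0, 5], [2.0, 0], [8.0, 10], [2.5, 1]]
y = [0, 1, 1, 0, 1, 0]  # 1 = repaid the loan, 0 = defaulted

model = LogisticRegression().fit(X, y)

# predict_proba yields a value between 0 and 1: the estimated
# probability that a new applicant will repay.
applicant = [[5.0, 4]]
print(model.predict_proba(applicant)[0, 1])
```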

3. Decision Tree

A Decision Tree is one of the most popular classification methods because it is easy for humans to interpret. A Decision Tree is a prediction model with a tree-like, hierarchical structure.

This type of machine learning algorithm works like a branched flowchart, applying a series of decision rules at each branch.

Basically, a Decision Tree starts from a single node, the root. The branches leaving a node represent the available options, and each branch can in turn split into new branches.

Therefore, this method is called a "tree" because its shape resembles a tree with many branches. Quoting from Venngage, a Decision Tree has three elements, namely:

  • Root node (root): the ultimate goal or big decision to be made.
  • Branches (twigs): the various action options.
  • Leaf node (leaf): the possible results of each action.
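As an illustration, here is a minimal sketch that trains a small decision tree on the classic Iris dataset with scikit-learn and prints the learned rules as a readable tree:

```python
# A shallow decision tree whose rules can be read by a human.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Each internal node is a rule (a "branch"); each leaf holds a class.
print(export_text(tree, feature_names=iris.feature_names))
```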

4. Random Forest

A Random Forest is an ensemble method built on top of Decision Trees. Instead of relying on a single tree, it trains many trees, each on a random sample of the training data (and typically on a random subset of the features at each split).

To classify a new data point, every tree in the forest makes its own prediction, and the forest combines them by majority vote (or by averaging, for regression tasks).

Because the individual trees differ slightly from one another, their combined prediction is usually more accurate and less prone to overfitting than that of a single Decision Tree.
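A minimal sketch of a random forest on the Iris dataset with scikit-learn, just to illustrate the idea of many trees voting together:

```python
# Random forest: many trees on bootstrap samples, majority vote.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0
)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

print(forest.score(X_test, y_test))  # accuracy on held-out data
```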

5. K-Means Clustering

As the name suggests, this algorithm is commonly used for clustering tasks. K-Means Clustering is one of the simplest and best-known examples of an unsupervised machine learning algorithm.

The K-Means Clustering method attempts to divide the data into several groups, where data points in one group share similar characteristics and differ from the data points in other groups.

The algorithm begins by placing k points, called centroids, where the value of k is the desired number of clusters.

Each data point is then assigned to the group whose centroid is closest to it. As members join each cluster, the cluster's center point (centroid) shifts accordingly.

Each group therefore recomputes its new centroid, and the process repeats until convergence is reached, for example when the centroid positions no longer change.

Two types of clustering are often used in data grouping: hierarchical and non-hierarchical. K-Means is a non-hierarchical, or partitional, clustering method.
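As a rough sketch, here is K-Means in scikit-learn on synthetic two-dimensional blobs (the data is generated purely for illustration):

```python
# K-Means on synthetic 2-D data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# k = 3 centroids; fitting repeats "assign each point to the nearest
# centroid" and "recompute each centroid" until centroids stop moving.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(km.cluster_centers_)  # final centroid positions
print(km.labels_[:10])      # cluster assignment of the first points
```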

6. Hierarchical Clustering

Hierarchical Clustering is a clustering technique whose machine learning algorithm builds a hierarchy of clusters, level by level, so that the result resembles a tree structure.

The grouping is therefore carried out in stages. This method is usually applied to datasets that are not too large, and in cases where the number of clusters to form is not known in advance.

In principle, Hierarchical Clustering groups the data step by step based on the similarity of each data point, so that at the end of the hierarchy, clusters with distinct characteristics have formed and objects within the same group are similar to one another.

In the hierarchical method, there are two grouping strategies: agglomerative and divisive.

Agglomerative Clustering (the merging method) is a hierarchical grouping strategy that starts with each object in its own cluster and then forms increasingly large groups, so the number of initial clusters equals the number of objects. Divisive Clustering (the splitting method), by contrast, starts with all objects grouped into a single cluster and then splits them until each object ends up in its own cluster.
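A minimal sketch of the agglomerative (merging) strategy with scikit-learn; the five 2-D points below are invented for illustration:

```python
# Agglomerative (bottom-up) hierarchical clustering.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[1.0, 1.0], [1.2, 1.1], [5.0, 5.0], [5.1, 4.9], [9.0, 9.0]])

# Starts with each point as its own cluster, then repeatedly merges
# the two closest clusters until 3 clusters remain (Ward linkage).
agg = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(X)
print(agg.labels_)  # e.g. [0 0 1 1 2]; label numbering may vary
```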
