Hi ML enthusiasts! Today we begin our journey into Machine Learning for Analytics. It is a fascinating and much-discussed subject. Let's get started with a quote:

The incredible thing about the human mind is that it didn’t come with an instruction book.

-Terry Riley

The quote above points to the learning ability of the human mind. How? Well, when you try things out by yourself without referring to any instruction booklet, you sometimes succeed and sometimes fail, and along the way you learn a great deal from your mistakes, your successes, and your failures. Right? That's the magic of our minds! The human mind is a beautiful and valuable asset given to us by nature.

Without doubt, the human mind's most beautiful quality is its ability to learn from experience. Experience teaches us a lot; it is the king everywhere.

There are three classes of **learning from experience**.

**Remembering**: Whenever we do something, it gets stored in our memory. If we get into some situation, act, and it turns out to be correct, we try to do the same again when a similar situation arises. And if it turns out the other way, we try not to repeat it. Why? Because we have the ability to remember what went right and what went wrong! That's the power of experience.

**Adapting**: If something went wrong in a previous situation, we try to adapt and find new solutions that can help us solve the problem. That is the power of adapting, and again, of experience.

**Generalizing**: I look at two different things and happen to spot some sort of pattern or similarity between them. My mind can then predict things about one from the details of the other by learning those patterns. That is the idea of generalizing!

Just like our minds, we try to teach our machines too about learning from experience. We have three types of machine learning in place for our machines. They are:

**Supervised learning**: In this type of learning, training data in the form of inputs and targets (desired/correct outputs) is fed to an algorithm. By taking the patterns and similarities in the data into account and generalizing from them, the algorithm forms some sort of equation or model. We then use this model to predict the outputs for new/unknown input variables or features.

**Unsupervised learning**: In this type of learning, no correct targets are provided. Only inputs are given, and the machine is left to learn everything on its own by generalizing. Input features that have something in common or bear some similarity are grouped together. One statistical approach used for this is known as density estimation.

**Reinforcement learning**: This lies somewhere between the two types above. The model is told when an answer is wrong, but it is not told how to correct it; it is left on its own to fix its incorrect approaches. For this reason, it is also called learning with a critic.

Let’s talk about supervised learning first. The data is represented as pairs (x_i, t_i), where the x_i are the inputs, the t_i are the target outputs (the correct/expected outputs), and i is an index running from 1 to N.
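To make the (x_i, t_i) notation concrete, here is a tiny sketch of a supervised dataset in Python. The inputs (hypothetical hours studied) and targets (exam scores) are made up purely for illustration:

```python
# A toy supervised dataset: each pair is (x_i, t_i),
# input x_i with its target output t_i, for i = 1..N.
data = [(1.0, 52.0), (2.0, 61.0), (3.0, 70.0), (4.0, 79.0)]

inputs = [x for x, t in data]    # the x_i
targets = [t for x, t in data]   # the t_i
N = len(data)                    # number of training pairs
```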

Supervised learning covers two main problem types: regression and classification.

In **regression**, we try to find a mathematical equation/function that approximates the set of points described above. This function is a curve that we can then use to make predictions for new inputs. This is a problem of function approximation or interpolation.
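As a minimal sketch of regression as function approximation, the snippet below fits a straight line t ≈ a·x + b to a few made-up points by least squares, then uses the fitted curve to predict a new input:

```python
# Toy data points (values are invented for illustration).
xs = [1.0, 2.0, 3.0, 4.0]
ts = [2.1, 3.9, 6.2, 7.8]

# Closed-form least-squares fit of a line t = a*x + b.
n = len(xs)
mean_x = sum(xs) / n
mean_t = sum(ts) / n
a = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, ts)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_t - a * mean_x

def predict(x):
    """Use the fitted curve to predict the output for a new input."""
    return a * x + b
```

This is the simplest possible regression model; later tutorials will build richer ones, but the idea of fitting a function to (input, target) pairs is the same.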

In **classification**, we try to group elements into different classes based on the similar patterns found between them. We take input vectors and decide which class each one belongs to. Note that the classes are discrete, each input element belongs to exactly one class, and this set of discrete classes covers the whole output space.

Having talked about all this, one more thing worth mentioning is **fuzzy classifiers**. To understand the situation, consider the tomato. Yes, the tomato! It is called a fruit but used as a vegetable, so we could classify it both as a fruit and as a vegetable. (Please don't get into a debate with me about whether the tomato is a fruit or a vegetable; just follow the analogy.) This input breaks our rule that says one input belongs to one class and not multiple classes. Fuzzy classifiers try to solve this problem of membership in multiple classes.

There is also a term called **novelty detection**: detecting inputs that shouldn't be part of the data provided to a machine. Of course, the machine should detect them, not us. Suppose we have built a model that identifies the breed of a dog from a photo we provide as input. Now, by mistake, we feed it a photo of a cat. If the machine is quick enough to raise an alarm and say, "hey, that's not what I should be given", it has successfully carried out novelty detection.
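One very simple way to sketch novelty detection (just an illustrative threshold rule, not the method a real breed classifier would use): flag any input that lies farther from the training data's mean than every training point does:

```python
# Toy one-dimensional training data (values are illustrative).
train = [2.0, 2.5, 3.0, 3.5, 4.0]

mean = sum(train) / len(train)
# Threshold: the largest distance any training point has from the mean.
threshold = max(abs(x - mean) for x in train)

def is_novel(x):
    """Raise the alarm if the input lies outside the familiar range."""
    return abs(x - mean) > threshold
```

An input like 10.0 would trigger the alarm, while 3.5 would pass quietly.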

The last thing we will talk about is the **curse of dimensionality**: as the number of input dimensions (the number of variables/features) increases, the number of data points required to build our model increases too, and very quickly.
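A rough way to see why: if covering one dimension adequately takes k sample points, covering d dimensions at the same density takes on the order of k**d points (k = 10 below is just an illustrative choice):

```python
# Number of points needed to sample each axis at the same density.
k = 10

# Points needed for 1, 2, 3 and 4 input dimensions: k**d grows fast.
points_needed = [k ** d for d in range(1, 5)]
```

Going from 1 feature to 4 already multiplies the data requirement a thousandfold, which is the curse in action.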

One more concept I want to cover here is **training data and testing data**. We give the algorithm only the training data, which it uses to generalize and come up with a function/equation. We then use the test data to check whether the outputs the model predicts for the test inputs match the corresponding target outputs. Test data thus helps us calculate the accuracy of our model, so any data that comes to us has to be divided into training data and test data.
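The bookkeeping of a train/test split can be sketched as follows. The "model" here just re-derives each label directly, purely to show how accuracy is computed against held-out targets:

```python
# Toy labelled data: each number is labelled "even" or "odd".
data = [(x, "even" if x % 2 == 0 else "odd") for x in range(10)]

# Split: first 7 pairs for training, last 3 held out for testing.
train, test = data[:7], data[7:]

def model(x):
    """A stand-in predictor; a real model would be fit on `train`."""
    return "even" if x % 2 == 0 else "odd"

# Accuracy: fraction of test targets the model predicts correctly.
correct = sum(model(x) == t for x, t in test)
accuracy = correct / len(test)
```

In practice the split is randomized; the key point is that the model never sees the test targets while learning.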

Bravo! You have learnt a lot today. These are the basics you should know in order to follow the upcoming tutorials. In the next tutorial, we will cover the concept of neural networks, and after that we will dive into coding neural network algorithms. Stay tuned!

For more updates and news related to this blog, as well as to data science, machine learning, and data visualization, please follow our Facebook page via this link.
