“Waiting hurts. Forgetting hurts. But not knowing which decision to take can sometimes be the most painful…”
In this article, we shall be covering the following:
Let’s get started!
Imagine a scenario where you are a property broker with a house price list and the houses’ characteristics: number of bedrooms, area, electricity and water availability, etc. What if someone calls you up to…
Have you ever wondered, while working on the ‘house price’ prediction task (price denoted by y) using Linear Regression, whether the price really is a direct linear combination of features such as ‘size of house’, ‘number of bedrooms’, ‘number of neighbours’, etc.? It is absolutely possible that the price is a non-linear function of the features, isn’t it? Quite intuitively, the same idea extends to Logistic Regression as well.
Keeping this in mind, let us first understand how to incorporate these non-linear features using a trick called the ‘Kernel Trick’ and then proceed…
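Before the kernel trick proper, the explicit version of the same idea is worth seeing. Here is a minimal sketch (the quadratic price relation and all numbers are synthetic, invented for illustration): instead of mapping features implicitly through a kernel, we simply append a squared term to the design matrix and run ordinary least squares.

```python
import numpy as np

# Synthetic example (assumed data, not from a real dataset):
# price depends non-linearly on house size.
rng = np.random.default_rng(0)
size = rng.uniform(500, 3000, 50)             # house size in sq. ft.
price = 20 + 0.05 * size + 1e-5 * size**2     # true non-linear relation

# Plain linear regression would use only [1, size];
# adding size**2 as an explicit feature captures the curvature.
X = np.column_stack([np.ones_like(size), size, size**2])
w, *_ = np.linalg.lstsq(X, price, rcond=None)

pred = X @ w
print(np.allclose(pred, price))  # the quadratic model fits the data exactly
```

The kernel trick achieves the effect of such feature maps without ever building the expanded matrix, which matters when the implicit feature space is huge.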
“Garbage in, garbage out” — a commonly used phrase when it comes to data-handling.
In our previous articles, we covered a few algorithms, viz. Logistic Regression and K-Nearest Neighbours (KNN), delving into the pros and cons of each model along with the situations where applying it would yield the best results.
While choosing and applying the right model has a significant impact on accuracy, a clean and reliable dataset often contributes even more to it.
In this article, we will dive deep into various data pre-processing techniques and understand their necessity in different situations.
“Tell me who your friends are and I will tell you who you are”
As the saying goes, “A person is known by the company he keeps”, and it sounds quite intuitive because people who keep the same company share similar interests. The same holds for data points too.
Data points that lie close to a particular data point tend to share similar properties, and there is a high possibility that they belong to the same class. These nearby points are called the “neighbours” of that particular data point. …
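This “majority vote among neighbours” idea can be sketched in a few lines of plain Python. The toy 2-D points and class labels below are made up for illustration, not from any real dataset:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance). A toy sketch, not a
    tuned library implementation."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D data: class 'A' clusters near the origin, 'B' near (5, 5).
train = [((0, 0), 'A'), ((1, 0), 'A'), ((0, 1), 'A'),
         ((5, 5), 'B'), ((6, 5), 'B'), ((5, 6), 'B')]

print(knn_predict(train, (0.5, 0.5)))  # → A
print(knn_predict(train, (5.5, 5.5)))  # → B
```

A query point surrounded by ‘A’ neighbours gets labelled ‘A’, exactly the “known by the company it keeps” intuition.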
What do you do when you see a “You have won a lottery” email? You prefer to report it as ‘spam’ rather than ignore it.
The above image gives an overview of spam filtering. Plenty of emails arrive every day. Some of them go to the spam folder, while the rest remain in the primary inbox. Emails also get classified as Primary, Social, Promotions, Updates, or Forums.
How do mail apps classify them?
The blue box in the middle, the Machine Learning model, decides which mail is spam and which is not. How can a model decide which mail is spam…
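As a hedged illustration of how such a model could work, here is a toy Naive Bayes word-count classifier. The four labelled emails, the whitespace tokenisation, and the add-one smoothing are all simplifying assumptions for this sketch; real spam filters are far more sophisticated:

```python
import math
from collections import Counter

# Tiny invented corpus of labelled emails.
emails = [
    ("you have won a lottery claim your prize", "spam"),
    ("win money now click here", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project report attached please review", "ham"),
]

# Count word occurrences per class.
word_counts = {"spam": Counter(), "ham": Counter()}
class_totals = Counter()
for text, label in emails:
    word_counts[label].update(text.split())
    class_totals[label] += 1

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Pick the class with the higher log-probability, using
    Laplace (add-one) smoothing so unseen words don't zero out."""
    best_label, best_score = None, -math.inf
    for label, counts in word_counts.items():
        score = math.log(class_totals[label] / sum(class_totals.values()))
        total = sum(counts.values())
        for w in text.split():
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("you won a free prize"))      # → spam
print(classify("please review the agenda"))  # → ham
```

The model simply learns that words like “won” and “prize” occur mostly in spam, and votes accordingly.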
Do you remember drawing a straight-line V-I characteristic in your cherished Electrical Lab? Do you remember how you drew it? Let’s brush up on it a bit.
We first set a voltage (the independent variable) and measure the current in the circuit (the dependent variable). We note down this pair (V1, I1), then change the voltage and repeat the process. Once we have enough data points (‘enough’ is an ambiguous term in the field of Machine Learning; you never know how much you require. You must have experienced it when you and the professor did not see…
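The fitting step at the end of this procedure is exactly simple linear regression: find the straight line I = V/R that best matches the collected (V, I) pairs. A minimal sketch, assuming noise-free synthetic measurements of a 2-ohm resistor:

```python
# Synthetic (V, I) measurements, invented for illustration (R = 2 ohm).
voltages = [1.0, 2.0, 3.0, 4.0, 5.0]   # independent variable V
currents = [0.5, 1.0, 1.5, 2.0, 2.5]   # dependent variable I

n = len(voltages)
mean_v = sum(voltages) / n
mean_i = sum(currents) / n

# Closed-form least-squares slope and intercept for a straight line.
slope = (sum((v - mean_v) * (i - mean_i) for v, i in zip(voltages, currents))
         / sum((v - mean_v) ** 2 for v in voltages))
intercept = mean_i - slope * mean_v

print(round(slope, 3), round(intercept, 3))  # → 0.5 0.0  (slope = 1/R)
```

With real, noisy lab readings the line would not pass exactly through every point; least squares gives the best compromise, which is the whole point of regression.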
Have you ever thought about how stock market predictions happen, how a simple web app can label an uploaded image as ‘Dog’ or ‘Cat’, or how Spotify automatically plays songs that match your taste?
Kudos to you if you have already deduced that Machine Learning is at work behind the scenes.
If these questions leave you perplexed, no need to worry. Read till the end, and we hope you’ll be able to get all the answers.
So, coming to the question: what is Machine Learning?