Introduction to Regression Analysis
How Does Regression Analysis Work?
There are many types of regression techniques, chosen according to the data and the desired outcome.
- Linear Regression
- Logistic Regression
- Lasso/Ridge Regression
- Polynomial Regression
Some of the most important regression techniques used across various sectors are described below:
1. Linear Regression
This is used when the outcome variable depends linearly on the independent variables. It is normally used when the data set is not huge. It is also sensitive to outliers, so if the data set contains outliers, it is better to treat them before applying linear regression. There are single-variable and multi-variable regression techniques. Simple Linear Regression is the analysis where the outcome variable depends linearly on a single independent variable. It follows the equation of a straight line, given below:

Y = mx + c
- Y= Target, Dependent, or Criterion Variable
- x= Independent or predictor variable
- m= Slope or Regression Coefficient
- c= constant
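As a minimal sketch, the slope and intercept of a simple linear regression can be computed in closed form with NumPy (the hours-vs-score data below is purely hypothetical):

```python
import numpy as np

# Hypothetical data: hours studied (x) vs. exam score (y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([52.0, 55.0, 61.0, 64.0, 68.0])

# Closed-form least-squares estimates for the line Y = mx + c
m = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
c = y.mean() - m * x.mean()

print(m, c)  # slope (regression coefficient) and constant
```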
Multi-Variable Linear Regression defines the relationship between the outcome variable and more than one independent variable. It follows the below equation of a straight line, where the dependent variable is a linear combination of all the independent variables:

Y = m1x1 + m2x2 + m3x3 + … + mnxn + c
- Y= Target, Dependent, or Criterion Variable
- x1, x2, x3…xn= Independent or predictor variables
- m1, m2, m3…mn= Slope or Regression Coefficients of respective variables
- c= constant
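A minimal sketch of fitting such a multi-variable model uses NumPy's least-squares solver on hypothetical data; the appended column of ones lets the constant c be estimated along with the coefficients:

```python
import numpy as np

# Hypothetical data: two predictors x1, x2 and an outcome y
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0]])
y = np.array([6.0, 7.0, 12.0, 13.0])  # generated as y = 2*x1 + 1*x2 + 2

# Append a column of ones so the constant c is estimated as well
A = np.hstack([X, np.ones((X.shape[0], 1))])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
m1, m2, c = coefs  # recovers the coefficients 2, 1 and the constant 2
```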
Linear Regression follows the principle of the Least Squares method: the line of best fit is chosen where the sum of squared errors between the observed data and the line is minimum.
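The least-squares principle can be illustrated with a short sketch on hypothetical data: the sum of squared errors is zero for the line the points were drawn from, and larger for any other candidate line.

```python
# Sum of squared errors of a candidate line y = m*x + c (hypothetical data)
def sse(m, c, xs, ys):
    return sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys))

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # lies exactly on y = 2x + 1

best = sse(2, 1, xs, ys)     # 0: the line of best fit
worse = sse(2.5, 0, xs, ys)  # any other line gives a larger sum
```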
There are some assumptions that should be verified before applying linear regression to a dataset.
- There should be a linear relationship between independent and dependent variables.
- There should be no or little multicollinearity between the independent variables. Multicollinearity is a phenomenon where there is a high correlation between the independent variables. We can treat multicollinearity by dropping one of the correlated variables or by combining the two correlated variables into one.
- Homoscedasticity: the error terms should be randomly distributed around the regression line, with no discernible pattern. If some pattern is identified, the data is said to be heteroscedastic.
- All the variables should be normally distributed, which we can check by plotting a Q-Q plot. If the data is not normally distributed, we can apply a nonlinear transformation to treat it.
So, it is always advisable to test these assumptions before applying linear regression in order to get good accuracy and correct results.
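As an illustration, one common assumption check is looking for multicollinearity via pairwise correlations. In this hypothetical sketch, x2 is nearly an exact multiple of x1, so one of the pair should be dropped or the two combined:

```python
import numpy as np

# Hypothetical predictors: x2 is almost an exact multiple of x1
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = 2.0 * x1 + np.array([0.001, -0.002, 0.003, -0.001, 0.002])
x3 = np.array([5.0, 1.0, 4.0, 2.0, 3.0])

r12 = np.corrcoef(x1, x2)[0, 1]  # near 1.0: strong multicollinearity
r13 = np.corrcoef(x1, x3)[0, 1]  # weak correlation: both can stay
```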
2. Logistic Regression
This regression technique is used when the target or outcome variable is categorical or binary in nature. The main difference between linear and logistic regression lies in the target variable: in linear regression it should be continuous, whereas in logistic regression it should be categorical. The outcome variable should have only two classes, not more. Some examples are spam filters in email (spam or not), fraud detection (fraud or not fraud), etc. Logistic regression works on the principle of probability: observations can be classified into two categories by setting a threshold value.
For example: if there are two categories A and B and we set the threshold value at 0.5, then probabilities above 0.5 will be assigned to one category and those below 0.5 to the other. Logistic Regression follows an S-shaped curve. Before building the logistic regression model, we have to split the data set into training and testing sets. Since the target variable is categorical or binary, we have to make sure that there is a proper class balance in the training set.
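The thresholding described above can be sketched in a few lines; the sigmoid function below produces the S-shaped curve, and the category names A and B follow the example (a hypothetical sketch, not a full model):

```python
import math

def sigmoid(z):
    # Maps any real-valued score to a probability in (0, 1) - the S-shaped curve
    return 1.0 / (1.0 + math.exp(-z))

def classify(z, threshold=0.5):
    # Category "A" if the probability exceeds the threshold, else "B"
    return "A" if sigmoid(z) > threshold else "B"
```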
If there is class imbalance, it can be treated using the methods mentioned below:
- Up Sampling: In this technique, the class which has fewer rows is sampled with replacement to match the number of rows of the majority class.
- Down Sampling: In this technique, the class which has more rows is sampled down to match the number of rows of the minority class.
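Both techniques can be sketched with the standard library alone; the rows below are a hypothetical imbalanced training set of (id, label) pairs:

```python
import random

random.seed(0)  # for reproducibility

# Hypothetical imbalanced training set: 6 rows of class 0, 2 rows of class 1
rows = [(i, 0) for i in range(6)] + [(10 + i, 1) for i in range(2)]
majority = [r for r in rows if r[1] == 0]
minority = [r for r in rows if r[1] == 1]

# Up sampling: draw minority rows with replacement up to the majority count
upsampled = majority + random.choices(minority, k=len(majority))

# Down sampling: draw a subset of majority rows down to the minority count
downsampled = random.sample(majority, k=len(minority)) + minority
```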
There are some important points to understand before applying the logistic regression model to a data set:
- The target variable should be binary in nature. If there are more than two classes in the target variable, the technique is known as Multinomial Logistic Regression.
- There should be no or little multicollinearity between the independent variables.
- It requires a large sample size to work well.
- There should be a linear relationship between the independent variables and the log of odds.
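The log of odds (the logit) mentioned in the last point can be written out directly; logistic regression assumes this quantity, not the probability itself, is linear in the predictors:

```python
import math

def log_odds(p):
    # Logit of a probability p: log(p / (1 - p))
    return math.log(p / (1 - p))

# At p = 0.5 the odds are even, so the log of odds is 0;
# it is positive for p > 0.5 and negative for p < 0.5
```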
Benefits of Regression
There are many benefits of regression analysis. Instead of relying on gut feeling to predict an outcome, we can use regression analysis and present valid evidence for possible outcomes.
Some of those are listed below:
- To predict the sales and revenue in any sector for shorter or longer periods.
- To predict the customer churn rate in any industry and find suitable measures for reducing it.
- To understand and predict the inventory levels of the warehouse.
- To find whether introducing a new product in the market will be successful or not.
- To predict whether a customer will default on a loan.
- To predict whether a customer will buy a product.
- To detect fraud or spam.
There are various evaluation metrics that are considered after applying the model. Although some assumptions need to be tested before applying the model, we can always modify the variables using various mathematical methods and improve model performance.
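For a binary classifier like logistic regression, a few of those evaluation metrics can be computed directly from the confusion counts; the labels below are a hypothetical example:

```python
# Hypothetical actual vs. predicted labels for a binary classifier
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

pairs = list(zip(actual, predicted))
tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # true positives
fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # false negatives
tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(pairs)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
```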
This is a guide to Regression Analysis. Here we discuss the introduction to regression analysis, how regression analysis works, and the benefits of regression. You can also go through our other suggested articles to learn more –