In any data science task, after preparing and understanding the data, data scientists want to know which features/attributes can be extracted from it: how many categorical variables and how many numerical variables the dataset contains. In this blog we will talk about numerical data only, since we want to understand how statistical methods help us summarise and understand data better.
I will focus on the various statistical techniques that exist and when to apply them to get a particular outcome from a given dataset.
Here are the topics to be covered:
1) Summary statistics
2) Sampling methods
3) Hypothesis testing
4) Estimation statistics
1) Summary statistics:
Summary statistics offers some very basic methods to summarise a given data distribution, such as:
1) Mean
2) Median
3) Mode
4) Standard deviation
5) Variance
6) Range
7) Percentiles
8) Interquartile range
9) Min/Max values
Generally, the mean and standard deviation are useful only for normally distributed data. One of the most frequently used summaries is the five-number summary, which includes the min, max, and the 25th, 50th, and 75th percentiles.
The five-number summary can be presented as a box-and-whisker plot to visualize the data distribution, its range, and any outliers.
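To make this concrete, here is a minimal sketch in Python using NumPy on synthetic data (the sample values are made up purely for illustration):

```python
import numpy as np

# Synthetic data for illustration (hypothetical values)
rng = np.random.default_rng(42)
data = rng.normal(loc=50, scale=10, size=1000)

# Central tendency and spread
print("Mean:", np.mean(data))
print("Median:", np.median(data))
print("Std dev:", np.std(data, ddof=1))
print("Variance:", np.var(data, ddof=1))
print("Range:", np.ptp(data))

# Five-number summary: min, 25th, 50th, 75th percentiles, max
q = np.percentile(data, [0, 25, 50, 75, 100])
print("Five-number summary:", q)
print("IQR:", q[3] - q[1])
```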
2) Sampling Methods:
Data is everything in any data science task; without data, how can we extract insights? This is where data sampling comes into the picture. We usually have a lot of data available for training and testing purposes, and we want to find the best data sample, one that covers the full set of features without bias.
Ultimately we use sample data to estimate population parameters and get an idea about the population. There are various sampling methods in classical statistics, but in machine learning we typically use historical data as our sample to train and test the model, as in the sketch below.
Sometimes we use multiple samples to train the model so that we can get optimal and more accurate predictions on test data.
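As a quick illustration, here is a minimal sketch of drawing a random train/test sample with scikit-learn's train_test_split; the feature matrix and labels are hypothetical placeholders standing in for historical data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical features and labels standing in for historical data
X = np.arange(100).reshape(50, 2)
y = np.arange(50)

# Draw a random 80/20 train/test split; for classification targets,
# stratified splitting helps keep class proportions unbiased
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
print(X_train.shape, X_test.shape)
```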
Cross-validation is a method of applying a machine learning model to multiple samples of the data to measure its accuracy and skill.
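For example, a minimal sketch of cross-validation with scikit-learn's cross_val_score, assuming a simple logistic regression on the built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Load a small built-in dataset and evaluate a model on 5 folds
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Each fold's held-out sample gives one accuracy estimate
scores = cross_val_score(model, X, y, cv=5)
print("Fold accuracies:", scores)
print("Mean accuracy:", scores.mean())
```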
3) Hypothesis testing:
Statistical hypothesis testing is used to test the statistical significance of a particular data sample. In hypothesis testing, we assume the null hypothesis is true and compute a test statistic, which gives us a p-value: the probability of observing a result at least this extreme if the sample were due to chance alone. If the p-value is higher than our significance level alpha, the observed result is consistent with chance, so we fail to reject the null hypothesis. If the p-value is at or below alpha, the result is statistically significant, so we reject the null hypothesis and accept the alternative hypothesis.
H0: there is no difference between our sample statistic and the population parameter.
Ha: there is a difference from the population parameter.
For the significance level alpha, we generally take 0.05 in most cases.
For hypothesis tests, we have the standard Python library SciPy, with which we can run hypothesis tests on various test datasets.
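As an example, here is a minimal sketch of a one-sample t-test with scipy.stats on synthetic data; the assumed population mean of 50 and the sample values are made up for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic sample for illustration; suppose the population mean is 50
rng = np.random.default_rng(0)
sample = rng.normal(loc=52, scale=10, size=40)

# One-sample t-test: H0 says the sample mean equals 50
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

alpha = 0.05  # conventional significance level
if p_value <= alpha:
    print(f"p = {p_value:.4f} <= {alpha}: reject H0")
else:
    print(f"p = {p_value:.4f} > {alpha}: fail to reject H0")
```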
4) Estimation statistics:
Estimation statistics is a branch of inferential statistics in which we try to estimate population parameters from a sample of data. There are three kinds of intervals in estimation statistics:
1) Prediction intervals
2) Confidence intervals
3) Tolerance intervals
In very basic terms, we are trying to find, from sample statistics, an interval that the population parameter will fall into. We use z-statistics or t-statistics to compute such intervals.
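For instance, a minimal sketch of computing a 95% confidence interval for a population mean from a sample, using the t-distribution via scipy.stats (the sample itself is synthetic):

```python
import numpy as np
from scipy import stats

# Synthetic sample for illustration
rng = np.random.default_rng(1)
sample = rng.normal(loc=100, scale=15, size=30)

# 95% confidence interval for the population mean using the
# t-distribution (sample std dev, n - 1 degrees of freedom)
mean = np.mean(sample)
sem = stats.sem(sample)                        # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(sample) - 1)  # two-sided critical value
ci = (mean - t_crit * sem, mean + t_crit * sem)
print("95% CI for the mean:", ci)
```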
So the above are the basic statistical methods that we use extensively in data science and machine learning to get insights out of data.
Thank you for taking the time to read this page. If you want to connect with me, here's my email: avikumar.talaviya@gmail.com