PyFin Academy
2. Quantitative Analysis
1. Probability
1. Introduction
2. Event and Event Space
3. Independent and Mutually Exclusive Events
4. Independent Vs Conditionally Independent Events
5. Probability Calculation for Discrete Probability Functions
6. Conditional Probability
7. Conditional Vs Unconditional Probability
8. Bayes' Rule
9. Conclusion
2. Random Variables
1. Introduction
2. Probability Mass Function and Cumulative Distribution Function
3. Expectation of a Random Variable
4. Population Moments
5. Probability Mass Function Vs Probability Density Function
6. Quantile Function and Quantile-Based Estimators
7. Linear Transformation on Random Variables
8. Conclusion
3. Common Univariate Distributions
1. Introduction
2. Uniform Distribution
3. Bernoulli Distribution
4. Binomial Distribution
5. Poisson Distribution
6. Normal Distribution
7. Lognormal Distribution
8. Chi-squared Distribution
9. Student’s T-Distribution
10. F-Distribution
11. Mixture Distribution
12. Conclusion
4. Multivariate Random Variables
1. Introduction
2. Probability Matrix and Probability Mass Function
3. Marginal and Conditional Distributions
4. Expectation of a Function
5. Covariance
6. Correlation, Covariance, and Independence
7. Effects of Linear Transformations on Covariance and Correlation
8. Variance of a Weighted Sum of Two Random Variables
9. Conditional Expectation of a Component
10. Independent and Identically Distributed (IID) Sequence of Random Variables
11. Mean and Variance of a Sum of IID Random Variables
12. Conclusion
5. Sample Moments
1. Introduction
2. Mean, Variance, and Standard Deviation
3. Population Vs Sample Moments
4. Estimator Vs Estimate
5. Bias of an Estimator
6. BLUE Estimator
7. Consistency of an Estimator
8. Law of Large Numbers and Central Limit Theorem
9. Skewness and Kurtosis
10. Quantiles
11. Mean of Two Variables and CLT
12. Covariance and Correlation
13. Coskewness and Cokurtosis
14. Conclusion
6. Hypothesis Testing
1. Introduction
2. Null and Alternative Hypothesis
3. One-sided Vs Two-sided Test
4. Type I and Type II Errors and Power of a Test
5. Confidence Interval
6. P-Value
7. One-sided and Two-sided Confidence Intervals
8. Testing Hypotheses about the Difference between Population Means
9. Multiple Testing and Biased Results
10. Conclusion
7. Linear Regression
1. Introduction
2. Model Estimation Using Regression
3. Ordinary Least Squares (OLS) Regression
4. Assumptions of OLS Parameter Estimation
5. Properties of OLS Estimators
6. Hypothesis Testing for a Single Regression Coefficient
7. Hypothesis Test in a Linear Regression
8. T-Statistic, P-Value, and Confidence Interval
9. Estimating Correlation Coefficient from $R^2$
10. Conclusion
8. Multivariate Regression
1. Introduction
2. Comparing Assumptions of Single Vs Multiple Regression
3. Regression Coefficients in a Multiple Regression
4. Goodness of Fit Measures for Single and Multiple Regressions
5. Joint Hypothesis Tests and Confidence Intervals
6. Regression Sum of Squares
7. Conclusion
9. Regression Diagnostics
1. Introduction
2. Testing for Heteroskedasticity
3. Approaches to Using Heteroskedastic Data
4. Multicollinearity
5. Model Specification
6. Model Selection Procedures and Bias-Variance Trade-Off
7. Residuals Visualization
8. Identifying Outliers
9. Conditions for OLS to be Best Linear Unbiased Estimator
10. Conclusion
10. Stationary Time Series
1. Introduction
2. Covariance Stationarity
3. Autocovariance and Autocorrelation Functions
4. White Noise
5. Autoregressive (AR) Processes
6. Moving Average (MA) Processes
7. Lag Operator
8. Mean Reversion
9. Autoregressive Moving Average (ARMA) Processes
10. Application of AR, MA, and ARMA Processes
11. Sample and Partial Autocorrelations
12. Box-Pierce Q-Statistic and Ljung-Box Q-Statistic
13. Generating Forecasts from ARMA Models
14. Mean Reversion in Long-Horizon Forecasts
15. Seasonality in Covariance-Stationary ARMA
16. Conclusion
11. Non-Stationary Time Series
1. Introduction
2. Linear and Nonlinear Time Trends
3. Modeling Seasonality with Regression Analysis
4. Random Walk and a Unit Root
5. Modeling Time Series Containing Unit Roots
6. Testing for Unit Roots
7. Constructing h-Step-Ahead Point Forecasts
8. Estimated Trend Value and Interval Forecast
9. Conclusion
12. Returns, Volatility and Correlation
1. Introduction
2. Simple and Continuously Compounded Returns
3. Volatility, Variance Rate and Implied Volatility
4. Beyond First Two Moments
5. Jarque-Bera Test for Normality
6. Power Law
7. Correlation and Covariance
8. Correlation in the Context of a One-Factor Model
9. Correlation Measures Used to Assess Dependence
10. Conclusion
13. Simulation and Bootstrapping
1. Introduction
2. Monte Carlo Simulation
3. Monte Carlo Sampling Error
4. Reducing Monte Carlo Sampling Error using Antithetic and Control Variates
5. Bootstrapping Method
6. Pseudo-Random Number Generation
7. Limitations of Bootstrapping Method
8. Disadvantages of Simulation Approach
9. Conclusion
14. Machine Learning Methods
1. Introduction
2. Machine Learning Vs Classical Econometrics
3. Training, Validation and Testing Data Sub-Samples
4. Underfitting and Overfitting
5. Principal Components Analysis
6. K-means Algorithm
7. Natural Language Processing
8. Unsupervised, Supervised, and Reinforcement Learning
9. Reinforcement Learning
10. Conclusion
15. Machine Learning and Prediction
1. Introduction
2. Linear and Logistic Regression
3. Encoding Categorical Variables
4. Regularization Techniques
5. Decision Trees
6. Ensemble Learning
7. K-Nearest Neighbors and Support Vector Machines
8. Neural Networks
9. Confusion Matrix
10. Conclusion