A high-level overview of all the articles on the site.
Math and Logic
>> ReLU vs. LeakyReLU vs. PReLU
>> How to Handle Missing Data in Logistic Regression?
>> How to Handle Unbalanced Data With SMOTE?
>> What Is TinyML?
>> Prevent the Vanishing Gradient Problem with LSTM
>> How to Do Early Stopping?
>> Bagging, Boosting, and Stacking in Machine Learning
>> Machine Learning: Analytical Learning
>> How to Analyze Loss vs. Epoch Graphs?
>> What’s a Non-trainable Parameter?
>> Lazy vs. Eager Learning
>> What Are Downstream Tasks?
>> Automated Machine Learning Explained
>> What Is Federated Learning?
>> What Does Learning Rate Warm-up Mean?
>> What Is End-to-End Deep Learning?
>> Natural Language Processing: BLEU Score
>> Introduction to Triplet Loss
>> Adam Optimizer
>> Differences Between Hinge Loss and Logistic Loss
>> Machine Learning: How to Format Images for Training
>> What Is Multi-Task Learning?
>> One-Hot Encoding Explained
>> Feature Selection in Machine Learning
>> Differences Between Transfer Learning and Meta-Learning
>> Why Use a Surrogate Loss?
>> Parameters vs. Hyperparameters
>> Differences Between Bias and Error
>> How to Handle Large Images to Train CNNs?
>> Data Augmentation
>> What Is Fine-Tuning in Neural Networks?
>> How to Use Gabor Filters to Generate Features for Machine Learning
>> What Does Pre-training a Neural Network Mean?
>> Neural Networks: What Is Weight Decay Loss?
>> Differences Between Gradient, Stochastic, and Mini-Batch Gradient Descent
>> Scale-Invariant Feature Transform
>> F-Beta Score
>> What’s a Hypothesis Space?
>> 0-1 Loss Function Explained
>> Differences Between Epoch, Batch, and Mini-batch
>> Differences Between Backpropagation and Feedforward Networks
>> Bias Update in Neural Network Backpropagation
>> Transfer Learning vs. Domain Adaptation
>> An Introduction to Self-Supervised Learning
>> An Introduction to Contrastive Learning
>> Training and Validation Loss in Deep Learning
>> ML: Train, Validate, and Test
>> Differences Between SGD and Backpropagation
>> Relation Between Learning Rate and Batch Size
>> Stratified Sampling in Machine Learning
>> Choosing a Learning Rate
>> Underfitting and Overfitting in Machine Learning
>> How to Calculate the Regularization Parameter in Linear Regression
>> Feature Selection and Reduction for Text Classification
>> Why Mini-Batch Size Is Better Than One Single “Batch” With All Training Data
>> Instance vs. Batch Normalization
>> Why Feature Scaling in SVM?
>> Normalization vs. Standardization in Linear Regression
>> Normalize Features of a Table
>> Gradient Descent Equation in Logistic Regression
>> Interpretation of Loss and Accuracy for a Machine Learning Model
>> Splitting a Dataset into Train and Test Sets
>> Epoch in Neural Networks
>> Random Initialization of Weights in a Neural Network
>> Training Data for Sentiment Analysis
>> Why Does the Cost Function of Logistic Regression Have a Logarithmic Expression?