A high-level overview of all the articles on the site.
Math and Logic
>> How to Analyze Loss vs. Epoch Graphs?
>> Lazy vs. Eager Learning
>> What Are Downstream Tasks?
>> What Is Federated Learning?
>> What Does Learning Rate Warm-up Mean?
>> What Is End-to-End Deep Learning?
>> Natural Language Processing: BLEU Score
>> Introduction to Triplet Loss
>> Adam Optimizer
>> Differences Between Hinge Loss and Logistic Loss
>> Machine Learning: How to Format Images for Training
>> Parameters vs. Hyperparameters
>> How to Handle Large Images to Train CNNs?
>> What Does Pre-training a Neural Network Mean?
>> Differences Between Gradient, Stochastic, and Mini-Batch Gradient Descent
>> 0-1 Loss Function Explained
>> Differences Between Epoch, Batch, and Mini-Batch
>> Bias Update in Neural Network Backpropagation
>> Training and Validation Loss in Deep Learning
>> Differences Between SGD and Backpropagation
>> Relation Between Learning Rate and Batch Size
>> Choosing a Learning Rate
>> How to Calculate the Regularization Parameter in Linear Regression
>> Why Mini-Batch Size Is Better Than One Single “Batch” With All Training Data
>> Instance vs. Batch Normalization
>> Why Feature Scaling in SVM?
>> Normalization vs. Standardization in Linear Regression
>> Normalize Features of a Table
>> Gradient Descent Equation in Logistic Regression
>> Interpretation of Loss and Accuracy for a Machine Learning Model
>> Splitting a Dataset into Train and Test Sets
>> Epoch in Neural Networks
>> Random Initialization of Weights in a Neural Network
>> Training Data for Sentiment Analysis
>> Why Does the Cost Function of Logistic Regression Have a Logarithmic Expression?