Machine Learning

Machine learning comprises techniques that enable machines to learn from data. Learn about the various approaches to training models on datasets; a minimal example of that workflow is sketched below.
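The following is a minimal sketch of what "training a model on a dataset" looks like in practice. The choice of scikit-learn, the Iris dataset, and logistic regression are illustrative assumptions, not tied to any specific article listed here:

```python
# Minimal supervised-learning sketch (illustrative only): fit a classifier
# on a labeled dataset, then evaluate it on held-out test data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)            # features and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)    # hold out 20% for testing

model = LogisticRegression(max_iter=1000)    # a simple baseline classifier
model.fit(X_train, y_train)                  # "training" = learning from the data

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```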

  • Training (54)
  • Neural Networks (53)
  • Probability and Statistics (17)
  • SVM (15)
  • Natural Language Processing (14)
  • Reinforcement Learning (13)
  • Regression (9)
  • Decision Trees (7)
  • Testing (7)
  • Clustering (6)
  • Optimization (5)
  • Random Forests (5)
  • Convolutional Neural Networks (4)
  • Naive Bayes (4)
  • Generative Adversarial Networks (3)
  • Attention (3)
  • Image Processing (3)
  • word2vec (3)
  • Matrix (2)
  • Cross-Validation (2)
  • Entropy (2)
  • Strings (2)
  • reference (2)

>> Introduction to Convolutional Neural Networks

>> Support Vector Machines (SVM)

>> ReLU vs. LeakyReLU vs. PReLU

>> How to Calculate the VC-Dimension of a Classifier?

>> How to Handle Missing Data in Logistic Regression?

>> How to Handle Unbalanced Data With SMOTE?

>> How to Plot Logistic Regression’s Decision Boundary?

>> What’s the Difference Between Cross-Entropy and KL Divergence?

>> Saturating Non-Linearities

>> What Is A Hopfield Network?

>> Neural Networks: Strided Convolutions

>> What Are the Advantages of Kernel PCA Over Standard PCA?

>> How Do AI Image Generators Work?

>> What Is TinyML?

>> GMMs for Clustering

>> Data Quality Explained

>> Importance of Statistics in Machine Learning

>> How to Do Early Stopping?

>> Gaussian Mixture Models

>> Python for Machine Learning

>> What Is Singular Value Decomposition?

>> Bagging, Boosting, and Stacking in Machine Learning

>> Introduction to Spiking Neural Networks

>> The Drawbacks of K-Means Algorithm

>> From RNNs to Transformers

>> Machine Learning: Analytical Learning

>> The C Parameter in Support Vector Machines

>> What Is and Why Use Temperature in Softmax?

>> How Do Self-Organizing Maps Work?

>> Algorithm for Online Outlier Detection in Time Series

>> How to Analyze Loss vs. Epoch Graphs?

>> The Concepts of Dense and Sparse in the Context of Neural Networks

>> 2D Convolution as a Matrix-Matrix Multiplication

>> Lazy vs. Eager Learning

>> DBSCAN Clustering: How Does It Work?

>> Q-Learning vs. Deep Q-Learning vs. Deep Q-Network

>> What Are Downstream Tasks?

>> What Is Neural Style Transfer?

>> Automated Machine Learning Explained

>> What Is Federated Learning?

>> What Does Learning Rate Warm-up Mean?

>> Introduction to Triplet Loss

>> Neural Networks: Pooling Layers

>> Dimensionality of Word Embeddings

>> What Is the Credit Assignment Problem?

>> ADAM Optimizer

>> What Does PAC Learning Theory Really Mean?

>> Co-occurrence Matrices and Their Uses in NLP

>> Sensitivity and Specificity

>> Hard vs. Soft Voting Classifiers

>> Difference Between Reinforcement Learning and Optimal Control

>> What Is Independent Component Analysis (ICA)?

>> Differences Between a Parametric and Non-parametric Model

>> Maximum Likelihood Estimation

>> Graph Attention Networks

>> Sparse Coding Neural Networks

>> Differences Between Luong Attention and Bahdanau Attention

>> Differences Between Hinge Loss and Logistic Loss

>> Machine Learning: How to Format Images for Training

>> Machine Learning: Flexible and Inflexible Models

>> What Are Restricted Boltzmann Machines?

>> What Is a Data Lake?

>> Introduction to Gibbs Sampling

>> One-Hot Encoding Explained

>> Feature Selection in Machine Learning

>> Differences Between Transfer Learning and Meta-Learning

>> Why Use a Surrogate Loss

>> Parameters vs. Hyperparameters

>> What Is the No Free Lunch Theorem?

>> What Does Backbone Mean in Neural Networks?

>> What Is Content-Based Image Retrieval?

>> Differences Between Bias and Error

>> Online Learning vs. Offline Learning

>> What Is One Class SVM and How Does It Work?

>> What Is Feature Importance in Machine Learning?

>> Data Augmentation

>> What Is Fine-Tuning in Neural Networks?

>> Random Forest vs. Extremely Randomized Trees

>> How to Use Gabor Filters to Generate Features for Machine Learning

>> Random Sample Consensus Explained

>> What Does Pre-training a Neural Network Mean?

>> Neural Networks: What Is Weight Decay Loss?

>> Win Gomoku with Threat Space Search

>> Neural Networks: Difference Between Conv and FC Layers

>> Neural Networks: Binary vs. Discrete vs. Continuous Inputs

>> Differences Between Gradient, Stochastic and Mini Batch Gradient Descent

>> Scale-Invariant Feature Transform

>> What Is a Regressor?

>> Multi-Layer Perceptron vs. Deep Neural Network

>> Machine Learning: What Is Ablation Study?

>> Silhouette Plots

>> Node Impurity in Decision Trees

>> 0-1 Loss Function Explained

>> Differences Between Missing Data and Sparse Data

>> Differences Between Backpropagation and Feedforward Networks

>> Cross-Validation: K-Fold vs. Leave-One-Out

>> Off-policy vs. On-policy Reinforcement Learning

>> Comparing Naïve Bayes and SVM for Text Classification

>> Recurrent vs. Recursive Neural Networks in Natural Language Processing

>> What Are “Bottlenecks” in Neural Networks?

>> Hidden Layers in a Neural Network

>> Real-Life Examples of Supervised Learning and Unsupervised Learning

>> Real-World Uses for Genetic Algorithms

>> Decision Trees vs. Random Forests

>> What Is Inductive Bias in Machine Learning?

>> What Is the Difference Between Artificial Intelligence, Machine Learning, Statistics, and Data Mining?

>> An Introduction to Self-Supervised Learning

>> Latent Space in Deep Learning

>> Autoencoders Explained

>> Difference Between the Cost, Loss, and the Objective Function

>> Activation Functions: Sigmoid vs Tanh

>> An Introduction to Contrastive Learning

>> Gradient Boosting Trees vs. Random Forests

>> Basic Concepts of Machine Learning

>> Intuition Behind Kernels in Machine Learning

>> Algorithms for Image Comparison

>> Image Processing: Occlusions

>> Information Gain in Machine Learning

>> How to Use K-Fold Cross-Validation in a Neural Network?

>> An Introduction to Generative Adversarial Networks

>> K-Means for Classification

>> ML: Train, Validate, and Test

>> Q-Learning vs. SARSA

>> Differences Between SGD and Backpropagation

>> Linearly Separable Data in Neural Networks

>> The Effects of the Depth and Number of Trees in a Random Forest

>> Differences Between Bidirectional and Unidirectional LSTM

>> Features, Parameters and Classes in Machine Learning

>> Relation Between Learning Rate and Batch Size

>> Word2vec Word Embedding Operations: Add, Concatenate or Average Word Vectors?

>> Drift, Anomaly, and Novelty in Machine Learning

>> Markov Decision Process: How Does Value Iteration Work?

>> Ensemble Learning

>> Decision Tree vs. Naive Bayes Classifier

>> Accuracy vs AUC in Machine Learning

>> Bayesian Networks

>> Biases in Machine Learning

>> When Coherence Score Is Good or Bad in Topic Modeling?

>> How Do Markov Chain Chatbots Work?

>> Stratified Sampling in Machine Learning

>> Outlier Detection and Handling

>> Choosing a Learning Rate

>> Underfitting and Overfitting in Machine Learning

>> How Do “20 Questions” AI Algorithms Work?

>> How to Calculate the Regularization Parameter in Linear Regression

>> How to Get Vector for A Sentence From Word2vec of Tokens

>> Q-Learning vs. Dynamic Programming

>> Feature Selection and Reduction for Text Classification

>> Difference Between a SVM and a Perceptron

>> Why Mini-Batch Size Is Better Than One Single “Batch” With All Training Data

>> Intuitive Explanation of the Expectation-Maximization (EM) Technique

>> Pattern Recognition in Time Series

>> How to Create a Smart Chatbot?

>> Open-Source AI Engines

>> Out-of-bag Error in Random Forests

>> How to Calculate Receptive Field Size in CNN

>> k-Nearest Neighbors and High Dimensional Data

>> Using a Hard Margin vs. Soft Margin in SVM

>> Value Iteration vs. Policy Iteration in Reinforcement Learning

>> How To Convert a Text Sequence to a Vector

>> Instance vs Batch Normalization

>> Trade-offs Between Accuracy and the Number of Support Vectors in SVMs

>> Open Source Neural Network Libraries

>> Transformer Text Embeddings

>> Generative vs. Discriminative Algorithms

>> Why Feature Scaling in SVM?

>> Normalization vs Standardization in Linear Regression

>> Word Embeddings: CBOW vs Skip-Gram

>> String Similarity Metrics: Sequence Based

>> How to Improve Naive Bayes Classification Performance?

>> Ugly Duckling Theorem

>> Topic Modeling with Word2Vec

>> Normalize Features of a Table

>> String Similarity Metrics: Token Methods

>> Gradient Descent Equation in Logistic Regression

>> Correlated Features and Classification Accuracy

>> Weakly Supervised Learning

>> Interpretation of Loss and Accuracy for a Machine Learning Model

>> Splitting a Dataset into Train and Test Sets

>> Solving the K-Armed Bandit Problem

>> Epoch in Neural Networks

>> Epsilon-Greedy Q-learning

>> Random Initialization of Weights in a Neural Network

>> Multiclass Classification Using Support Vector Machines

>> Reinforcement Learning with Neural Network

>> What Is Cross-Entropy?

>> Advantages and Disadvantages of Neural Networks Against SVMs

>> Top-N Accuracy Metrics

>> Neural Network Architecture: Criteria for Choosing the Number and Size of Hidden Layers

>> Training Data for Sentiment Analysis

>> Differences Between Classification and Clustering

>> F-1 Score for Multi-Class Classification

>> What Is a Policy in Reinforcement Learning?

>> SVM Vs Neural Network

>> Normalizing Inputs for an Artificial Neural Network

>> What Is a Learning Curve in Machine Learning?

>> Introduction to the Classification Model Evaluation

>> Why Does the Cost Function of Logistic Regression Have a Logarithmic Expression?

>> Linear Regression vs. Logistic Regression

>> Bias in Neural Networks

>> How to Compute the Similarity Between Two Text Documents?

>> Difference Between a Feature and a Label

>> Data Normalization Before or After Splitting a Data Set?

>> A Simple Explanation of Naive Bayes Classification

>> Feature Scaling

>> Introduction to Supervised, Semi-supervised, Unsupervised and Reinforcement Learning
