
Category: Artificial Intelligence

Machine Learning

Machine learning is a family of techniques that let a machine learn from data rather than from hand-written rules. Learn about various methods for training models on datasets.
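
As a tiny illustration of what "learning from data" means, here is a minimal sketch in plain Python: it fits a one-variable linear model with gradient descent on toy data. The data points, hyperparameters, and variable names are invented for illustration; they are not taken from any article below.

    # Minimal sketch: learn y ≈ w*x + b from (x, y) pairs via gradient descent.
    # Toy data roughly following y = 2x (made up for this example).
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

    w, b = 0.0, 0.0        # model parameters, initialized to zero
    learning_rate = 0.01   # step size (a hyperparameter)

    for epoch in range(2000):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        # Move the parameters a small step against the gradient
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

    print(f"learned w={w:.2f}, b={b:.2f}")  # converges near w ≈ 2, b ≈ 0

The articles listed below develop this idea into far more capable models, from SVMs and decision trees to deep neural networks.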

  • Neural Networks (30)
  • SVM (9)
  • Decision Trees (4)
  • word2vec (3)
  • reference (2)

>> Differences Between a Parametric and Non-parametric Model

>> Graph Attention Networks

>> Sparse Coding Neural Networks

>> Differences Between Luong Attention and Bahdanau Attention

>> What are Embedding Layers in Neural Networks?

>> Differences Between Hinge Loss and Logistic Loss

>> Machine Learning: How to Format Images for Training

>> Machine Learning: Flexible and Inflexible Models

>> What are Restricted Boltzmann Machines?

>> What is a Data Lake?

>> Introduction to Gibbs Sampling

>> One-Hot Encoding Explained

>> Feature Selection in Machine Learning

>> Differences Between Transfer Learning and Meta-Learning

>> Why Use a Surrogate Loss

>> Parameters vs. Hyperparameters

>> What Is The No Free Lunch Theorem?

>> What Does Backbone Mean in Neural Networks?

>> What Is Content-Based Image Retrieval?

>> Differences Between Bias and Error

>> Online Learning Vs. Offline Learning

>> What is One Class SVM and How Does It Work?

>> What is Feature Importance in Machine Learning?

>> Data Augmentation

>> What is Fine-tuning in Neural Networks?

>> Random Forest Vs. Extremely Randomized Trees

>> How to Use Gabor Filters to Generate Features for Machine Learning

>> Random Sample Consensus Explained

>> What Does Pre-training a Neural Network Mean?

>> Neural Networks: What Is Weight Decay Loss?

>> Win Gomoku with Threat Space Search

>> Neural Networks: Difference Between Conv and FC Layers

>> Neural Networks: Binary Vs. Discrete Vs. Continuous Inputs

>> Differences Between Gradient, Stochastic and Mini Batch Gradient Descent

>> Scale-Invariant Feature Transform

>> What Is a Regressor?

>> Multi-layer Perceptron Vs. Deep Neural Network

>> Machine Learning: What Is Ablation Study?

>> Silhouette Plots

>> Node Impurity in Decision Trees

>> 0-1 Loss Function Explained

>> Differences Between Missing Data and Sparse Data

>> Differences Between Backpropagation and Feedforward Networks

>> Cross-Validation: K-Fold vs. Leave-One-Out

>> Off-policy vs. On-policy Reinforcement Learning

>> Comparing Naïve Bayes and SVM for Text Classification

>> Recurrent vs. Recursive Neural Networks in Natural Language Processing

>> What Are “Bottlenecks” in Neural Networks?

>> Hidden Layers in a Neural Network

>> Real-Life Examples of Supervised Learning and Unsupervised Learning

>> Real-World Uses for Genetic Algorithms

>> Decision Trees vs. Random Forests

>> What Is Inductive Bias in Machine Learning?

>> What is the Difference Between Artificial Intelligence, Machine Learning, Statistics, and Data Mining?

>> An Introduction to Self-Supervised Learning

>> Latent Space in Deep Learning

>> Autoencoders Explained

>> Difference Between the Cost, Loss, and the Objective Function

>> Activation Functions: Sigmoid vs Tanh

>> An Introduction to Contrastive Learning

>> Gradient Boosting Trees vs. Random Forests

>> Basic Concepts of Machine Learning

>> Intuition Behind Kernels in Machine Learning

>> Algorithms for Image Comparison

>> Image Processing: Occlusions

>> Information Gain in Machine Learning

>> How to Use K-Fold Cross-Validation in a Neural Network?

>> An Introduction to Generative Adversarial Networks

>> K-Means for Classification

>> ML: Train, Validate, and Test

>> Q-Learning vs. SARSA

>> Differences Between SGD and Backpropagation

>> Linearly Separable Data in Neural Networks

>> The Effects of The Depth and Number of Trees in a Random Forest

>> Differences Between Bidirectional and Unidirectional LSTM

>> Features, Parameters and Classes in Machine Learning

>> Relation Between Learning Rate and Batch Size

>> Word2vec Word Embedding Operations: Add, Concatenate or Average Word Vectors?

>> Drift, Anomaly, and Novelty in Machine Learning

>> Markov Decision Process: How Does Value Iteration Work?

>> Ensemble Learning

>> Decision Tree vs. Naive Bayes Classifier

>> Accuracy vs AUC in Machine Learning

>> Bayesian Networks

>> Biases in Machine Learning

>> When Coherence Score is Good or Bad in Topic Modeling?

>> How Do Markov Chain Chatbots Work?

>> Stratified Sampling in Machine Learning

>> Outlier Detection and Handling

>> Choosing a Learning Rate

>> Underfitting and Overfitting in Machine Learning

>> How Do “20 Questions” AI Algorithms Work?

>> How to Calculate the Regularization Parameter in Linear Regression

>> How to Get Vector for A Sentence From Word2vec of Tokens

>> Q-Learning vs. Dynamic Programming

>> Feature Selection and Reduction for Text Classification

>> Why Mini-Batch Size Is Better Than One Single “Batch” With All Training Data

>> Intuitive Explanation of the Expectation-Maximization (EM) Technique

>> Pattern Recognition in Time Series

>> How to Create a Smart Chatbot?

>> Open-Source AI Engines

>> NLP’s word2vec: Negative Sampling Explained

>> LL vs. LR Parsing

>> How to Calculate Receptive Field Size in CNN

>> k-Nearest Neighbors and High Dimensional Data

>> State Machines: Components, Representations, Applications

>> Using a Hard Margin vs. Soft Margin in SVM

>> Value Iteration vs. Policy Iteration in Reinforcement Learning

>> How To Convert a Text Sequence to a Vector

>> Instance vs Batch Normalization

>> Trade-offs Between Accuracy and the Number of Support Vectors in SVMs

>> Open Source Neural Network Libraries

>> Transformer Text Embeddings

>> Semantic Similarity of Two Phrases

>> Why Feature Scaling in SVM?

>> Generalized Suffix Trees

>> Normalization vs Standardization in Linear Regression

>> Word Embeddings: CBOW vs Skip-Gram

>> String Similarity Metrics: Sequence Based

>> How to Improve Naive Bayes Classification Performance?

>> Ugly Duckling Theorem

>> Topic Modeling with Word2Vec

>> String Similarity Metrics: Token Methods

>> Gradient Descent Equation in Logistic Regression

>> Weakly Supervised Learning

>> Interpretation of Loss and Accuracy for a Machine Learning Model

>> Splitting a Dataset into Train and Test Sets

>> Encoder-Decoder Models for Natural Language Processing

>> Solving the K-Armed Bandit Problem

>> Epoch in Neural Networks

>> Epsilon-Greedy Q-learning

>> Random Initialization of Weights in a Neural Network

>> Multiclass Classification Using Support Vector Machines

>> Reinforcement Learning with Neural Network

>> What is Cross-Entropy?

>> Advantages and Disadvantages of Neural Networks Against SVMs

>> Top-N Accuracy Metrics

>> Neural Network Architecture: Criteria for Choosing the Number and Size of Hidden Layers

>> Training Data for Sentiment Analysis

>> Differences Between Classification and Clustering

>> F-1 Score for Multi-Class Classification

>> What is a Policy in Reinforcement Learning?

>> SVM Vs Neural Network

>> Support Vector Machines (SVM)

>> Normalizing Inputs for an Artificial Neural Network

>> What is a Learning Curve in Machine Learning?

>> Introduction to Convolutional Neural Networks

>> Introduction to the Classification Model Evaluation

>> Linear Regression vs. Logistic Regression

>> Bias in Neural Networks

>> How to Compute the Similarity Between Two Text Documents?

>> Difference Between a Feature and a Label

>> Data Normalization Before or After Splitting a Data Set?

>> Feature Scaling

>> Introduction to Supervised, Semi-supervised, Unsupervised and Reinforcement Learning
