Deep Learning

Deep learning is a machine learning technique built on multi-layer neural networks. Learn about various techniques for training, optimizing, and applying these networks.
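As a quick, concrete illustration of what the articles in this category cover, here is a minimal training sketch: a two-layer sigmoid network fitted to the XOR problem with full-batch gradient descent and hand-written backpropagation. The use of NumPy, the XOR dataset, and all hyperparameters (4 hidden units, learning rate 1.0, 10,000 epochs) are illustrative assumptions, not taken from any specific article below.

import numpy as np

# Illustrative toy problem: XOR is not linearly separable,
# so a hidden layer is required to fit it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 1.0                       # assumed learning rate
for epoch in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # predictions

    # Backward pass for mean squared error, using sigmoid'(z) = s * (1 - s).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Full-batch gradient descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round(2))  # should approach [[0], [1], [1], [0]]

Nearly every moving part of this sketch, such as backpropagation, epochs, the learning rate, weight initialization, hidden layers, and activation functions, has a dedicated article in the list below.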

  • Neural Networks (47)
  • Training (19)
  • Natural Language Processing (8)
  • Convolutional Neural Networks (8)
  • Reinforcement Learning (7)
  • Generative Adversarial Networks (6)
  • SVM (3)
  • Image Processing (2)
  • Probability and Statistics (2)

>> Prevent the Vanishing Gradient Problem with LSTM

>> Machine Learning vs. Deep Learning

>> An Introduction to Graph Neural Networks

>> An Introduction to Deepfakes

>> Deterministic vs. Stochastic Policies in Reinforcement Learning

>> Introduction to Large Language Models

>> What Is and Why Use Temperature in Softmax?

>> What’s a Non-trainable Parameter?

>> Epoch or Episode: Understanding Terms in Deep Reinforcement Learning

>> Q-Learning vs. Deep Q-Learning vs. Deep Q-Network

>> What Is End-to-End Deep Learning?

>> How Do Siamese Networks Work in Image Recognition?

>> Deep Neural Networks: Padding

>> Single Shot Detectors (SSDs)

>> What Is Maxout in a Neural Network?

>> Recurrent Neural Networks

>> What Are Embedding Layers in Neural Networks?

>> Why Use a Surrogate Loss

>> Generative Adversarial Networks: Discriminator’s Loss and Generator’s Loss

>> What Does Backbone Mean in Neural Networks?

>> What Are Channels in Convolutional Networks?

>> Differences Between Bias and Error

>> Instance Segmentation vs. Semantic Segmentation

>> Online Learning vs. Offline Learning

>> Data Augmentation

>> Random Sample Consensus Explained

>> What Does Pre-training a Neural Network Mean?

>> Neural Networks: What Is Weight Decay Loss?

>> Neural Networks: Difference Between Conv and FC Layers

>> What Exactly Is an N-Gram?

>> Multi-Layer Perceptron vs. Deep Neural Network

>> Model-free vs. Model-based Reinforcement Learning

>> 0-1 Loss Function Explained

>> Differences Between Backpropagation and Feedforward Networks

>> Cross-Validation: K-Fold vs. Leave-One-Out

>> Off-policy vs. On-policy Reinforcement Learning

>> Bias Update in Neural Network Backpropagation

>> Recurrent vs. Recursive Neural Networks in Natural Language Processing

>> What Are “Bottlenecks” in Neural Networks?

>> Convolutional Neural Network vs. Regular Neural Network

>> Mean Average Precision in Object Detection

>> Hidden Layers in a Neural Network

>> Real-Life Examples of Supervised Learning and Unsupervised Learning

>> Real-World Uses for Genetic Algorithms

>> The Reparameterization Trick in Variational Autoencoders

>> What Is Inductive Bias in Machine Learning?

>> Latent Space in Deep Learning

>> Autoencoders Explained

>> Activation Functions: Sigmoid vs Tanh

>> An Introduction to Contrastive Learning

>> Intuition Behind Kernels in Machine Learning

>> Algorithms for Image Comparison

>> Image Processing: Occlusions

>> Applications of Generative Models

>> Calculate the Output Size of a Convolutional Layer

>> An Introduction to Generative Adversarial Networks

>> Linearly Separable Data in Neural Networks

>> Using GANs for Data Augmentation

>> Relation Between Learning Rate and Batch Size

>> Word2vec Word Embedding Operations: Add, Concatenate or Average Word Vectors?

>> Outlier Detection and Handling

>> Feature Selection and Reduction for Text Classification

>> Why Mini-Batch Size Is Better Than One Single “Batch” With All Training Data

>> How to Create a Smart Chatbot?

>> How to Calculate Receptive Field Size in CNN

>> k-Nearest Neighbors and High Dimensional Data

>> Using a Hard Margin vs. Soft Margin in SVM

>> Value Iteration vs. Policy Iteration in Reinforcement Learning

>> How To Convert a Text Sequence to a Vector

>> Instance vs Batch Normalization

>> Trade-offs Between Accuracy and the Number of Support Vectors in SVMs

>> Open Source Neural Network Libraries

>> Word Embeddings: CBOW vs Skip-Gram

>> Encoder-Decoder Models for Natural Language Processing

>> Epoch in Neural Networks

>> Random Initialization of Weights in a Neural Network

>> Batch Normalization in Convolutional Neural Networks

>> Advantages and Disadvantages of Neural Networks Against SVMs

>> Neural Network Architecture: Criteria for Choosing the Number and Size of Hidden Layers

>> Training Data for Sentiment Analysis

>> F-1 Score for Multi-Class Classification

>> What Is a Policy in Reinforcement Learning?

>> What Is the Difference Between Gradient Descent and Gradient Ascent?

>> Normalizing Inputs for an Artificial Neural Network

>> Introduction to Convolutional Neural Networks

>> Bias in Neural Networks

>> Understanding Dimensions in CNNs
