

1. Overview

Spring Boot makes it really easy to manage our database changes. If we leave the default configuration, it’ll search for entities in our packages and create the respective tables automatically.

But we’ll sometimes need more fine-grained control over the database alterations. And that’s when we can use the data.sql and schema.sql files in Spring.

Further reading:

Spring Boot With H2 Database

Learn how to configure and how to use the H2 database with Spring Boot.

Database Migrations with Flyway

This article describes key concepts of Flyway and how we can use this framework to continuously remodel our application's database schema reliably and easily.

Generate Database Schema with Spring Data JPA

JPA provides a standard for generating DDL from our entity model. Here we explore how to do this in Spring Data and compare that with native Hibernate.

2. The data.sql File

Let’s assume here that we’re working with JPA and define a simple Country entity in our project:

@Entity
public class Country {

    @Id
    @GeneratedValue(strategy = IDENTITY)
    private Integer id;

    @Column(nullable = false)
    private String name;

    // standard constructors, getters and setters
}

If we run our application, Spring Boot will create an empty table for us but won’t populate it with anything.

An easy way to do this is to create a file named data.sql:

INSERT INTO country (name) VALUES ('India');
INSERT INTO country (name) VALUES ('Brazil');
INSERT INTO country (name) VALUES ('USA');
INSERT INTO country (name) VALUES ('Italy');

By default, data.sql scripts get executed before Hibernate is initialized, but we need Hibernate to create our tables before we insert data into them. To achieve this, we need to defer the initialization of our data source using the following property:

spring.jpa.defer-datasource-initialization=true

When we run the project with this file on the classpath, Spring will pick it up and use it to populate the country table.

Please note that for any script-based initialization, i.e. inserting data via data.sql or creating a schema via schema.sql (which we’ll learn next), we need to set the following property:

spring.sql.init.mode=always

For embedded databases such as H2, this is set to always by default.

3. The schema.sql File

Sometimes, we don’t want to rely on the default schema creation mechanism.

In such cases, we can create a custom schema.sql file:

create table USERS(
  ID int not null AUTO_INCREMENT,
  NAME varchar(100) not null,
  STATUS int,
  PRIMARY KEY (ID)
);

Spring will pick this file up and use it for creating a schema.

When we run the project with this file on the classpath, Spring creates a Users table in our database by reading this schema.sql file, even though the Users table is not present as an entity in our project.

Please note that using script-based initialization, i.e. through schema.sql and data.sql, together with Hibernate initialization can cause issues.

To solve this, we can altogether disable the execution of the DDL commands that Hibernate uses to create and update tables:

spring.jpa.hibernate.ddl-auto=none

This will ensure that only script-based schema generation is performed using schema.sql.

If we still want to use Hibernate automatic schema generation in conjunction with script-based schema creation and data population, we’ll have to use:

spring.jpa.defer-datasource-initialization=true

This ensures that after Hibernate performs schema creation, schema.sql is read for any additional schema changes, and data.sql is then executed to populate the database.
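Putting the relevant properties together, a minimal application.properties sketch for combining Hibernate generation with scripts might look like this (the update value is only illustrative):

```properties
# Hibernate creates/updates the schema from the entity model
spring.jpa.hibernate.ddl-auto=update
# Defer script execution until after Hibernate initialization
spring.jpa.defer-datasource-initialization=true
# Run the scripts even for non-embedded databases
spring.sql.init.mode=always
```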

Also, as explained in the previous section, script-based initialization is performed by default only for embedded databases. To always initialize a database using scripts, we’ll have to use:

spring.sql.init.mode=always

Please refer to the official Spring documentation on initializing databases using SQL scripts.

4. Controlling Database Creation Using Hibernate

Spring provides a JPA-specific property that Hibernate uses for DDL generation: spring.jpa.hibernate.ddl-auto.

The standard Hibernate property values are create, update, create-drop, validate, and none:

  • create – Hibernate first drops existing tables and then creates new tables.
  • update – The object model created based on the mappings (annotations or XML) is compared with the existing schema, and then Hibernate updates the schema according to the diff. It never deletes the existing tables or columns even if they are no longer required by the application.
  • create-drop – Similar to create, with the addition that Hibernate drops the database after all operations are completed; this is typically used for unit testing.
  • validate – Hibernate only validates whether the tables and columns exist; otherwise, it throws an exception.
  • none – This value effectively turns off the DDL generation.

Spring Boot internally defaults this parameter to create-drop if no schema manager has been detected, and to none in all other cases.

We have to set the value carefully or use one of the other mechanisms to initialize the database.
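As an illustration, the property goes into application.properties; the value validate shown here is just one reasonable choice for an environment where the schema is managed elsewhere:

```properties
# Illustrative only: fail fast if the entity model and the existing
# schema disagree, without letting Hibernate modify the schema
spring.jpa.hibernate.ddl-auto=validate
```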

5. Customizing Database Schema Creation

By default, Spring Boot automatically creates the schema of an embedded DataSource.

If we need to control or customize this behavior, we can use the property spring.sql.init.mode. This property takes one of three values:

  • always – always initialize the database
  • embedded – always initialize if an embedded database is in use. This is the default if the property value is not specified.
  • never – never initialize the database

Notably, if we are using a non-embedded database, say MySQL or PostgreSQL, and want to initialize its schema, we’ll have to set this property to always.

This property was introduced in Spring Boot 2.5.0; we need to use spring.datasource.initialization-mode if we are using previous versions of Spring Boot.
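For instance, here is a minimal application.properties sketch for a MySQL datasource; the URL and credentials are placeholders to adjust for a real environment:

```properties
# Placeholder connection settings for a hypothetical local MySQL instance
spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=root
spring.datasource.password=secret
# Run schema.sql and data.sql even though MySQL is not embedded
spring.sql.init.mode=always
```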

6. Using the @Sql Annotation

Spring also provides the @Sql annotation – a declarative way to initialize and populate our test schema.

Here are the attributes of the @Sql annotation:

  • config – Local configuration for the SQL scripts. We’ll discuss this in detail in the next section.
  • executionPhase – We can also specify when to execute SQL scripts.
  • statements – We can declare inline SQL statements to execute.
  • scripts – We can declare the paths to SQL script files to execute. This is an alias for the value attribute.

The @Sql annotation can be used at the class level or the method level. 

6.1. @Sql Annotation at Class Level

The @Sql annotation can be declared at the class level to populate data for a test.

Let’s see how to use the @Sql annotation to create a new table and also load initial data for our integration test:

@SpringBootTest
@Sql({"/employees_schema.sql", "/import_employees.sql"})
public class SpringBootInitialLoadIntegrationTest {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Test
    public void testLoadDataForTestClass() {
        assertEquals(3, employeeRepository.findAll().size());
    }
}

In the code above, we define two SQL scripts that execute before the test method. The @Sql declaration utilizes the default BEFORE_TEST_METHOD execution phase.

Spring version 6.1 and Spring Boot version 3.2.0 introduce class-level support for the executionPhase parameter with BEFORE_TEST_CLASS and AFTER_TEST_CLASS constants to determine if a script should run before or after the test class.

Let’s update the SpringBootInitialLoadIntegrationTest class and explicitly define an execution phase:

@Sql(scripts = {"/employees_schema.sql", "/import_employees.sql"}, executionPhase = BEFORE_TEST_CLASS)
public class SpringBootInitialLoadIntegrationTest {
    // ...
}

Here, we run the SQL scripts before the test class by setting the value of the executionPhase to BEFORE_TEST_CLASS.

Furthermore, the AFTER_TEST_CLASS execution phase helps load a SQL script after a test class. This may be useful in a case where we want to clear the database after a test:

@Sql(scripts = {"/delete_employees_data.sql"}, executionPhase = AFTER_TEST_CLASS)
public class SpringBootInitialLoadIntegrationTest {
    // ...
}

Notably, this configuration can’t be overridden by the method-level scripts and statements. Instead, the script will be executed in addition to the method-level scripts and statements.

6.2. @Sql Annotation at Method Level

We’ll load additional data required for a particular test case by annotating that method:

@Test
@Sql({"/import_senior_employees.sql"})
public void testLoadDataForTestCase() {
    assertEquals(5, employeeRepository.findAll().size());
}

Here, the SQL script is executed before the execution of the test method.

Again, we can explicitly define the execution phase at the method level using the BEFORE_TEST_METHOD or AFTER_TEST_METHOD constants:

@Test
@Sql(scripts = {"/import_senior_employees.sql"}, executionPhase = BEFORE_TEST_METHOD)
public void testLoadDataForTestCase() {
    assertEquals(5, employeeRepository.findAll().size());
}

The AFTER_TEST_METHOD execution phase helps to load a SQL script after the test method. We can use it, for example, to drop a database table after the execution of a test method.

By default, a method-level @Sql declaration overrides the class-level declaration, so only the SQL defined at the method level is executed:

@SpringBootTest
@Sql(scripts = {"/employees_schema.sql", "/import_employees.sql"})
public class SpringBootInitialLoadIntegrationTest {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Test
    @Sql(scripts = {"/import_senior_employees.sql"})
    public void testLoadDataForTestClass() {
        assertEquals(5, employeeRepository.findAll().size());
    }
}

Here, only import_senior_employees.sql is executed when we run the test.

However, we can further configure this behavior using the @SqlMergeMode declaration, which merges the method-level @Sql declarations with the class-level ones.

7. @SqlConfig 

We can configure the way we parse and run the SQL scripts by using the @SqlConfig annotation.

@SqlConfig can be declared at the class level, where it serves as a global configuration. Or we can use it to configure a particular @Sql annotation.

Let’s see an example where we specify the encoding of our SQL scripts as well as the transaction mode for executing the scripts:

@Test
@Sql(scripts = {"/import_senior_employees.sql"},
  config = @SqlConfig(encoding = "utf-8", transactionMode = TransactionMode.ISOLATED))
public void testLoadDataForTestCase() {
    assertEquals(5, employeeRepository.findAll().size());
}

And let’s look at the various attributes of @SqlConfig:

  • blockCommentStartDelimiter – delimiter to identify the start of block comments in SQL script files
  • blockCommentEndDelimiter – delimiter to denote the end of block comments in SQL script files
  • commentPrefix – prefix to identify single-line comments in SQL script files
  • dataSource – name of the javax.sql.DataSource bean against which the scripts and statements will be run
  • encoding – encoding for the SQL script files; default is platform encoding
  • errorMode – mode that will be used when an error is encountered running the scripts
  • separator – string used to separate individual statements; default is “;”
  • transactionManager – bean name of the PlatformTransactionManager that will be used for transactions
  • transactionMode – the mode that will be used when executing scripts in transaction
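To make the separator and commentPrefix attributes concrete, here is a hypothetical, heavily simplified sketch of how a script could be split into statements. This is not Spring’s actual implementation (the real one, in ScriptUtils, also handles block comments and quoted literals); the class and method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: split a SQL script into statements using a
// configurable statement separator and single-line comment prefix,
// roughly mirroring what @SqlConfig's separator/commentPrefix control.
public class SqlScriptSplitter {

    public static List<String> split(String script, String separator, String commentPrefix) {
        List<String> statements = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String line : script.split("\n")) {
            String trimmed = line.trim();
            // skip blank lines and single-line comments
            if (trimmed.isEmpty() || trimmed.startsWith(commentPrefix)) {
                continue;
            }
            current.append(line).append(' ');
            // a separator at the end of a line closes the current statement
            if (trimmed.endsWith(separator)) {
                String stmt = current.toString().trim();
                statements.add(stmt.substring(0, stmt.length() - separator.length()).trim());
                current.setLength(0);
            }
        }
        if (!current.toString().trim().isEmpty()) {
            statements.add(current.toString().trim());
        }
        return statements;
    }

    public static void main(String[] args) {
        String script = "-- seed data\n"
            + "INSERT INTO country (name) VALUES ('India');\n"
            + "INSERT INTO country (name) VALUES ('Brazil');\n";
        List<String> stmts = split(script, ";", "--");
        System.out.println(stmts.size()); // 2
    }
}
```

Spring’s real parser is far more robust, but the sketch shows why changing separator matters: with a custom separator such as “@@”, a script full of ordinary semicolons (e.g. inside a stored procedure body) would be treated as a single statement.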

8. @SqlGroup 

Java 8 and above allow the use of repeated annotations. We can utilize this feature for @Sql annotations as well. For Java 7 and below, there is a container annotation — @SqlGroup.

Using the @SqlGroup annotation, we’ll declare multiple @Sql annotations:

@SqlGroup({
  @Sql(scripts = "/employees_schema.sql",
    config = @SqlConfig(transactionMode = TransactionMode.ISOLATED)),
  @Sql("/import_employees.sql")})
public class SpringBootSqlGroupAnnotationIntegrationTest {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Test
    public void testLoadDataForTestCase() {
        assertEquals(3, employeeRepository.findAll().size());
    }
}

9. Conclusion

In this quick article, we saw how we can leverage schema.sql and data.sql files for setting up an initial schema and populating it with data.

We also looked at how to use @Sql, @SqlConfig and @SqlGroup annotations to load test data for tests.

Keep in mind that this approach is more suited for basic and simple scenarios, and any advanced database handling would require more advanced and refined tooling like Liquibase or Flyway.

Code snippets, as always, can be found over on GitHub.
