T-SQL Tutorial

Bulk Insert in SQL Server


Bulk insertion in SQL Server is a technique used to quickly import a large number of records into a database table. This can be significantly faster than using individual INSERT statements, especially when dealing with large datasets. Here's an overview of how you can perform bulk inserts in SQL Server:


Basic Syntax

The basic syntax for bulk insert in SQL Server is:

BULK INSERT [database_name].[schema_name].[table_name]
FROM 'file_path'
WITH
(
[DATAFILETYPE = 'char'],
[FIELDTERMINATOR = ','|';'|'\t'|' '],
[ROWTERMINATOR = '\n'|'\r\n'|'0x0a'],
[FIRSTROW = number],
[ERRORFILE = 'file_path'],
[ROWS_PER_BATCH = number],
[TABLOCK]
)

database_name: The name of the database.
schema_name: The schema of the table.
table_name: The table where data will be inserted.
file_path: The full file path of the data source.
DATAFILETYPE: The data file type ('char' for character data; 'widechar', 'native', and 'widenative' are also supported).
FIELDTERMINATOR: The field delimiter used in the file (the default is '\t', a tab).
ROWTERMINATOR: The row delimiter used in the file.
FIRSTROW: The number of the first row to load, which is useful for skipping a header row.
ERRORFILE: The path to the file where rows that fail to load will be recorded.
ROWS_PER_BATCH: The approximate number of rows per batch; SQL Server uses this hint to optimize the load.
TABLOCK: Acquires a table-level lock for the duration of the bulk load, which can improve performance.
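

As a sketch, several of these options can be combined in one statement. The database, table, and file paths below are placeholders, not names from a real system:

```sql
-- Hypothetical load of a comma-delimited file with error logging
-- and performance hints; adjust names and paths to your environment.
BULK INSERT Sales.dbo.Orders
FROM 'C:\imports\orders.csv'
WITH
(
    DATAFILETYPE    = 'char',                       -- character data
    FIELDTERMINATOR = ',',                          -- comma-separated fields
    ROWTERMINATOR   = '\n',                         -- newline-terminated rows
    ERRORFILE       = 'C:\imports\orders_errors.log', -- rejected rows go here
    ROWS_PER_BATCH  = 10000,                        -- optimizer hint
    TABLOCK                                         -- table-level lock for speed
);
```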


Example 1: Importing CSV Data into SQL Server

Suppose you have a CSV file (data.csv) with the following format:

Name,Age,City
John,30,New York
Jane,25,Los Angeles

To import this data into a SQL Server table named People, you can use:

BULK INSERT People
FROM 'C:\path\to\data.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
FIRSTROW = 2
)

In this example, FIELDTERMINATOR is set to a comma, and ROWTERMINATOR to a newline character. FIRSTROW = 2 skips the header row of the CSV file.
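

For this example to work, a People table with columns matching the file must already exist. A minimal definition might look like the following; the column names come from the CSV header, but the data types and sizes are assumptions:

```sql
-- Hypothetical destination table for data.csv;
-- column types must match the data in each field.
CREATE TABLE dbo.People
(
    Name NVARCHAR(100),
    Age  INT,
    City NVARCHAR(100)
);
```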


Example 2: Importing Data with Custom Delimiters

If your data file uses custom delimiters, you can specify them in the FIELDTERMINATOR and ROWTERMINATOR options. For example, if fields are separated by semicolons and rows are separated by carriage return-line feed:

BULK INSERT People
FROM 'C:\path\to\data.txt'
WITH
(
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\r\n'
)
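

For reference, a data.txt file matching these delimiters might look like this, with semicolons between fields and each line ending in a carriage return plus line feed:

```
John;30;New York
Jane;25;Los Angeles
```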


Considerations and Best Practices

File Location: The file must be accessible to the SQL Server instance itself (the service account reads it), not just to the client running the statement.
Permissions: The executing login needs INSERT permission on the table and ADMINISTER BULK OPERATIONS permission, and the SQL Server service account needs read access to the file.
Data Type Matching: Ensure that the data types in the file match those of the destination table.
Error Handling: Use the ERRORFILE option for logging errors encountered during the bulk insert.
Performance Tuning: Adjusting ROWS_PER_BATCH and using TABLOCK can improve performance.
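

Beyond ERRORFILE, the load can be wrapped in TRY...CATCH so that a failure can be reported or handled explicitly. This is a sketch; the table name and file path are placeholders:

```sql
-- Hypothetical error-handled bulk load
BEGIN TRY
    BULK INSERT People
    FROM 'C:\path\to\data.csv'
    WITH
    (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRSTROW        = 2
    );
END TRY
BEGIN CATCH
    -- ERROR_MESSAGE() describes why the bulk load failed
    PRINT ERROR_MESSAGE();
END CATCH;
```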


Conclusion

Bulk insert is a powerful tool in SQL Server for efficiently importing large datasets. By customizing the options in the WITH clause, you can handle various data formats and optimize the import process. Always remember to test your bulk insert operations in a non-production environment to fine-tune the process and handle any data inconsistencies.