Optimizing SQLite is tricky. Bulk-insert performance of a C application can vary from 85 inserts-per-second to over 96000 inserts-per-second!
Background: We are using SQLite as part of a desktop application. We have large amounts of configuration data stored in XML files that are parsed and loaded into an SQLite database for further processing when the application is initialized. SQLite is ideal for this situation because it's fast, it requires no specialized configuration and the database is stored on disk as a single file.
Rationale: Initially I was disappointed with the performance I was seeing. It turns out that the performance of SQLite can vary significantly (both for bulk-inserts and selects) depending on how the database is configured and how you're using the API. It was not a trivial matter to figure out what all of the options and techniques were, so I thought it prudent to create this community wiki entry to share the results with SO readers in order to save others the trouble of the same investigations.
The Experiment: Rather than simply talking about performance tips in the general sense (i.e. "Use a transaction!"), I thought it best to write some C code and actually measure the impact of various options. We're going to start with some simple data:
- A 28 meg TAB-delimited text file (approx 865000 records) of the complete transit schedule for the city of Toronto
- My test machine is a 3.60 GHz P4 running Windows XP.
- The code is compiled with MSVC 2005 as "Release" with "Full Optimization" (/Ox) and Favor Fast Code (/Ot).
- I'm using the SQLite "Amalgamation", compiled directly into my test application. The SQLite version I happen to have is a bit older (3.6.7), but I suspect these results will be comparable to the latest release (please leave a comment if you think otherwise).
Let's write some code!
The Code: A simple C program that reads the text file line-by-line, splits the string into values and then inserts the data into an SQLite database. In this "baseline" version of the code, the database is created but we won't actually insert data:
/*************************************************************
Baseline code to experiment with SQLite performance.
Input data is a 28 Mb TAB-delimited text file of the
complete Toronto Transit System schedule/route info
from http://www.toronto.ca/open/datasets/ttc-routes/
**************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
#include "sqlite3.h"
#define INPUTDATA "C:\\TTC_schedule_scheduleitem_10-27-2009.txt"
#define DATABASE "c:\\TTC_schedule_scheduleitem_10-27-2009.sqlite"
#define TABLE "CREATE TABLE IF NOT EXISTS TTC (id INTEGER PRIMARY KEY, Route_ID TEXT, Branch_Code TEXT, Version INTEGER, Stop INTEGER, Vehicle_Index INTEGER, Day INTEGER, Time TEXT)"
#define BUFFER_SIZE 256
int main(int argc, char **argv) {
sqlite3 * db;
sqlite3_stmt * stmt;
char * sErrMsg = 0;
char * tail = 0;
int nRetCode;
int n = 0;
clock_t cStartClock;
FILE * pFile;
char sInputBuf [BUFFER_SIZE] = "\0";
char * sRT = 0; /* Route */
char * sBR = 0; /* Branch */
char * sVR = 0; /* Version */
char * sST = 0; /* Stop Number */
char * sVI = 0; /* Vehicle */
char * sDT = 0; /* Date */
char * sTM = 0; /* Time */
char sSQL [BUFFER_SIZE] = "\0";
/*********************************************/
/* Open the Database and create the Schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
/*********************************************/
/* Open input file and import into Database*/
cStartClock = clock();
pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {
fgets (sInputBuf, BUFFER_SIZE, pFile);
sRT = strtok (sInputBuf, "\t"); /* Get Route */
sBR = strtok (NULL, "\t"); /* Get Branch */
sVR = strtok (NULL, "\t"); /* Get Version */
sST = strtok (NULL, "\t"); /* Get Stop Number */
sVI = strtok (NULL, "\t"); /* Get Vehicle */
sDT = strtok (NULL, "\t"); /* Get Date */
sTM = strtok (NULL, "\t"); /* Get Time */
/* ACTUAL INSERT WILL GO HERE */
n++;
}
fclose (pFile);
printf("Imported %d records in %4.2f seconds\n", n, (clock() - cStartClock) / (double)CLOCKS_PER_SEC);
sqlite3_close(db);
return 0;
}
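A side note: for brevity, the benchmark code above ignores the return codes of the SQLite calls. In a real application you'd want to check them; a minimal sketch of what that could look like (not part of the timed benchmark):
/* Minimal error-checking sketch (not part of the benchmark).
   sqlite3_open() and sqlite3_exec() return SQLITE_OK on success;
   the error message filled in by sqlite3_exec() must be released
   with sqlite3_free(). */
if (sqlite3_open(DATABASE, &db) != SQLITE_OK) {
    fprintf(stderr, "Can't open database: %s\n", sqlite3_errmsg(db));
    sqlite3_close(db);
    return 1;
}
if (sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg) != SQLITE_OK) {
    fprintf(stderr, "SQL error: %s\n", sErrMsg);
    sqlite3_free(sErrMsg);
}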
The "Control"
Running the code as-is doesn't actually perform any database operations, but it will give us an idea of how fast the raw C file IO and string processing operations are.
Imported 864913 records in 0.94 seconds
Great! We can do 920 000 inserts-per-second, provided we don't actually do any inserts :-)
The "Worst-Case-Scenario"
We're going to generate the SQL string using the values read from the file and invoke that SQL operation using sqlite3_exec:
sprintf(sSQL, "INSERT INTO TTC VALUES (NULL, '%s', '%s', '%s', '%s', '%s', '%s', '%s')", sRT, sBR, sVR, sST, sVI, sDT, sTM);
sqlite3_exec(db, sSQL, NULL, NULL, &sErrMsg);
This is going to be slow because the SQL will be compiled into VDBE code for every insert and every insert will happen in its own transaction. How slow?
Imported 864913 records in 9933.61 seconds
Yikes! 2 hours and 45 minutes! That's only 85 inserts-per-second.
Using a Transaction
By default SQLite will evaluate every INSERT / UPDATE statement within a unique transaction. If performing a large number of inserts, it's advisable to wrap your operation in a transaction:
sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {
...
}
fclose (pFile);
sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
Imported 864913 records in 38.03 seconds
That's better. Simply wrapping all of our inserts in a single transaction improved our performance to 23 000 inserts-per-second.
Using a Prepared Statement
Using a transaction was a huge improvement, but recompiling the SQL statement for every insert doesn't make sense if we're using the same SQL over and over. Let's use sqlite3_prepare_v2 to compile our SQL statement once and then bind our parameters to that statement using sqlite3_bind_text:
/* Open input file and import into Database*/
cStartClock = clock();
sprintf(sSQL, "INSERT INTO TTC VALUES (NULL, @RT, @BR, @VR, @ST, @VI, @DT, @TM)");
sqlite3_prepare_v2(db, sSQL, BUFFER_SIZE, &stmt, &tail);
sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {
fgets (sInputBuf, BUFFER_SIZE, pFile);
sRT = strtok (sInputBuf, "\t"); /* Get Route */
sBR = strtok (NULL, "\t"); /* Get Branch */
sVR = strtok (NULL, "\t"); /* Get Version */
sST = strtok (NULL, "\t"); /* Get Stop Number */
sVI = strtok (NULL, "\t"); /* Get Vehicle */
sDT = strtok (NULL, "\t"); /* Get Date */
sTM = strtok (NULL, "\t"); /* Get Time */
sqlite3_bind_text(stmt, 1, sRT, -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 2, sBR, -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 3, sVR, -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 4, sST, -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 5, sVI, -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 6, sDT, -1, SQLITE_TRANSIENT);
sqlite3_bind_text(stmt, 7, sTM, -1, SQLITE_TRANSIENT);
sqlite3_step(stmt);
sqlite3_clear_bindings(stmt);
sqlite3_reset(stmt);
n++;
}
fclose (pFile);
sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
printf("Imported %d records in %4.2f seconds\n", n, (clock() - cStartClock) / (double)CLOCKS_PER_SEC);
sqlite3_finalize(stmt);
sqlite3_close(db);
return 0;
Imported 864913 records in 16.27 seconds
Nice! There's a little bit more code (don't forget to call sqlite3_clear_bindings and sqlite3_reset), but we've more than doubled our performance to 53 000 inserts-per-second.
PRAGMA synchronous = OFF
By default, SQLite will pause after issuing an OS-level write command. This guarantees that the data is written to the disk. By setting synchronous = OFF, we are instructing SQLite to simply hand off the data to the OS for writing and then continue. There's a chance that the database file may become corrupted if the computer suffers a catastrophic crash (or power failure) before the data is written to the platter:
/* Open the Database and create the Schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, &sErrMsg);
Imported 864913 records in 12.41 seconds
The improvements are now smaller, but we're up to 69 600 inserts-per-second.
PRAGMA journal_mode = MEMORY
Consider storing the rollback journal in memory by evaluating PRAGMA journal_mode = MEMORY. Your transaction will be faster, but if you lose power or your program crashes during a transaction, your database could be left in a corrupt state with a partially-completed transaction:
/* Open the Database and create the Schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, &sErrMsg);
Imported 864913 records in 13.50 seconds
A little slower than the previous optimization at 64 000 inserts-per-second.
PRAGMA synchronous = OFF and PRAGMA journal_mode = MEMORY
Let's combine the previous two optimizations. It's a little more risky (in case of a crash), but we're just importing data (not running a bank):
/* Open the Database and create the Schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, &sErrMsg);
Imported 864913 records in 12.00 seconds
Fantastic! We're able to do 72 000 inserts-per-second.
Using an In-Memory Database
Just for kicks, let's build upon all of the previous optimizations and redefine the database filename so we're working entirely in RAM:
#define DATABASE ":memory:"
Imported 864913 records in 10.94 seconds
It's not super-practical to store our database in RAM, but it's impressive that we can perform 79 000 inserts-per-second.
Refactoring C Code
Although not specifically an SQLite improvement, I don't like the extra char* assignment operations in the while loop. Let's quickly refactor that code to pass the output of strtok() directly into sqlite3_bind_text() and let the compiler try to speed things up for us:
pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {
fgets (sInputBuf, BUFFER_SIZE, pFile);
sqlite3_bind_text(stmt, 1, strtok (sInputBuf, "\t"), -1, SQLITE_TRANSIENT); /* Get Route */
sqlite3_bind_text(stmt, 2, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT); /* Get Branch */
sqlite3_bind_text(stmt, 3, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT); /* Get Version */
sqlite3_bind_text(stmt, 4, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT); /* Get Stop Number */
sqlite3_bind_text(stmt, 5, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT); /* Get Vehicle */
sqlite3_bind_text(stmt, 6, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT); /* Get Date */
sqlite3_bind_text(stmt, 7, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT); /* Get Time */
sqlite3_step(stmt); /* Execute the SQL Statement */
sqlite3_clear_bindings(stmt); /* Clear bindings */
sqlite3_reset(stmt); /* Reset VDBE */
n++;
}
fclose (pFile);
Note: We are back to using a real database file. In-memory databases are fast, but not necessarily practical.
Imported 864913 records in 8.94 seconds
A slight refactoring to the string processing code used in our parameter binding has allowed us to perform 96 700 inserts-per-second. I think it's safe to say that this is plenty fast. As we start to tweak other variables (e.g. page size, index creation, etc.) this will be our benchmark.
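For example, two of those knobs can be set the same way as the other pragmas. A quick sketch (the values shown are illustrative starting points to experiment with, not measured recommendations; note that PRAGMA page_size only takes effect if issued before the database is first populated):
/* Illustrative values only - experiment and measure for your own data. */
sqlite3_exec(db, "PRAGMA page_size = 4096", NULL, NULL, &sErrMsg);   /* must run before the table is filled */
sqlite3_exec(db, "PRAGMA cache_size = 10000", NULL, NULL, &sErrMsg); /* page-cache size, in pages */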
Summary (so far)
I hope you're still with me! The reason we started down this road is that bulk-insert performance varies so wildly with SQLite, and it's not always obvious what changes need to be made to speed up our operation. Using the same compiler (and compiler options), the same version of SQLite and the same data, we've optimized our code and our usage of SQLite to go from a worst-case scenario of 85 inserts-per-second to over 96000 inserts-per-second!
CREATE INDEX then INSERT vs. INSERT then CREATE INDEX
Before we start measuring SELECT performance, we know that we'll be creating indexes. It's been suggested in one of the answers below that when doing bulk inserts, it is faster to create the index after the data has been inserted (as opposed to creating the index first and then inserting the data). Let's try:
Create Index then Insert Data
sqlite3_exec(db, "CREATE INDEX 'TTC_Stop_Index' ON 'TTC' ('Stop')", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
...
Imported 864913 records in 18.13 seconds
Insert Data then Create Index
...
sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "CREATE INDEX 'TTC_Stop_Index' ON 'TTC' ('Stop')", NULL, NULL, &sErrMsg);
Imported 864913 records in 13.66 seconds
As expected, bulk-inserts are slower if one column is indexed, but it does make a difference if the index is created after the data is inserted. Our no-index baseline is 96 000 inserts-per-second. Creating the index first and then inserting the data gives us 47 700 inserts-per-second, whereas inserting the data first and then creating the index gives us 63 300 inserts-per-second.
I'd gladly take suggestions for other scenarios to try... And will be compiling similar data for SELECT queries soon.
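In the meantime, the SELECT side will use the same prepared-statement pattern; here is a minimal sketch of the kind of query I have in mind against the Stop index (the query and stop number are purely illustrative, no timings yet):
/* Sketch only: fetch schedule entries for one (illustrative) stop
   number using the TTC_Stop_Index created above. */
sqlite3_stmt * pQuery;
sqlite3_prepare_v2(db, "SELECT Route_ID, Time FROM TTC WHERE Stop = @Stop", -1, &pQuery, NULL);
sqlite3_bind_int(pQuery, 1, 1234);   /* 1234 = example stop number */
while (sqlite3_step(pQuery) == SQLITE_ROW) {
    printf("%s %s\n",
        (const char*)sqlite3_column_text(pQuery, 0),
        (const char*)sqlite3_column_text(pQuery, 1));
}
sqlite3_finalize(pQuery);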
Source: (StackOverflow)
How can the theoretical peak performance of 4 floating point operations (double precision) per cycle be achieved on a modern x86-64 Intel CPU?
As far as I understand it, it takes three cycles for an SSE add and five cycles for a mul to complete on most modern Intel CPUs (see for example Agner Fog's 'Instruction Tables'). Due to pipelining, one can get a throughput of one add per cycle if the algorithm has at least three independent summations. Since that is true for the packed addpd as well as the scalar addsd versions, and SSE registers can contain two doubles, the throughput can be as much as two flops per cycle.
Furthermore, it seems (although I've not seen any proper documentation on this) that adds and muls can be executed in parallel, giving a theoretical max throughput of four flops per cycle.
However, I've not been able to replicate that performance with a simple C/C++ programme. My best attempt resulted in about 2.7 flops/cycle. If anyone can contribute a simple C/C++ or assembler programme which demonstrates peak performance, that'd be greatly appreciated.
My attempt:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sys/time.h>
double stoptime(void) {
struct timeval t;
gettimeofday(&t,NULL);
return (double) t.tv_sec + t.tv_usec/1000000.0;
}
double addmul(double add, double mul, int ops){
// Need to initialise differently otherwise compiler might optimise away
double sum1=0.1, sum2=-0.1, sum3=0.2, sum4=-0.2, sum5=0.0;
double mul1=1.0, mul2= 1.1, mul3=1.2, mul4= 1.3, mul5=1.4;
int loops=ops/10; // We have 10 floating point operations inside the loop
double expected = 5.0*add*loops + (sum1+sum2+sum3+sum4+sum5)
+ pow(mul,loops)*(mul1+mul2+mul3+mul4+mul5);
for (int i=0; i<loops; i++) {
mul1*=mul; mul2*=mul; mul3*=mul; mul4*=mul; mul5*=mul;
sum1+=add; sum2+=add; sum3+=add; sum4+=add; sum5+=add;
}
return sum1+sum2+sum3+sum4+sum5+mul1+mul2+mul3+mul4+mul5 - expected;
}
int main(int argc, char** argv) {
if (argc != 2) {
printf("usage: %s <num>\n", argv[0]);
printf("number of operations: <num> millions\n");
exit(EXIT_FAILURE);
}
int n = atoi(argv[1]) * 1000000;
if (n<=0)
n=1000;
double x = M_PI;
double y = 1.0 + 1e-8;
double t = stoptime();
x = addmul(x, y, n);
t = stoptime() - t;
printf("addmul:\t %.3f s, %.3f Gflops, res=%f\n", t, (double)n/t/1e9, x);
return EXIT_SUCCESS;
}
Compiled with
g++ -O2 -march=native addmul.cpp ; ./a.out 1000
produces the following output on an Intel Core i5-750, 2.66 GHz.
addmul: 0.270 s, 3.707 Gflops, res=1.326463
That is, just about 1.4 flops per cycle. Looking at the assembler code with
g++ -S -O2 -march=native -masm=intel addmul.cpp
the main loop seems kind of optimal to me:
.L4:
inc eax
mulsd xmm8, xmm3
mulsd xmm7, xmm3
mulsd xmm6, xmm3
mulsd xmm5, xmm3
mulsd xmm1, xmm3
addsd xmm13, xmm2
addsd xmm12, xmm2
addsd xmm11, xmm2
addsd xmm10, xmm2
addsd xmm9, xmm2
cmp eax, ebx
jne .L4
Replacing the scalar versions with the packed versions (addpd and mulpd) would double the flop count without changing the execution time, so I'd get just short of 2.8 flops per cycle. Is there a simple example which achieves four flops per cycle?
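For reference, here's a minimal sketch of what that packed inner loop could look like with SSE2 intrinsics (untested, shown only to illustrate the addpd/mulpd idea; it assumes the same add, mul and loops variables as the scalar code):
#include <emmintrin.h>   /* SSE2 intrinsics */
/* Sketch only: packed version of the scalar loop. Each __m128d holds two
   doubles, so every _mm_mul_pd/_mm_add_pd counts as two flops. The five
   independent chains are kept to hide the add/mul latencies, as before. */
__m128d vmul = _mm_set1_pd(mul), vadd = _mm_set1_pd(add);
__m128d m1 = _mm_set1_pd(1.0), m2 = _mm_set1_pd(1.1), m3 = _mm_set1_pd(1.2),
        m4 = _mm_set1_pd(1.3), m5 = _mm_set1_pd(1.4);
__m128d s1 = _mm_set1_pd(0.1), s2 = _mm_set1_pd(-0.1), s3 = _mm_set1_pd(0.2),
        s4 = _mm_set1_pd(-0.2), s5 = _mm_setzero_pd();
for (int i = 0; i < loops; i++) {
    m1 = _mm_mul_pd(m1, vmul); m2 = _mm_mul_pd(m2, vmul); m3 = _mm_mul_pd(m3, vmul);
    m4 = _mm_mul_pd(m4, vmul); m5 = _mm_mul_pd(m5, vmul);
    s1 = _mm_add_pd(s1, vadd); s2 = _mm_add_pd(s2, vadd); s3 = _mm_add_pd(s3, vadd);
    s4 = _mm_add_pd(s4, vadd); s5 = _mm_add_pd(s5, vadd);
}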
Nice little programme by Mysticial; here are my results (run just for a few seconds though; 10.66 Gflops is the theoretical peak of 2.66 GHz × 4 flops/cycle):
- gcc -O2 -march=nocona: 5.6 Gflops out of 10.66 Gflops (2.1 flops/cycle)
- cl /O2, openmp removed: 10.1 Gflops out of 10.66 Gflops (3.8 flops/cycle)
It all seems a bit complex, but my conclusions so far:
- gcc -O2 changes the order of independent floating point operations with the aim of alternating addpd's and mulpd's if possible. Same applies to gcc-4.6.2 -O2 -march=core2.
- gcc -O2 -march=nocona seems to keep the order of floating point operations as defined in the C++ source.
- cl /O2, the 64-bit compiler from the SDK for Windows 7, does loop-unrolling automatically and seems to try and arrange operations so that groups of three addpd's alternate with three mulpd's (well, at least on my system and for my simple programme).
- My Core i5 750 (Nehalem architecture) doesn't like alternating adds and muls and seems unable to run both operations in parallel. However, if grouped in 3's it suddenly works like magic.
- Other architectures (possibly Sandy Bridge and others) appear to be able to execute add/mul in parallel without problems if they alternate in the assembly code.
Although it's difficult to admit, on my system cl /O2 does a much better job at low-level optimising and achieves close to peak performance for the little C++ example above. I measured between 1.85-2.01 flops/cycle (I used clock() in Windows, which is not that precise. I guess I need to use a better timer - thanks Mackie Messer).
The best I managed with gcc was to manually unroll the loop and arrange additions and multiplications in groups of three. With
g++ -O2 -march=nocona addmul_unroll.cpp
I get at best 0.207 s and 4.825 Gflops, which corresponds to 1.8 flops/cycle; I'm quite happy with that now.
In the C++ code I've replaced the for loop with:
for (int i=0; i<loops/3; i++) {
mul1*=mul; mul2*=mul; mul3*=mul;
sum1+=add; sum2+=add; sum3+=add;
mul4*=mul; mul5*=mul; mul1*=mul;
sum4+=add; sum5+=add; sum1+=add;
mul2*=mul; mul3*=mul; mul4*=mul;
sum2+=add; sum3+=add; sum4+=add;
mul5*=mul; mul1*=mul; mul2*=mul;
sum5+=add; sum1+=add; sum2+=add;
mul3*=mul; mul4*=mul; mul5*=mul;
sum3+=add; sum4+=add; sum5+=add;
}
And the assembly now looks like
.L4:
mulsd xmm8, xmm3
mulsd xmm7, xmm3
mulsd xmm6, xmm3
addsd xmm13, xmm2
addsd xmm12, xmm2
addsd xmm11, xmm2
mulsd xmm5, xmm3
mulsd xmm1, xmm3
mulsd xmm8, xmm3
addsd xmm10, xmm2
addsd xmm9, xmm2
addsd xmm13, xmm2
...
Source: (StackOverflow)