EzDevInfo.com

database-performance interview questions

Top database-performance frequently asked interview questions

Dropping column in Postgres on a large dataset

So I have a table with a large dataset, and this table has three columns that I would like to drop.
The question is: how will Postgres deal with it?

Will it walk through every entry, or will it just update the mapping info without much overhead? Can I just run an ALTER TABLE, or should I use a swap table in this particular case?

And, if it makes any difference, all three columns have fixed length (two integers and one numeric).

I'm sorry if it's been asked already, but Google couldn't find any related questions / articles ...
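
For reference, in PostgreSQL a DROP COLUMN is a catalog-only change: the column is marked as dropped and existing rows are not rewritten, so the ALTER TABLE itself should return quickly even on a large table. A minimal sketch (table and column names here are hypothetical):

-- Metadata-only change; existing rows are not rewritten.
ALTER TABLE big_table
  DROP COLUMN col_a,
  DROP COLUMN col_b,
  DROP COLUMN col_c;

-- Optionally reclaim the space later with a full rewrite
-- (takes an exclusive lock and rewrites every row):
VACUUM FULL big_table;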


Source: (StackOverflow)

LOWER LIKE vs iLIKE

How does the performance of the following two query components compare?

LOWER LIKE

... LOWER(description) LIKE '%abcde%' ...

iLIKE

... description iLIKE '%abcde%' ...
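
With a leading wildcard, neither form can use a plain B-tree index, so the comparison is mostly about per-row cost; if performance matters, the usual workaround is a trigram index. A sketch, assuming PostgreSQL with the pg_trgm extension (table name hypothetical):

-- A trigram GIN index lets ILIKE with leading wildcards use an index
-- instead of a sequential scan.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX description_trgm_idx
  ON items USING gin (description gin_trgm_ops);

SELECT * FROM items WHERE description ILIKE '%abcde%';

-- For the LOWER(...) LIKE form, index the lowered expression instead:
CREATE INDEX description_lower_trgm_idx
  ON items USING gin (lower(description) gin_trgm_ops);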

Source: (StackOverflow)


Is "where (ParamID = @ParamID) OR (@ParamID = -1)" a good practice in SQL selection?

I used to write SQL statements like

select * from teacher where (TeacherID = @TeacherID) OR (@TeacherID = -1)


and pass a @TeacherID value of -1 to select all teachers.

Now I'm worried about the performance. Can you tell me whether that is a good practice or a bad one?

Many thanks.
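
For context, the catch-all predicate forces one cached plan to serve both the "single teacher" and "all teachers" cases, which can prevent an index seek on TeacherID. One commonly suggested alternative, sketched here for SQL Server (not the only fix), is to use NULL as the "all" sentinel and recompile per call:

-- Sketch: pass NULL for "all teachers"; OPTION (RECOMPILE) lets the
-- optimizer build a plan for the actual parameter value each time.
SELECT *
FROM teacher
WHERE (@TeacherID IS NULL OR TeacherID = @TeacherID)
OPTION (RECOMPILE);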


Source: (StackOverflow)

How big can a MySQL database get before performance starts to degrade

At what point does a MySQL database start to lose performance?

  • Does physical database size matter?
  • Does the number of records matter?
  • Is any performance degradation linear or exponential?

I have what I believe to be a large database, with roughly 15M records which take up almost 2GB. Based on these numbers, is there any incentive for me to clean the data out, or am I safe to allow it to continue scaling for a few more years?
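
As a rough rule, missing indexes hurt long before raw size does. A quick, generic check (table and column names are hypothetical) is to run EXPLAIN on your slowest queries and see whether they scan all 15M rows or use an index:

-- type = ALL with rows in the millions means a full table scan;
-- ref/range with a small row estimate means an index is being used.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;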


Source: (StackOverflow)

How to configure MongoDB Java driver MongoOptions for production use?

I've been searching the web looking for best practices for configuring MongoOptions for the MongoDB Java driver, and I haven't come up with much other than the API. This search started after I ran into the "com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection" error; by increasing the connections/multiplier I was able to solve that problem. I'm looking for links to, or your own, best practices for configuring these options for production.

The options for the 2.4 driver include: http://api.mongodb.org/java/2.4/com/mongodb/MongoOptions.html

  • autoConnectRetry
  • connectionsPerHost
  • connectTimeout
  • maxWaitTime
  • socketTimeout
  • threadsAllowedToBlockForConnectionMultiplier

The newer drivers have more options and I would be interested in hearing about those as well.


Source: (StackOverflow)

Postgresql Truncation speed

We're using PostgreSQL 9.1.4 as our db server. I've been trying to speed up my test suite, so I've started profiling the db a bit to see exactly what's going on. We are using database_cleaner to truncate tables at the end of tests. YES, I know transactions are faster; I can't use them in certain circumstances, so I'm not concerned with that.

What I AM concerned with, is why TRUNCATION takes so long (longer than using DELETE) and why it takes EVEN LONGER on my CI server.

Right now, locally (on a Macbook Air) a full test suite takes 28 minutes. Tailing the logs, each time we truncate tables... ie:

TRUNCATE TABLE table1, table2  -- ... etc

it takes over 1 second to perform the truncation. Tailing the logs on our CI server (Ubuntu 10.04 LTS), it takes a full 8 seconds to truncate the tables, and a build takes 84 minutes.

When I switched over to the :deletion strategy, my local build took 20 minutes and the CI server went down to 44 minutes. This is a significant difference and I'm really blown away as to why this might be. I've tuned the DB on the CI server: it has 16 GB system RAM, 4 GB shared_buffers... and an SSD. All the good stuff. How is it possible:

a. that it's SO much slower than my MacBook Air with 2 GB of RAM
b. that TRUNCATION is so much slower than DELETE when the PostgreSQL docs state explicitly that it should be much faster.

Any thoughts?
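
For comparison, the :deletion strategy boils down to plain per-table DELETEs, sketched below with the table names from the example above. On tables holding only a handful of test rows, this can beat TRUNCATE because it avoids TRUNCATE's per-table file-level work (creating and fsyncing new relation files), which is one common explanation for the gap:

-- What the :deletion strategy effectively issues instead of TRUNCATE.
BEGIN;
DELETE FROM table1;
DELETE FROM table2;
COMMIT;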


Source: (StackOverflow)

Can this SQLite query be made much faster?

I have a database representing metadata of a security camera NVR. There's a 26-byte recording row for every 1-minute segment of video. (If you're curious, a design doc is in progress here.) My design limits are 8 cameras, 1 year (~4 million rows, half a million per camera). I've faked up some data to test performance. This query is slower than I expected:

select
  recording.start_time_90k,
  recording.duration_90k,
  recording.video_samples,
  recording.sample_file_bytes,
  recording.video_sample_entry_id
from
  recording
where
  camera_id = ?
order by
  recording.start_time_90k;

That's just scanning all data for a camera, using an index for filtering out other cameras and ordering. Index looks like this:

create index recording_camera_start on recording (camera_id, start_time_90k);

explain query plan looks as expected:

0|0|0|SEARCH TABLE recording USING INDEX recording_camera_start (camera_id=?)

The rows are quite small.

$ sqlite3_analyzer duplicated.db
...

*** Table RECORDING w/o any indices *******************************************

Percentage of total database......................  66.3%
Number of entries................................. 4225560
Bytes of storage consumed......................... 143418368
Bytes of payload.................................. 109333605   76.2%
B-tree depth...................................... 4
Average payload per entry......................... 25.87
Average unused bytes per entry.................... 0.99
Average fanout.................................... 94.00
Non-sequential pages.............................. 1            0.0%
Maximum payload per entry......................... 26
Entries that use overflow......................... 0            0.0%
Index pages used.................................. 1488
Primary pages used................................ 138569
Overflow pages used............................... 0
Total pages used.................................. 140057
Unused bytes on index pages....................... 188317      12.4%
Unused bytes on primary pages..................... 3987216      2.8%
Unused bytes on overflow pages.................... 0
Unused bytes on all pages......................... 4175533      2.9%

*** Index RECORDING_CAMERA_START of table RECORDING ***************************

Percentage of total database......................  33.7%
Number of entries................................. 4155718
Bytes of storage consumed......................... 73003008
Bytes of payload.................................. 58596767    80.3%
B-tree depth...................................... 4
Average payload per entry......................... 14.10
Average unused bytes per entry.................... 0.21
Average fanout.................................... 49.00
Non-sequential pages.............................. 1            0.001%
Maximum payload per entry......................... 14
Entries that use overflow......................... 0            0.0%
Index pages used.................................. 1449
Primary pages used................................ 69843
Overflow pages used............................... 0
Total pages used.................................. 71292
Unused bytes on index pages....................... 8463         0.57%
Unused bytes on primary pages..................... 865598       1.2%
Unused bytes on overflow pages.................... 0
Unused bytes on all pages......................... 874061       1.2%

...

I'd like something like this (maybe only a month at a time, rather than a full year) to be run every time a particular webpage is hit, so I want it to be quite fast. But on my laptop, it takes most of a second, and on the Raspberry Pi 2 I'd like to support, it's way too slow. Times (in seconds) below; it's CPU-bound (user+sys time ~= real time):

laptop$ time ./bench-profiled
trial 0: time 0.633 sec
trial 1: time 0.636 sec
trial 2: time 0.639 sec
trial 3: time 0.679 sec
trial 4: time 0.649 sec
trial 5: time 0.642 sec
trial 6: time 0.609 sec
trial 7: time 0.640 sec
trial 8: time 0.666 sec
trial 9: time 0.715 sec
...
PROFILE: interrupts/evictions/bytes = 1974/489/72648

real    0m20.546s
user    0m16.564s
sys     0m3.976s
(This is Ubuntu 15.10, SQLITE_VERSION says "3.8.11.1")

raspberrypi2$ time ./bench-profiled
trial 0: time 6.334 sec
trial 1: time 6.216 sec
trial 2: time 6.364 sec
trial 3: time 6.412 sec
trial 4: time 6.398 sec
trial 5: time 6.389 sec
trial 6: time 6.395 sec
trial 7: time 6.424 sec
trial 8: time 6.391 sec
trial 9: time 6.396 sec
...
PROFILE: interrupts/evictions/bytes = 19066/2585/43124

real    3m20.083s
user    2m47.120s
sys 0m30.620s
(This is Raspbian Jessie; SQLITE_VERSION says "3.8.7.1")

I'll likely end up keeping some sort of denormalized data, but first I'd like to see if I can get this simple query to perform as well as it can. My benchmark's pretty simple; it prepares the statement in advance and then loops over this:

void Trial(sqlite3_stmt *stmt) {
  int ret;
  while ((ret = sqlite3_step(stmt)) == SQLITE_ROW) ;
  if (ret != SQLITE_DONE) {
    errx(1, "sqlite3_step: %d (%s)", ret, sqlite3_errstr(ret));
  }
  ret = sqlite3_reset(stmt);
  if (ret != SQLITE_OK) {
    errx(1, "sqlite3_reset: %d (%s)", ret, sqlite3_errstr(ret));
  }
}

I made a CPU profile with gperftools. Image:

CPU profile graph

$ google-pprof bench-profiled timing.pprof
Using local file bench-profiled.
Using local file timing.pprof.
Welcome to pprof!  For help, type 'help'.
(pprof) top 10
Total: 593 samples
     154  26.0%  26.0%      377  63.6% sqlite3_randomness
     134  22.6%  48.6%      557  93.9% sqlite3_reset
      83  14.0%  62.6%       83  14.0% __read_nocancel
      61  10.3%  72.8%       61  10.3% sqlite3_strnicmp
      41   6.9%  79.8%       46   7.8% sqlite3_free_table
      26   4.4%  84.1%       26   4.4% sqlite3_uri_parameter
      25   4.2%  88.4%       25   4.2% llseek
      13   2.2%  90.6%      121  20.4% sqlite3_db_config
      12   2.0%  92.6%       12   2.0% __pthread_mutex_unlock_usercnt (inline)
      10   1.7%  94.3%       10   1.7% __GI___pthread_mutex_lock

This looks strange enough to give me hope it can be improved. Maybe I'm doing something dumb. I'm particularly skeptical of the sqlite3_randomness and sqlite3_strnicmp operations:

  • docs say sqlite3_randomness is used for inserting rowids in some circumstances, but I'm just doing a select query. Why would it be using it now? From skimming sqlite3 source code, I see it's used in select for sqlite3ColumnsFromExprList but that seems to be something that'd happen when preparing the statement. I'm doing that once, not in the part being benchmarked.
  • strnicmp is for case-insensitive string comparisons. But every field in this table is an integer. Why would it be using this function? What is it comparing?
  • and in general, I don't know why sqlite3_reset would be expensive or why it'd be called from sqlite3_step.

Schema:

-- Each row represents a single recorded segment of video.
-- Segments are typically ~60 seconds; never more than 5 minutes.
-- Each row should have a matching recording_detail row.
create table recording (
  id integer primary key,
  camera_id integer references camera (id) not null,

  sample_file_bytes integer not null check (sample_file_bytes > 0),

  -- The starting time of the recording, in 90 kHz units since
  -- 1970-01-01 00:00:00 UTC.
  start_time_90k integer not null check (start_time_90k >= 0),

  -- The duration of the recording, in 90 kHz units.
  duration_90k integer not null
      check (duration_90k >= 0 and duration_90k < 5*60*90000),

  video_samples integer not null check (video_samples > 0),
  video_sync_samples integer not null check (video_sync_samples > 0),
  video_sample_entry_id integer references video_sample_entry (id)
);

I've tarred up my test data + test program; you can download it here.


Edit 1:

Ahh, looking through SQLite code, I see a clue:

int sqlite3_step(sqlite3_stmt *pStmt){
  int rc = SQLITE_OK;      /* Result from sqlite3Step() */
  int rc2 = SQLITE_OK;     /* Result from sqlite3Reprepare() */
  Vdbe *v = (Vdbe*)pStmt;  /* the prepared statement */
  int cnt = 0;             /* Counter to prevent infinite loop of reprepares */
  sqlite3 *db;             /* The database connection */

  if( vdbeSafetyNotNull(v) ){
    return SQLITE_MISUSE_BKPT;
  }
  db = v->db;
  sqlite3_mutex_enter(db->mutex);
  v->doingRerun = 0;
  while( (rc = sqlite3Step(v))==SQLITE_SCHEMA
         && cnt++ < SQLITE_MAX_SCHEMA_RETRY ){
    int savedPc = v->pc;
    rc2 = rc = sqlite3Reprepare(v);
    if( rc!=SQLITE_OK) break;
    sqlite3_reset(pStmt);
    if( savedPc>=0 ) v->doingRerun = 1;
    assert( v->expired==0 );
  }

It looks like sqlite3_step calls sqlite3_reset on schema change. (FAQ entry) I don't know why there'd be a schema change since my statement was prepared though...


Edit 2:

I downloaded the SQLite 3.10.1 "amalgamation" and compiled against it with debugging symbols. I get a pretty different profile now that doesn't look as weird, but it's not any faster. Maybe the weird results I saw before were due to Identical Code Folding or something.

CPU profile graph (SQLite 3.10.1 build)


Edit 3:

Trying Ben's clustered index solution below, it is about 3.6X faster. I think this is the best I'm going to do with this query. SQLite's CPU performance is about 700 MB/s on my laptop. Short of rewriting it to use a JIT compiler for its virtual machine or some such, I'm not going to do any better. In particular, I think the bizarre calls I saw on my first profile weren't actually happening; gcc must have written misleading debug info due to optimizations or something.
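
A minimal sketch of what that clustered layout might look like (assumed, not the poster's actual DDL): a WITHOUT ROWID table whose primary key is (camera_id, start_time_90k) stores rows physically in the order the query reads them, so the SELECT becomes a single sequential range scan with no separate index lookups.

-- Sketch only: constraints and comments from the original schema omitted.
create table recording (
  camera_id integer not null,
  start_time_90k integer not null,
  duration_90k integer not null,
  video_samples integer not null,
  video_sync_samples integer not null,
  sample_file_bytes integer not null,
  video_sample_entry_id integer,
  primary key (camera_id, start_time_90k)
) without rowid;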

Even if the CPU performance would be improved, that throughput is more than my storage can do on cold read now, and I think the same is true on the Pi (which has a limited USB 2.0 bus for the SD card).

$ time ./bench
sqlite3 version: 3.10.1
trial 0: realtime 0.172 sec cputime 0.172 sec
trial 1: realtime 0.172 sec cputime 0.172 sec
trial 2: realtime 0.175 sec cputime 0.175 sec
trial 3: realtime 0.173 sec cputime 0.173 sec
trial 4: realtime 0.182 sec cputime 0.182 sec
trial 5: realtime 0.187 sec cputime 0.187 sec
trial 6: realtime 0.173 sec cputime 0.173 sec
trial 7: realtime 0.185 sec cputime 0.185 sec
trial 8: realtime 0.190 sec cputime 0.190 sec
trial 9: realtime 0.192 sec cputime 0.192 sec
trial 10: realtime 0.191 sec cputime 0.191 sec
trial 11: realtime 0.188 sec cputime 0.188 sec
trial 12: realtime 0.186 sec cputime 0.186 sec
trial 13: realtime 0.179 sec cputime 0.179 sec
trial 14: realtime 0.179 sec cputime 0.179 sec
trial 15: realtime 0.188 sec cputime 0.188 sec
trial 16: realtime 0.178 sec cputime 0.178 sec
trial 17: realtime 0.175 sec cputime 0.175 sec
trial 18: realtime 0.182 sec cputime 0.182 sec
trial 19: realtime 0.178 sec cputime 0.178 sec
trial 20: realtime 0.189 sec cputime 0.189 sec
trial 21: realtime 0.191 sec cputime 0.191 sec
trial 22: realtime 0.179 sec cputime 0.179 sec
trial 23: realtime 0.185 sec cputime 0.185 sec
trial 24: realtime 0.190 sec cputime 0.190 sec
trial 25: realtime 0.189 sec cputime 0.189 sec
trial 26: realtime 0.182 sec cputime 0.182 sec
trial 27: realtime 0.176 sec cputime 0.176 sec
trial 28: realtime 0.173 sec cputime 0.173 sec
trial 29: realtime 0.181 sec cputime 0.181 sec
PROFILE: interrupts/evictions/bytes = 547/178/24592

real    0m5.651s
user    0m5.292s
sys     0m0.356s

I may have to keep some denormalized data. Fortunately, I'm thinking I can just keep it in my application's RAM given that it won't be too big, startup doesn't have to be amazingly fast, and only the one process ever writes to the database.


Source: (StackOverflow)

MySQL query performs very slowly

I have developed a user bulk upload module. There are two situations: when I do a bulk upload of 20 000 records into a database with zero records, it takes about 5 hours; but when the database already has about 30 000 records, the upload is very, very slow, taking about 11 hours to upload 20 000 records. I am just reading a CSV file via the fgetcsv method.

if (($handle = fopen($filePath, "r")) !== FALSE) {
    while (($peopleData = fgetcsv($handle, 10240, ",")) !== FALSE) {
        if (count($peopleData) == $fieldsCount) {
            // Inside, I check whether the user already exists (firstName & lastName & DOB).
            // If not, I check whether the email exists; if it does, update the record.
            // Otherwise, insert a new record.
        }
    }
}

Below are the queries that run (I am using the Yii framework):

SELECT * 
FROM `AdvanceBulkInsert` `t` 
WHERE renameSource='24851_bulk_people_2016-02-25_LE CARVALHO 1.zip.csv' 
LIMIT 1

SELECT cf.*, ctyp.typeName, cfv.id as customId, cfv.customFieldId, 
       cfv.relatedId, cfv.fieldValue, cfv.createdAt 
FROM `CustomField` `cf` 
    INNER JOIN CustomType ctyp on ctyp.id = cf.customTypeId 
    LEFT OUTER JOIN CustomValue cfv on cf.id = cfv.customFieldId 
                and relatedId = 0 
    LEFT JOIN CustomFieldSubArea cfsa on cfsa.customFieldId = cf.id 
WHERE ((relatedTable = 'people' and enabled = '1') 
  AND (onCreate = '1')) 
  AND (cfsa.subarea='peoplebulkinsert') 
ORDER BY cf.sortOrder, cf.label

SELECT * 
FROM `User` `t` 
WHERE `t`.`firstName`='Franck' 
  AND `t`.`lastName`='ALLEGAERT ' 
  AND `t`.`dateOfBirth`='1971-07-29' 
  AND (userType NOT IN ("1")) 
LIMIT 1

If the user exists, update the record:

UPDATE `User` SET `id`='51394', `address1`='49 GRANDE RUE', 
                  `mobile`='', `name`=NULL, `firstName`='Franck', 
                  `lastName`='ALLEGAERT ', `username`=NULL, 
                  `password`=NULL, `email`=NULL, `gender`=0, 
                  `zip`='60310', `countryCode`='DZ', 
                  `joinedDate`='2016-02-23 10:44:18', 
                  `signUpDate`='0000-00-00 00:00:00', 
                  `supporterDate`='2016-02-25 13:26:37', `userType`=3, 
                  `signup`=0, `isSysUser`=0, `dateOfBirth`='1971-07-29', 
                  `reqruiteCount`=0, `keywords`='70,71,72,73,74,75', 
                  `delStatus`=0, `city`='AMY', `isUnsubEmail`=0, 
                  `isManual`=1, `isSignupConfirmed`=0, `profImage`=NULL, 
                  `totalDonations`=NULL, `isMcContact`=NULL, 
                  `emailStatus`=NULL, `notes`=NULL, 
                  `addressInvalidatedAt`=NULL, 
                  `createdAt`='2016-02-23 10:44:18', 
                  `updatedAt`='2016-02-25 13:26:37', `longLat`=NULL 
WHERE `User`.`id`='51394'

If the user doesn't exist, insert a new record.

The table engine type is MyISAM. Only the email column has an index.

How can I optimize this to reduce the processing time?

Query 2 took 0.4701 seconds, which means that for 30 000 records it will take about 14 103 seconds, i.e. roughly 235 minutes (approx. 4 hours).
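
One hedged direction (a sketch, not a measured fix): with MyISAM and only the email column indexed, each of the per-row SELECTs above is a full table scan, so lookups get slower as the table grows. Composite indexes matching those lookups would turn them into index seeks:

-- Indexes matching the per-row lookups the import performs.
ALTER TABLE `User`
  ADD INDEX `idx_user_name_dob` (`firstName`, `lastName`, `dateOfBirth`);

ALTER TABLE `CustomValue`
  ADD INDEX `idx_customvalue_field_related` (`customFieldId`, `relatedId`);

ALTER TABLE `CustomFieldSubArea`
  ADD INDEX `idx_cfsa_field_subarea` (`customFieldId`, `subarea`(64));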

Update

CREATE TABLE IF NOT EXISTS `User` (
  `id` bigint(20) NOT NULL,
  `address1` text COLLATE utf8_unicode_ci,
  `mobile` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `name` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
  `firstName` varchar(64) COLLATE utf8_unicode_ci DEFAULT NULL,
  `lastName` varchar(64) COLLATE utf8_unicode_ci DEFAULT NULL,
  `username` varchar(20) COLLATE utf8_unicode_ci DEFAULT NULL,
  `password` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
  `email` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
  `gender` tinyint(2) NOT NULL DEFAULT '0' COMMENT '1 - female, 2-male, 0 - unknown',
  `zip` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `countryCode` varchar(3) COLLATE utf8_unicode_ci DEFAULT NULL,
  `joinedDate` datetime DEFAULT NULL,
  `signUpDate` datetime NOT NULL COMMENT 'User signed up date',
  `supporterDate` datetime NOT NULL COMMENT 'Date which user get supporter',
  `userType` tinyint(2) NOT NULL,
  `signup` tinyint(2) NOT NULL DEFAULT '0' COMMENT 'whether user followed signup process 1 - signup, 0 - not signup',
  `isSysUser` tinyint(1) NOT NULL DEFAULT '0' COMMENT '1 - system user, 0 - not a system user',
  `dateOfBirth` date DEFAULT NULL COMMENT 'User date of birth',
  `reqruiteCount` int(11) DEFAULT '0' COMMENT 'User count that he has reqruited',
  `keywords` text COLLATE utf8_unicode_ci COMMENT 'Kewords',
  `delStatus` tinyint(2) NOT NULL DEFAULT '0' COMMENT '0 - active, 1 - deleted',
  `city` varchar(50) COLLATE utf8_unicode_ci DEFAULT NULL,
  `isUnsubEmail` tinyint(1) NOT NULL DEFAULT '0' COMMENT '0 - ok, 1 - Unsubscribed form email',
  `isManual` tinyint(1) NOT NULL DEFAULT '0' COMMENT '0 - ok, 1 - Manualy add',
  `longLat` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL COMMENT 'Longitude and Latitude',
  `isSignupConfirmed` tinyint(4) NOT NULL DEFAULT '0' COMMENT 'Whether user has confirmed signup ',
  `profImage` tinytext COLLATE utf8_unicode_ci COMMENT 'Profile image name or URL',
  `totalDonations` float DEFAULT NULL COMMENT 'Total donations made by the user',
  `isMcContact` tinyint(1) DEFAULT NULL COMMENT '1 - Mailchimp contact',
  `emailStatus` tinyint(2) DEFAULT NULL COMMENT '1-bounced, 2-blocked',
  `notes` text COLLATE utf8_unicode_ci,
  `addressInvalidatedAt` datetime DEFAULT NULL,
  `createdAt` datetime NOT NULL,
  `updatedAt` datetime DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CREATE TABLE IF NOT EXISTS `AdvanceBulkInsert` (
  `id` int(11) NOT NULL,
  `source` varchar(256) NOT NULL,
  `renameSource` varchar(256) DEFAULT NULL,
  `countryCode` varchar(3) NOT NULL,
  `userType` tinyint(2) NOT NULL,
  `size` varchar(128) NOT NULL,
  `errors` varchar(512) NOT NULL,
  `status` char(1) NOT NULL COMMENT '1:Queued, 2:In Progress, 3:Error, 4:Finished, 5:Cancel',
  `createdAt` datetime NOT NULL,
  `createdBy` int(11) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `CustomField` (
  `id` int(11) NOT NULL,
  `customTypeId` int(11) NOT NULL,
  `fieldName` varchar(64) COLLATE utf8_unicode_ci DEFAULT NULL,
  `relatedTable` varchar(64) COLLATE utf8_unicode_ci DEFAULT NULL,
  `defaultValue` text COLLATE utf8_unicode_ci,
  `sortOrder` int(11) NOT NULL DEFAULT '0',
  `enabled` char(1) COLLATE utf8_unicode_ci DEFAULT '1',
  `listItemTag` char(1) COLLATE utf8_unicode_ci DEFAULT NULL,
  `required` char(1) COLLATE utf8_unicode_ci DEFAULT '0',
  `onCreate` char(1) COLLATE utf8_unicode_ci DEFAULT '1',
  `onEdit` char(1) COLLATE utf8_unicode_ci DEFAULT '1',
  `onView` char(1) COLLATE utf8_unicode_ci DEFAULT '1',
  `listValues` text COLLATE utf8_unicode_ci,
  `label` varchar(64) COLLATE utf8_unicode_ci DEFAULT NULL,
  `htmlOptions` text COLLATE utf8_unicode_ci
) ENGINE=MyISAM AUTO_INCREMENT=12 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CREATE TABLE IF NOT EXISTS `CustomFieldSubArea` (
  `id` int(11) NOT NULL,
  `customFieldId` int(11) NOT NULL,
  `subarea` varchar(256) COLLATE utf8_unicode_ci NOT NULL
) ENGINE=MyISAM AUTO_INCREMENT=43 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CREATE TABLE IF NOT EXISTS `CustomValue` (
  `id` int(11) NOT NULL,
  `customFieldId` int(11) NOT NULL,
  `relatedId` int(11) NOT NULL,
  `fieldValue` text COLLATE utf8_unicode_ci,
  `createdAt` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=MyISAM AUTO_INCREMENT=86866 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

The entire PHP code is here: http://pastie.org/10737962

Update 2

EXPLAIN output of the query:

(screenshot of the EXPLAIN output)


Source: (StackOverflow)

Using git repository as a database backend

I'm doing a project that deals with a structured document database. I have a tree of categories (~1000 categories, up to ~50 categories on each level), and each category contains several thousands (up to, say, ~10000) of structured documents. Each document is several kilobytes of data in some structured form (I'd prefer YAML, but it may just as well be JSON or XML).

Users of this system perform several types of operations:

  • retrieving of these documents by ID
  • searching for documents by some of the structured attributes inside them
  • editing documents (i.e. adding/removing/renaming/merging); each edit operation should be recorded as a transaction with some comment
  • viewing a history of recorded changes for particular document (including viewing who, when and why changed the document, getting earlier version - and probably reverting to this one if requested)

Of course, the traditional solution would be using some sort of document database (such as CouchDB or Mongo) for this problem. However, this version control (history) requirement tempted me toward a wild idea: why shouldn't I use a git repository as the database backend for this application?

At first glance, it could be solved like this:

  • category = directory, document = file
  • getting document by ID => changing directories + reading a file in a working copy
  • editing documents with edit comments => making commits by various users + storing commit messages
  • history => normal git log and retrieval of older transactions
  • search => that's a slightly trickier part, I guess it would require periodic export of a category into relational database with indexing of columns that we'll allow to search by

Are there any other common pitfalls in this solution? Has anyone tried to implement such a backend already (i.e. for any popular frameworks - RoR, node.js, Django, CakePHP)? Does this solution have any possible implications for performance or reliability - i.e. is it proven that git would be much slower than traditional database solutions, or that there would be scalability/reliability pitfalls? I presume that a cluster of such servers that push/pull each other's repositories should be fairly robust & reliable.

Basically, tell me whether this solution will work, and why it will or won't.


Source: (StackOverflow)

MySQL Partitioning / Sharding / Splitting - which way to go?

We have an InnoDB database that is about 70 GB and we expect it to grow to several hundred GB in the next 2 to 3 years. About 60 % of the data belong to a single table. Currently the database is working quite well as we have a server with 64 GB of RAM, so almost the whole database fits into memory, but we’re concerned about the future when the amount of data will be considerably larger. Right now we’re considering some way of splitting up the tables (especially the one that accounts for the biggest part of the data) and I’m now wondering, what would be the best way to do it.

The options I’m currently aware of are

  • Using MySQL Partitioning that comes with version 5.1
  • Using some kind of third party library that encapsulates the partitioning of the data (like hibernate shards)
  • Implementing it ourselves inside our application

Our application is built on J2EE and EJB 2.1 (hopefully we’re switching to EJB 3 some day).

What would you suggest?
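
For illustration only, the first option might look something like the sketch below; the table and the created_at column are hypothetical, and MySQL additionally requires every unique key to include the partitioning column.

-- Native MySQL 5.1 range partitioning of the large table by year.
ALTER TABLE big_table
  PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2009 VALUES LESS THAN (2010),
    PARTITION p2010 VALUES LESS THAN (2011),
    PARTITION p2011 VALUES LESS THAN (2012),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );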

EDIT (2011-02-11):
Just an update: Currently the size of the database is 380 GB, the data size of our "big" table is 220 GB and the size of its index is 36 GB. So while the whole table does not fit in memory any more, the index does.
The system is still performing fine (still on the same hardware) and we're still thinking about partitioning the data.

EDIT (2014-06-04): One more update: The size of the whole database is 1.5 TB, the size of our "big" table is 1.1 TB. We upgraded our server to a 4 processor machine (Intel Xeon E7450) with 128 GB RAM. The system is still performing fine. What we're planning to do next is putting our big table on a separate database server (we've already done the necessary changes in our software) while simultaneously upgrading to new hardware with 256 GB RAM.

This setup is supposed to last for two years. Then we will either have to finally start implementing a sharding solution or just buy servers with 1 TB of RAM which should keep us going for some time.

EDIT (2016-01-18):

We have since put our big table in its own database on a separate server. Currently the size of this database is about 1.9 TB, and the size of the other database (with all tables except for the "big" one) is 1.1 TB.

Current Hardware setup:

  • HP ProLiant DL 580
  • 4 x Intel(R) Xeon(R) CPU E7- 4830
  • 256 GB RAM

Performance is fine with this setup.


Source: (StackOverflow)

When will a FAST_FORWARD cursor have a work table (and is this something to avoid)?

Background

I noticed whilst experimenting with running total queries that sometimes the estimated plan just shows a "Fetch Query"

Fetch

and the actual plan shows repeated Fetches from the Clustered Index Scan

Fetch Scan

on other occasions (e.g when adding a TOP to the query) the estimated plan shows a "Population Query" stage that populates a work table

Fetch and Populate

With the actual plan showing a clustered index scan to populate the work table then repeated seeks against that work table.

Seeks

Question

  1. What criteria does SQL Server use in choosing one approach over the other?
  2. Would I be right in thinking that the first method (without the additional work table population step) is more efficient?

(Bonus question: If anyone could explain why each scan in the first query counts as 2 logical reads that might be quite enlightening too)

Additional Information

I have found this article here which explains that FAST_FORWARD cursors can either use a dynamic plan or a static plan. The first query in this case appears to be using a dynamic plan and the second one a static plan.

I've also found that if I try

SET @C2 = CURSOR DYNAMIC TYPE_WARNING FOR SELECT TOP ...

The cursor gets implicitly converted to a keyset cursor, so it is clear that the TOP construct is not supported for dynamic cursors, perhaps for the reasons in Ruben's answer. I'm still looking for a definitive explanation of this.

However I have also read that dynamic cursors tend to be slower than their static counterparts (source 1, source 2) which seems surprising to me given that the static variety have to read the source data, copy it, then read the copy rather than just read the source data. The article I referenced earlier mentions that dynamic cursors use markers. Can anyone explain what these are? Is it just a RID or the CI key, or something different?

Script

SET STATISTICS IO OFF

CREATE TABLE #T ( ord INT IDENTITY PRIMARY KEY, total INT, Filler char(8000))

INSERT INTO #T (total) VALUES (37),(80),(55),(31),(53)

DECLARE @running_total INT, 
    @ord INT, 
    @total INT

SET @running_total = 0
SET STATISTICS IO ON
DECLARE @C1 AS CURSOR;
SET @C1 = CURSOR FAST_FORWARD FOR SELECT ord, total FROM #T ORDER BY ord;
OPEN @C1;
PRINT 'Initial FETCH C1'
FETCH NEXT FROM @C1 INTO @ord, @total ;
WHILE @@FETCH_STATUS = 0
BEGIN
  SET @running_total = @running_total + @total
  PRINT 'FETCH C1'
  FETCH NEXT FROM @C1 INTO @ord, @total ;
END

SET @running_total = 0
SET STATISTICS IO ON
DECLARE @C2 AS CURSOR;
SET @C2 = CURSOR FAST_FORWARD FOR SELECT TOP 5 ord, total FROM #T ORDER BY ord;
OPEN @C2;
PRINT 'Initial FETCH C2'
FETCH NEXT FROM @C2 INTO @ord, @total ;
WHILE @@FETCH_STATUS = 0
BEGIN
  SET @running_total = @running_total + @total
  PRINT 'FETCH C2'
  FETCH NEXT FROM @C2 INTO @ord, @total ;
END

PRINT 'End C2'
DROP TABLE #T 

Source: (StackOverflow)

Database partitioning - Horizontal vs Vertical - Difference between Normalization and Row Splitting?

I am trying to grasp the different concepts of Database Partitioning and this is what I understood of it:

Horizontal Partitioning/Sharding: Splitting a table into different tables that will contain a subset of the rows that were in the initial table (an example that I have seen a lot is splitting a Users table by continent, like a sub-table for North America, another one for Europe, etc...), with each partition being in a different physical location (i.e. a different machine). As I understood it, Horizontal Partitioning and Sharding are the exact same thing(?).

Vertical Partitioning: From what I understood (http://technet.microsoft.com/en-us/library/ms178148%28v=sql.105%29.aspx ), there are 2 sorts of Vertical Partitioning:

  • Normalization (which consists of removing redundancies from the database by splitting tables and linking them with a foreign key).

  • Row Splitting: here is what I don't understand. What is the difference between Normalization and Row Splitting? How do these two techniques differ from each other? (See the sketch just after this list.)
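
A hypothetical sketch of row splitting: one logical "user" row is split across two tables that share the same primary key. Unlike normalization, nothing redundant or repeating is being removed; the split is purely physical, e.g. to keep rarely read, wide columns out of the frequently scanned table.

CREATE TABLE user_core (
  user_id INT PRIMARY KEY,
  email   VARCHAR(255) NOT NULL
);

-- Same entity, same key, different columns: this is row splitting.
CREATE TABLE user_extended (
  user_id    INT PRIMARY KEY REFERENCES user_core (user_id),
  bio        TEXT,
  avatar_url VARCHAR(512)
);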

I have also read in this post (Difference between scaling horizontally and vertically for databases ) that the difference between Horizontal Partitioning and Vertical Partitioning is that in the first you scale by adding more machines, while in the second one you scale by adding more power (CPU, RAM) to your existing machine, is that a correct definition? I thought that the core difference between those 2 techniques resides in the way you split your tables.

I am sorry for the load of questions, but I am a bit confused, as a lot of the different websites that I have come across say different things.

Any help clarifying would be greatly appreciated. Any link to a clear and simple demonstration with a few tables would also be very helpful.


Source: (StackOverflow)

Why is Oracle ignoring index with ORDER BY?

My intention is to obtain a paginated resultset of customers. I am using this algorithm, from Tom:

select * from (
  select /*+ FIRST_ROWS(20) */ FIRST_NAME, ROW_NUMBER() over (order by FIRST_NAME) RN
  from CUSTOMER C
)
where RN between 1 and 20
order by RN;

I also have an index defined on the column "CUSTOMER"."FIRST_NAME":

CREATE INDEX CUSTOMER_FIRST_NAME_TEST ON CUSTOMER (FIRST_NAME ASC);

The query returns the expected resultset, but from the explain plan I notice that the index is not used:

--------------------------------------------------------------------------------------
| Id  | Operation                 | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |          | 15467 |   679K|   157   (3)| 00:00:02 |
|   1 |  SORT ORDER BY            |          | 15467 |   679K|   157   (3)| 00:00:02 |
|*  2 |   VIEW                    |          | 15467 |   679K|   155   (2)| 00:00:02 |
|*  3 |    WINDOW SORT PUSHED RANK|          | 15467 |   151K|   155   (2)| 00:00:02 |
|   4 |     TABLE ACCESS FULL     | CUSTOMER | 15467 |   151K|   154   (1)| 00:00:02 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("RN">=1 AND "RN"<=20)
   3 - filter(ROW_NUMBER() OVER ( ORDER BY "FIRST_NAME")<=20)

I am using Oracle 11g. Since I just query for the first 20 rows, ordered by the indexed column, I would expect the index to be used.

Why is the Oracle optimizer ignoring the index? I assume it's something wrong with the pagination algorithm, but I can't figure out what.

Thanks.
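
One frequent cause, offered here as a guess since the post doesn't show the column definition: if FIRST_NAME is nullable, Oracle cannot answer the query from the index alone, because rows whose only indexed column is NULL are not stored in a single-column B-tree index. Declaring the column NOT NULL, or adding an explicit filter as in the sketch below, often lets the optimizer walk the index in order and stop after 20 rows.

-- Sketch: with NULLs excluded, an index range scan in key order can
-- satisfy the ORDER BY and stop early.
select * from (
  select /*+ FIRST_ROWS(20) */ FIRST_NAME,
         ROW_NUMBER() over (order by FIRST_NAME) RN
  from CUSTOMER C
  where FIRST_NAME is not null
)
where RN between 1 and 20
order by RN;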


Source: (StackOverflow)

What's TOO BIG for a database?

I have a buddy who runs a web app for people listing cars for sale. There are a few thousand clients who use it, and each client has hundreds and sometimes thousands of rows in the database (some have been on for 5 years with hundreds of cars selling each month, and tens of rows per sale (comments, messages, etc.)). He has run this system in one SQL Server database on one physical server with about 20 GB of RAM and a couple of processors the whole time, with no problems. Is this some sort of miracle?

Just like most programmers, I'm no DBA and just get by, thanks to ORMs, etc. Everywhere I look, people talk about the need to shard or get a separate database server for big users of a web app. Why is this? Is it really that inefficient to have a large DB with lots of rows? Should I plan to use Cassandra or something, or can I rely on scaling up well with Postgres?


Source: (StackOverflow)

From a performance perspective, how efficient is it to use a MySQL temporary table for a highly used website feature?

I'm attempting to write a search functionality for a website, and I've decided upon an approach of using MySQL temporary tables to handle the data input, via the query below:

CREATE TEMPORARY TABLE `patternmatch`
  (`pattern` VARCHAR(".strlen($queryLengthHere)."))

INSERT INTO `patternmatch` VALUES ".$someValues

Where $someValues is a set of data with the layout ('some', 'search', 'query') - or basically what the user searched. I then search my main table images based on the data within table patternmatch like so:

SELECT images.* FROM images JOIN patternmatch ON (images.name LIKE patternmatch.pattern)

I then apply a heuristic or scoring system based on how well each result matched the input and display the results by that heuristic etc.

What I'm wondering is how much overhead does creating a temporary table require? I understand that they only exist in session, and are dropped as soon as the session is ended, but if I have hundreds of thousands of searches per second, what sort of performance issues might I encounter? Is there any better way of implementing a search functionality?
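
One alternative worth benchmarking (a sketch, not a drop-in replacement for the scoring logic): a FULLTEXT index on images.name lets MySQL do relevance scoring server-side, with no temporary table created per request.

-- Assumes a FULLTEXT-capable engine on `images`.
ALTER TABLE images ADD FULLTEXT INDEX idx_images_name (name);

SELECT images.*,
       MATCH (name) AGAINST ('some search query') AS score
FROM images
WHERE MATCH (name) AGAINST ('some search query')
ORDER BY score DESC;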


Source: (StackOverflow)