
InnoDB interview questions

Top frequently asked InnoDB interview questions

MySQL foreign key constraints, cascade delete

I want to use foreign keys to maintain integrity and avoid orphans (I already use InnoDB).

How do I write a SQL statement that uses ON DELETE CASCADE?

If I delete a category, how do I make sure it does not also delete products that are related to other categories?

The pivot table categories_products creates a many-to-many relationship between the other two tables; a possible cascading setup is sketched after the schema below.

categories
- id (INT)
- name (VARCHAR 255)

products
- id
- name
- price

categories_products
- categories_id
- products_id
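
A minimal sketch of how the pivot table could declare cascading foreign keys, assuming InnoDB and the columns above (the products.id type is an assumption). Deleting a category then removes only its rows in categories_products; the products themselves are untouched:

-- Cascades apply only to the pivot rows, not to the products table.
CREATE TABLE categories_products (
    categories_id INT NOT NULL,
    products_id   INT NOT NULL,   -- assumed INT, matching products.id
    PRIMARY KEY (categories_id, products_id),
    FOREIGN KEY (categories_id) REFERENCES categories (id) ON DELETE CASCADE,
    FOREIGN KEY (products_id)   REFERENCES products (id)   ON DELETE CASCADE
) ENGINE=InnoDB;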

Source: (StackOverflow)

MySQL InnoDB not releasing disk space after deleting data rows from table

I have one MySQL table using the InnoDB storage engine; it contains about 2M data rows. When I deleted rows from the table, the allocated disk space was not released, and the size of the ibdata1 file did not shrink even after running OPTIMIZE TABLE.

Is there any way to reclaim disk space from MySQL?

I am in a bad situation; this application is running at about 50 different locations, and the problem of low disk space is now appearing at almost all of them.
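
For reference, the shared ibdata1 file never shrinks on its own. One commonly cited approach, sketched below under the assumption that innodb_file_per_table can be enabled and the table rebuilt, is to move each table into its own .ibd file, which can then be compacted:

-- Assumption: innodb_file_per_table is enabled (in my.cnf: innodb_file_per_table=1,
-- or SET GLOBAL innodb_file_per_table=ON on versions where the variable is dynamic).
-- Rebuilding the table then recreates its .ibd file and returns the freed space to the OS;
-- the table name below is a placeholder.
ALTER TABLE my_big_table ENGINE=InnoDB;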


Source: (StackOverflow)


Should I COUNT(*) or not?

I know it's generally a bad idea to do queries like this:

SELECT * FROM `group_relations`

But when I just want the count, should I go for this query, since it still yields the same result even if the table's columns change?

SELECT COUNT(*) FROM `group_relations`

Or the more specific:

SELECT COUNT(`group_id`) FROM `group_relations`

I have a feeling the latter could potentially be faster, but are there any other things to consider?

Update: I am using InnoDB in this case, sorry for not being more specific.
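
For reference, the two forms are not always equivalent: COUNT(*) counts every row, while COUNT(column) counts only rows where that column is not NULL, so they can differ on a nullable column:

SELECT COUNT(*) FROM `group_relations`;            -- counts every row
SELECT COUNT(`group_id`) FROM `group_relations`;   -- counts only rows where group_id IS NOT NULL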


Source: (StackOverflow)

MyISAM versus InnoDB

I'm working on a project that involves a lot of database writes, I'd say around 70% inserts and 30% reads. This ratio also includes updates, which I consider to be one read and one write. The reads can be dirty (i.e. I don't need 100% accurate information at the time of the read).
The task in question will be doing over 1 million database transactions an hour.

I've read a bunch of material on the web about the differences between MyISAM and InnoDB, and MyISAM seems like the obvious choice for the particular database/tables I'll be using for this task. From what I've been reading, InnoDB is good if transactions are needed, since it supports row-level locking.

Does anybody have any experience with this type of load (or higher)? Is MyISAM the way to go?


Source: (StackOverflow)

How to shrink/purge ibdata1 file in MySQL

I am using MySQL on localhost as a "query tool" for performing statistics in R; that is, every time I run an R script, I create a new database (A), create a new table (B), import the data into B, submit a query to get what I need, and then drop B and drop A.

It's working fine for me, but I've noticed that the ibdata file size is increasing rapidly; I store nothing in MySQL permanently, yet the ibdata1 file has already exceeded 100 MB.

I am using more or less the default MySQL settings for this setup; is there a way I can automatically shrink/purge the ibdata1 file after a fixed period of time?
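
A sketch of one way to keep this workflow from growing ibdata1 further, under the assumption that innodb_file_per_table can be turned on: with it enabled, each new table gets its own .ibd file, which is deleted when the script drops the table/database, so the shared file stops growing. Shrinking an already-bloated ibdata1 still requires dumping everything, stopping the server, deleting ibdata1 and the ib_logfile* files, and re-importing.

SHOW VARIABLES LIKE 'innodb_file_per_table';   -- check whether it is already ON
-- If it is OFF, set it in my.cnf and restart (applies to tables created afterwards):
--   [mysqld]
--   innodb_file_per_table = 1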


Source: (StackOverflow)

TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT maximum storage sizes

Per the docs, there are four TEXT types:

  1. TINYTEXT
  2. TEXT
  3. MEDIUMTEXT
  4. LONGTEXT

What is the maximum length that I can store in a column of each data type assuming the character encoding is UTF-8?
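
For reference, the documented limits are expressed in bytes rather than characters, so multi-byte UTF-8 characters reduce the number of characters that actually fit. A quick sketch (the table name is arbitrary):

CREATE TABLE text_limits_demo (
    a TINYTEXT,    -- up to 2^8  - 1 =           255 bytes
    b TEXT,        -- up to 2^16 - 1 =        65,535 bytes
    c MEDIUMTEXT,  -- up to 2^24 - 1 =    16,777,215 bytes
    d LONGTEXT     -- up to 2^32 - 1 = 4,294,967,295 bytes
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;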


Source: (StackOverflow)

Joining InnoDB tables with MyISAM tables

We have a set of tables that contain meta-level data such as organizations, organization users, organization departments, etc. All of these tables are going to be read-heavy, with very few write operations. The table sizes would also be quite small (the maximum number of records would be around 30K to 40K).

Another set of tables stores OLTP data such as bill transactions, user actions, etc., which are going to be both read- and write-heavy. These tables would be quite large (around 30 million records per table).

For the first set of tables we are planning to go with MyISAM, and for the second set with the InnoDB engine. Many of our features would also require JOINs across tables from these two sets.

Are there any performance issues in joining MyISAM tables with InnoDB tables? Also, are there any other possible issues (db backups, tuning etc) we might run into with this kind of design?

Any feedback would be greatly appreciated.


Source: (StackOverflow)

Force InnoDB to recheck foreign keys on a table/tables?

I have a set of InnoDB tables that I periodically need to maintain by removing some rows and inserting others. Several of the tables have foreign key constraints referencing other tables, so this means that the table loading order is important. To insert the new rows without worrying about the order of the tables, I use:

SET FOREIGN_KEY_CHECKS=0;

before, and then:

SET FOREIGN_KEY_CHECKS=1;

after.

When the loading is complete, I'd like to check that the data in the updated tables still holds referential integrity (that the new rows don't break any foreign key constraints), but it seems that there's no way to do this.

As a test, I entered data that I was sure violated foreign key constraints, and upon re-enabling the foreign key checks, MySQL produced no warnings or errors.

Specifying a table loading order and leaving the foreign key checks on during the loading process would not work either, because it would prevent loading data into a table that has a self-referencing foreign key constraint, so that is not an acceptable solution.

Is there any way to force InnoDB to verify a table's or a database's foreign key constraints?
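
Since re-enabling FOREIGN_KEY_CHECKS does not re-validate existing rows (as observed above), one workaround is an orphan-detection query per constraint; a generic sketch, where parent, child, and their columns are placeholders for one foreign key pair:

SELECT child.*
FROM child
LEFT JOIN parent ON parent.id = child.parent_id
WHERE child.parent_id IS NOT NULL
  AND parent.id IS NULL;          -- any rows returned violate the constraint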


Source: (StackOverflow)

InnoDB takes over an hour to import 600MB file, MyISAM in a few minutes

I'm currently working on creating an environment to test the performance of an app; I'm testing with MyISAM and InnoDB to find out which engine can serve us best. Within this environment, we'll automatically prepare the database (load existing dumps) and instrument our test tools.

I'm preparing to test the same data dump with MyISAM and InnoDB, but I'm already failing to bring the initial import to a usable speed for the InnoDB part. The initial dump took longer, but that didn't concern me yet:

$ for i in testdb_myisam testdb_innodb; do time mysqldump --extended-insert $i > $i.sql; done

real    0m38.152s
user    0m8.381s
sys     0m2.612s

real    1m16.665s
user    0m6.600s
sys     0m2.552s

However, the import times were quite different:

$ for i in  testdb_myisam testdb_innodb; do time mysql $i < $i.sql; done

real    2m52.821s
user    0m10.505s
sys     0m1.252s

real    87m36.586s
user    0m10.637s
sys     0m1.208s

After some research I came across http://stackoverflow.com/questions/457060/changing-tables-from-myisam-to-innodb-make-the-system-slow and then used SET GLOBAL innodb_flush_log_at_trx_commit=2:

$ time mysql testdb_innodb < testdb_innodb.sql

real    64m8.348s
user    0m10.533s
sys     0m1.152s

IMHO this is still shockingly slow. I've also disabled log_bin for these tests, and here's a list of all MySQL variables.

Do I have to accept these long InnoDB import times, or can they be improved? I have full control over this MySQL server, as it's purely for this test environment.

I can apply special configurations for the initial import only and change them back for the application tests, so that they better match production environments.

Update:

Given the feedback, I've disabled autocommit and the various checks:

$ time ( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;" \
; cat testdb_innodb.sql ; echo "COMMIT;" ) | mysql testdb_innodb;date

real    47m59.019s
user    0m10.665s
sys     0m2.896s

The speed improved, but not that much. Is my test flawed?

Update 2:

I was able to gain access to a different machine where imports took only about 8 minutes. I compared the configurations and applied the following settings to my MySQL installation:

innodb_additional_mem_pool_size = 20971520
innodb_buffer_pool_size = 536870912
innodb_file_per_table
innodb_log_buffer_size = 8388608
join_buffer_size = 67104768
max_allowed_packet = 5241856
max_binlog_size = 1073741824
max_heap_table_size = 41943040
query_cache_limit = 10485760
query_cache_size = 157286400
read_buffer_size = 20967424
sort_buffer_size = 67108856
table_cache = 256
thread_cache_size = 128
thread_stack = 327680
tmp_table_size = 41943040

With these settings I'm now down to about 25 minutes. Still far away from the few minutes MyISAM takes, but it's getting more usable for me.
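
One setting often cited for bulk-import speed that does not appear in the list above is the redo log size; a hedged sketch for the import run only (the value is just an example, and older MySQL versions require a clean shutdown before changing it):

-- In my.cnf:
--   innodb_log_file_size = 256M   -- larger redo logs mean fewer checkpoint stalls during bulk loads
-- combined with the session-level settings already used above, e.g.:
SET GLOBAL innodb_flush_log_at_trx_commit = 2;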


Source: (StackOverflow)

MySQL InnoDB foreign key between different databases

I would like to know whether it's possible, with InnoDB in MySQL, to have a table with a foreign key that references a table in a different database.

And if so, how can this be done?
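
Cross-database references are generally reported to work simply by qualifying the referenced table with its database name; a minimal sketch, assuming two databases db_a and db_b (placeholder names) on the same server, with db_b.parent already existing and its id column indexed:

CREATE TABLE db_a.child (
    id        INT PRIMARY KEY,
    parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES db_b.parent (id)
) ENGINE=InnoDB;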


Source: (StackOverflow)

MySQL: MyISAM vs. InnoDB

What are the differences between the MyISAM and InnoDB storage engines in MySQL?


Source: (StackOverflow)

ERROR 1114 (HY000): The table is full

I'm trying to add a row to an InnoDB table with a simple query:

INSERT INTO zip_codes (zip_code, city) VALUES ('90210', 'Beverly Hills');

But when I attempt this query, I get the following:

ERROR 1114 (HY000): The table `zip_codes` is full

Doing a "SELECT COUNT(*) FROM zip_codes" gives me 188,959 rows, which doesn't seem like too many considering I have another table with 810,635 rows in that same database.

I am fairly inexperienced with the InnoDB engine and never experienced this issue with MyISAM. What are some of the potential problems here?

EDIT: This only occurs when adding a row to the zip_codes table.
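
A common cause of ERROR 1114 on InnoDB is a fixed-size system tablespace (or a full disk/partition). One way to check the tablespace configuration, sketched below (the values in the comment are only examples):

SHOW VARIABLES LIKE 'innodb_data_file_path';
-- A value like 'ibdata1:10M:autoextend' can grow as needed; something like 'ibdata1:512M'
-- cannot, and inserts fail with "table is full" once that fixed size is reached.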


Source: (StackOverflow)

Is there any performance gain in indexing a boolean field?

I'm just about to write a query that includes a WHERE isok=1. As the name implies, isok is a boolean field (actually a TINYINT(1) UNSIGNED that is set to 0 or 1 as needed).

Is there any performance gain in indexing this field? Would the engine (InnoDB in this case) perform better or worse looking up the index?
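
One way to find out for a specific table and data distribution is simply to add the index and compare execution plans; a sketch with placeholder names:

ALTER TABLE my_table ADD INDEX idx_isok (isok);
EXPLAIN SELECT * FROM my_table WHERE isok = 1;   -- check whether the optimizer actually uses idx_isok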


Source: (StackOverflow)

How do I know if a MySQL table is using the MyISAM or InnoDB engine?

In MySQL, there is no way to specify a storage engine for a certain database, only for single tables. However, you can specify a storage engine to be used during one session with:

SET storage_engine=InnoDB;

So you don't have to specify it for each table.

How do I confirm that all the tables are indeed using InnoDB?
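
Two standard ways to check, sketched with a placeholder database name:

SHOW TABLE STATUS FROM mydb;   -- the Engine column lists each table's storage engine
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb';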


Source: (StackOverflow)

#1025 - Error on rename of './database/#sql-2e0f_1254ba7' to './database/table' (errno: 150)

So I am trying to add a column to the primary key of one of the tables in my database. Right now the primary key looks like this:

PRIMARY KEY (user_id, round_number)

Where user_id is a foreign key.

I am trying to change it to this:

PRIMARY KEY (user_id, round_number, created_at)

I am doing this in phpMyAdmin by clicking the primary key icon in the table structure view.

This is the error I get:

#1025 - Error on rename of './database/#sql-2e0f_1254ba7' to './database/table' (errno: 150)

It is a MySQL database using the InnoDB storage engine.
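
Errno 150 indicates a foreign-key-related failure during the table rebuild; the specific reason is usually visible in the InnoDB status output:

SHOW ENGINE INNODB STATUS;   -- see the "LATEST FOREIGN KEY ERROR" section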


Source: (StackOverflow)