Chapter 5. MySQL Server Administration

Table of Contents

5.1. The MySQL Server
5.1.1. Server Option and Variable Reference
5.1.2. Server Configuration Defaults
5.1.3. Server Command Options
5.1.4. Server System Variables
5.1.5. Using System Variables
5.1.6. Server Status Variables
5.1.7. Server SQL Modes
5.1.8. Server Plugins
5.1.9. IPv6 Support
5.1.10. Server-Side Help
5.1.11. Server Response to Signals
5.1.12. The Shutdown Process
5.2. MySQL Server Logs
5.2.1. Selecting General Query and Slow Query Log Output Destinations
5.2.2. The Error Log
5.2.3. The General Query Log
5.2.4. The Binary Log
5.2.5. The Slow Query Log
5.2.6. Server Log Maintenance
5.3. Managing Disk I/O and File Space for InnoDB Tables
5.3.1. InnoDB Disk I/O
5.3.2. File Space Management
5.3.3. InnoDB Checkpoints
5.3.4. Defragmenting a Table
5.4. Creating and Using InnoDB Tables and Indexes
5.4.1. Managing InnoDB Tablespaces
5.4.2. Grouping DML Operations with Transactions
5.4.3. Converting Tables from MyISAM to InnoDB
5.4.4. AUTO_INCREMENT Handling in InnoDB
5.4.5. InnoDB and FOREIGN KEY Constraints
5.4.6. Working with InnoDB Compressed Tables
5.4.7. InnoDB File-Format Management
5.4.8. How InnoDB Stores Variable-Length Columns
5.5. Online DDL for InnoDB Tables
5.5.1. Overview of Online DDL
5.5.2. Performance and Concurrency Considerations for Online DDL
5.5.3. SQL Syntax for Online DDL
5.5.4. Combining or Separating DDL Statements
5.5.5. Examples of Online DDL
5.5.6. Implementation Details of Online DDL
5.5.7. How Crash Recovery Works with Online DDL
5.5.8. Online DDL for Partitioned InnoDB Tables
5.5.9. Limitations of Online DDL
5.6. Running Multiple MySQL Instances on One Machine
5.6.1. Setting Up Multiple Data Directories
5.6.2. Running Multiple MySQL Instances on Windows
5.6.3. Running Multiple MySQL Instances on Unix
5.6.4. Using Client Programs in a Multiple-Server Environment
5.7. Tracing mysqld Using DTrace
5.7.1. mysqld DTrace Probe Reference

MySQL Server (mysqld) is the main program that does most of the work in a MySQL installation. This chapter provides an overview of MySQL Server and covers general server administration topics.

For additional information on administrative topics, see also the related chapters elsewhere in this manual.

5.1. The MySQL Server

mysqld is the MySQL server. The following discussion covers these MySQL server configuration topics:

  • Startup options that the server supports. You can specify these options on the command line, through configuration files, or both.

  • Server system variables. These variables reflect the current state and values of the startup options, some of which can be modified while the server is running (see the example following this list).

  • Server status variables. These variables contain counters and statistics about runtime operation.

  • How to set the server SQL mode. This setting modifies certain aspects of SQL syntax and semantics, for example for compatibility with code from other database systems, or to control the error handling for particular situations.

  • The server shutdown process. There are performance and reliability considerations depending on the type of table (transactional or nontransactional) and whether you use replication.
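
As a quick illustration of the first three topics, the following statements (standard MySQL syntax; the specific variable names are only examples) inspect and change server settings from a client session:

  SHOW GLOBAL VARIABLES LIKE 'max_connections';    -- current value of a system variable
  SET GLOBAL max_connections = 200;                -- change a dynamic system variable (requires the SUPER privilege)
  SHOW GLOBAL STATUS LIKE 'Threads_connected';     -- read a status variable (runtime counter)
  SET SESSION sql_mode = 'STRICT_TRANS_TABLES';    -- change the SQL mode for the current session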

Note

Not all storage engines are supported by all MySQL server binaries and configurations. To find out how to determine which storage engines your MySQL server installation supports, see Section 13.7.5.15, “SHOW ENGINES Syntax”.

5.1.1. Server Option and Variable Reference

The following table lists all of the command-line options, system variables, and status variables applicable to mysqld.

The table lists command-line options (Cmd-line), options valid in configuration files (Option file), server system variables (System Var), and status variables (Status var) in one unified list, indicating where each option or variable is valid. If a server option set on the command line or in an option file differs in name from the corresponding system or status variable, the variable name is noted immediately below the corresponding option. For system and status variables, the scope of the variable (Var Scope) is shown as Global, Session, or Both. Please see the corresponding sections for details on setting and using the options and variables. Where appropriate, a direct link to further information about the item is provided.

Table 5.1. Option/Variable Summary

Name | Cmd-Line | Option file | System Var | Status Var | Var Scope | Dynamic
abort-slave-event-countYesYes    
Aborted_clients   YesGlobalNo
Aborted_connects   YesGlobalNo
allow-suspicious-udfsYesYes    
ansiYesYes    
auto_increment_increment  Yes BothYes
auto_increment_offset  Yes BothYes
autocommitYesYesYes BothYes
automatic_sp_privileges  Yes GlobalYes
back_log  Yes GlobalNo
basedirYesYesYes GlobalNo
bind-addressYesYes  GlobalNo
- Variable: bind_address  Yes GlobalNo
Binlog_cache_disk_use   YesGlobalNo
binlog_cache_sizeYesYesYes GlobalYes
Binlog_cache_use   YesGlobalNo
binlog-checksumYesYes    
binlog_checksum  Yes GlobalYes
binlog_direct_non_transactional_updatesYesYesYes BothYes
binlog-do-dbYesYes    
binlog-formatYesYes  BothYes
- Variable: binlog_format  Yes BothYes
binlog-ignore-dbYesYes    
binlog_max_flush_queue_time  Yes GlobalYes
binlog_order_commits  Yes GlobalYes
binlog-row-event-max-sizeYesYes    
binlog_row_imageYesYesYes BothYes
binlog_rows_query_log_events  Yes BothYes
binlog-rows-query-log-eventsYesYes    
- Variable: binlog_rows_query_log_events      
Binlog_stmt_cache_disk_use   YesGlobalNo
binlog_stmt_cache_sizeYesYesYes GlobalYes
Binlog_stmt_cache_use   YesGlobalNo
bootstrapYesYes    
bulk_insert_buffer_sizeYesYesYes BothYes
Bytes_received   YesBothNo
Bytes_sent   YesBothNo
character_set_client  Yes BothYes
character-set-client-handshakeYesYes    
character_set_connection  Yes BothYes
character_set_database[a]  Yes BothYes
character-set-filesystemYesYes  BothYes
- Variable: character_set_filesystem  Yes BothYes
character_set_results  Yes BothYes
character-set-serverYesYes  BothYes
- Variable: character_set_server  Yes BothYes
character_set_system  Yes GlobalNo
character-sets-dirYesYes  GlobalNo
- Variable: character_sets_dir  Yes GlobalNo
chrootYesYes    
collation_connection  Yes BothYes
collation_database[b]  Yes BothYes
collation-serverYesYes  BothYes
- Variable: collation_server  Yes BothYes
Com_admin_commands   YesBothNo
Com_alter_db   YesBothNo
Com_alter_db_upgrade   YesBothNo
Com_alter_event   YesBothNo
Com_alter_function   YesBothNo
Com_alter_procedure   YesBothNo
Com_alter_server   YesBothNo
Com_alter_table   YesBothNo
Com_alter_tablespace   YesBothNo
Com_alter_user   YesBothNo
Com_analyze   YesBothNo
Com_assign_to_keycache   YesBothNo
Com_begin   YesBothNo
Com_binlog   YesBothNo
Com_call_procedure   YesBothNo
Com_change_db   YesBothNo
Com_change_master   YesBothNo
Com_check   YesBothNo
Com_checksum   YesBothNo
Com_commit   YesBothNo
Com_create_db   YesBothNo
Com_create_event   YesBothNo
Com_create_function   YesBothNo
Com_create_index   YesBothNo
Com_create_procedure   YesBothNo
Com_create_server   YesBothNo
Com_create_table   YesBothNo
Com_create_trigger   YesBothNo
Com_create_udf   YesBothNo
Com_create_user   YesBothNo
Com_create_view   YesBothNo
Com_dealloc_sql   YesBothNo
Com_delete   YesBothNo
Com_delete_multi   YesBothNo
Com_do   YesBothNo
Com_drop_db   YesBothNo
Com_drop_event   YesBothNo
Com_drop_function   YesBothNo
Com_drop_index   YesBothNo
Com_drop_procedure   YesBothNo
Com_drop_server   YesBothNo
Com_drop_table   YesBothNo
Com_drop_trigger   YesBothNo
Com_drop_user   YesBothNo
Com_drop_view   YesBothNo
Com_empty_query   YesBothNo
Com_execute_sql   YesBothNo
Com_flush   YesBothNo
Com_get_diagnostics   YesBothNo
Com_grant   YesBothNo
Com_ha_close   YesBothNo
Com_ha_open   YesBothNo
Com_ha_read   YesBothNo
Com_help   YesBothNo
Com_insert   YesBothNo
Com_insert_select   YesBothNo
Com_install_plugin   YesBothNo
Com_kill   YesBothNo
Com_load   YesBothNo
Com_lock_tables   YesBothNo
Com_optimize   YesBothNo
Com_preload_keys   YesBothNo
Com_prepare_sql   YesBothNo
Com_purge   YesBothNo
Com_purge_before_date   YesBothNo
Com_release_savepoint   YesBothNo
Com_rename_table   YesBothNo
Com_rename_user   YesBothNo
Com_repair   YesBothNo
Com_replace   YesBothNo
Com_replace_select   YesBothNo
Com_reset   YesBothNo
Com_resignal   YesBothNo
Com_revoke   YesBothNo
Com_revoke_all   YesBothNo
Com_rollback   YesBothNo
Com_rollback_to_savepoint   YesBothNo
Com_savepoint   YesBothNo
Com_select   YesBothNo
Com_set_option   YesBothNo
Com_show_authors   YesBothNo
Com_show_binlog_events   YesBothNo
Com_show_binlogs   YesBothNo
Com_show_charsets   YesBothNo
Com_show_collations   YesBothNo
Com_show_contributors   YesBothNo
Com_show_create_db   YesBothNo
Com_show_create_event   YesBothNo
Com_show_create_func   YesBothNo
Com_show_create_proc   YesBothNo
Com_show_create_table   YesBothNo
Com_show_create_trigger   YesBothNo
Com_show_databases   YesBothNo
Com_show_engine_logs   YesBothNo
Com_show_engine_mutex   YesBothNo
Com_show_engine_status   YesBothNo
Com_show_errors   YesBothNo
Com_show_events   YesBothNo
Com_show_fields   YesBothNo
Com_show_function_code   YesBothNo
Com_show_function_status   YesBothNo
Com_show_grants   YesBothNo
Com_show_keys   YesBothNo
Com_show_master_status   YesBothNo
Com_show_new_master   YesBothNo
Com_show_open_tables   YesBothNo
Com_show_plugins   YesBothNo
Com_show_privileges   YesBothNo
Com_show_procedure_code   YesBothNo
Com_show_procedure_status   YesBothNo
Com_show_processlist   YesBothNo
Com_show_profile   YesBothNo
Com_show_profiles   YesBothNo
Com_show_relaylog_events   YesBothNo
Com_show_slave_hosts   YesBothNo
Com_show_slave_status   YesBothNo
Com_show_status   YesBothNo
Com_show_storage_engines   YesBothNo
Com_show_table_status   YesBothNo
Com_show_tables   YesBothNo
Com_show_triggers   YesBothNo
Com_show_variables   YesBothNo
Com_show_warnings   YesBothNo
Com_signal   YesBothNo
Com_slave_start   YesBothNo
Com_slave_stop   YesBothNo
Com_stmt_close   YesBothNo
Com_stmt_execute   YesBothNo
Com_stmt_fetch   YesBothNo
Com_stmt_prepare   YesBothNo
Com_stmt_reprepare   YesBothNo
Com_stmt_reset   YesBothNo
Com_stmt_send_long_data   YesBothNo
Com_truncate   YesBothNo
Com_uninstall_plugin   YesBothNo
Com_unlock_tables   YesBothNo
Com_update   YesBothNo
Com_update_multi   YesBothNo
Com_xa_commit   YesBothNo
Com_xa_end   YesBothNo
Com_xa_prepare   YesBothNo
Com_xa_recover   YesBothNo
Com_xa_rollback   YesBothNo
Com_xa_start   YesBothNo
completion_typeYesYesYes BothYes
Compression   YesSessionNo
concurrent_insertYesYesYes GlobalYes
connect_timeoutYesYesYes GlobalYes
Connection_errors_accept   YesGlobalNo
Connection_errors_internal   YesGlobalNo
Connection_errors_max_connections   YesGlobalNo
Connection_errors_peer_addr   YesGlobalNo
Connection_errors_select   YesGlobalNo
Connection_errors_tcpwrap   YesGlobalNo
Connections   YesGlobalNo
consoleYesYes    
core_file  Yes GlobalNo
core-fileYesYes    
Created_tmp_disk_tables   YesBothNo
Created_tmp_files   YesGlobalNo
Created_tmp_tables   YesBothNo
daemon_memcached_enable_binlogYesYesYes GlobalNo
daemon_memcached_engine_lib_nameYesYesYes GlobalNo
daemon_memcached_engine_lib_pathYesYesYes GlobalNo
daemon_memcached_optionYesYesYes GlobalNo
daemon_memcached_r_batch_sizeYesYesYes GlobalNo
daemon_memcached_w_batch_sizeYesYesYes GlobalNo
datadirYesYesYes GlobalNo
date_format  Yes GlobalNo
datetime_format  Yes GlobalNo
debugYesYesYes BothYes
debug_sync  Yes SessionYes
debug-sync-timeoutYesYes    
default-authentication-pluginYesYes    
default-storage-engineYesYes  BothYes
- Variable: default_storage_engine  Yes BothYes
default-time-zoneYesYes    
default_tmp_storage_engineYesYesYes BothYes
default_week_formatYesYesYes BothYes
defaults-extra-fileYes     
defaults-fileYes     
defaults-group-suffixYes     
delay-key-writeYesYes  GlobalYes
- Variable: delay_key_write  Yes GlobalYes
Delayed_errors   YesGlobalNo
delayed_insert_limitYesYesYes GlobalYes
Delayed_insert_threads   YesGlobalNo
delayed_insert_timeoutYesYesYes GlobalYes
delayed_queue_sizeYesYesYes GlobalYes
Delayed_writes   YesGlobalNo
des-key-fileYesYes    
disconnect_on_expired_passwordYesYesYes SessionNo
disconnect-slave-event-countYesYes    
div_precision_incrementYesYesYes BothYes
enable-named-pipeYesYes    
- Variable: named_pipe      
end_markers_in_json  Yes BothYes
enforce_gtid_consistencyYesYesYes GlobalNo
enforce-gtid-consistencyYesYesYes GlobalNo
eq_range_index_dive_limit  Yes BothYes
error_count  Yes SessionNo
event-schedulerYesYes  GlobalYes
- Variable: event_scheduler  Yes GlobalYes
exit-infoYesYes    
expire_logs_daysYesYesYes GlobalYes
explicit_defaults_for_timestampYesYesYes SessionNo
external-lockingYesYes    
- Variable: skip_external_locking      
external_user  Yes SessionNo
federatedYesYes    
flushYesYesYes GlobalYes
Flush_commands   YesGlobalNo
flush_timeYesYesYes GlobalYes
foreign_key_checks  Yes BothYes
ft_boolean_syntaxYesYesYes GlobalYes
ft_max_word_lenYesYesYes GlobalNo
ft_min_word_lenYesYesYes GlobalNo
ft_query_expansion_limitYesYesYes GlobalNo
ft_stopword_fileYesYesYes GlobalNo
gdbYesYes    
general-logYesYes  GlobalYes
- Variable: general_log  Yes GlobalYes
general_log_fileYesYesYes GlobalYes
group_concat_max_lenYesYesYes BothYes
gtid_executed  Yes BothNo
gtid_mode  Yes GlobalNo
gtid-modeYesYes  GlobalNo
- Variable: gtid_mode  Yes GlobalNo
gtid_next  Yes SessionYes
gtid_owned  Yes BothNo
gtid_purged  Yes GlobalYes
Handler_commit   YesBothNo
Handler_delete   YesBothNo
Handler_discover   YesBothNo
Handler_external_lock   YesBothNo
Handler_mrr_init   YesBothNo
Handler_prepare   YesBothNo
Handler_read_first   YesBothNo
Handler_read_key   YesBothNo
Handler_read_last   YesBothNo
Handler_read_next   YesBothNo
Handler_read_prev   YesBothNo
Handler_read_rnd   YesBothNo
Handler_read_rnd_next   YesBothNo
Handler_rollback   YesBothNo
Handler_savepoint   YesBothNo
Handler_savepoint_rollback   YesBothNo
Handler_update   YesBothNo
Handler_write   YesBothNo
have_compress  Yes GlobalNo
have_crypt  Yes GlobalNo
have_dynamic_loading  Yes GlobalNo
have_geometry  Yes GlobalNo
have_openssl  Yes GlobalNo
have_profiling  Yes GlobalNo
have_query_cache  Yes GlobalNo
have_rtree_keys  Yes GlobalNo
have_ssl  Yes GlobalNo
have_symlink  Yes GlobalNo
helpYesYes    
host_cache_size  Yes GlobalYes
hostname  Yes GlobalNo
identity  Yes SessionYes
ignore-builtin-innodbYesYes  GlobalNo
- Variable: ignore_builtin_innodb  Yes GlobalNo
ignore-db-dirYesYes    
ignore_db_dirs  Yes GlobalNo
init_connectYesYesYes GlobalYes
init-fileYesYes  GlobalNo
- Variable: init_file  Yes GlobalNo
init_slaveYesYesYes GlobalYes
innodbYesYes    
innodb_adaptive_flushingYesYesYes GlobalYes
innodb_adaptive_flushing_lwmYesYesYes GlobalYes
innodb_adaptive_hash_indexYesYesYes GlobalYes
innodb_adaptive_max_sleep_delayYesYesYes GlobalYes
innodb_additional_mem_pool_sizeYesYesYes GlobalNo
innodb_api_bk_commit_intervalYesYesYes GlobalYes
innodb_api_disable_rowlockYesYesYes GlobalNo
innodb_api_enable_binlogYesYesYes GlobalNo
innodb_api_enable_mdlYesYesYes GlobalNo
innodb_api_trx_levelYesYesYes GlobalYes
innodb_autoextend_incrementYesYesYes GlobalYes
innodb_autoinc_lock_modeYesYesYes GlobalNo
Innodb_available_undo_logs   YesGlobalNo
Innodb_buffer_pool_bytes_data   YesGlobalNo
Innodb_buffer_pool_bytes_dirty   YesGlobalNo
innodb_buffer_pool_dump_at_shutdownYesYesYes GlobalYes
innodb_buffer_pool_dump_nowYesYesYes GlobalYes
innodb_buffer_pool_dump_pctYesYesYes GlobalYes
Innodb_buffer_pool_dump_status   YesGlobalNo
innodb_buffer_pool_filenameYesYesYes GlobalYes
innodb_buffer_pool_instancesYesYesYes GlobalNo
innodb_buffer_pool_load_abortYesYesYes GlobalYes
innodb_buffer_pool_load_at_startupYesYesYes GlobalNo
innodb_buffer_pool_load_nowYesYesYes GlobalYes
Innodb_buffer_pool_load_status   YesGlobalNo
Innodb_buffer_pool_pages_data   YesGlobalNo
Innodb_buffer_pool_pages_dirty   YesGlobalNo
Innodb_buffer_pool_pages_flushed   YesGlobalNo
Innodb_buffer_pool_pages_free   YesGlobalNo
Innodb_buffer_pool_pages_latched   YesGlobalNo
Innodb_buffer_pool_pages_misc   YesGlobalNo
Innodb_buffer_pool_pages_total   YesGlobalNo
Innodb_buffer_pool_read_ahead   YesGlobalNo
Innodb_buffer_pool_read_ahead_evicted   YesGlobalNo
Innodb_buffer_pool_read_requests   YesGlobalNo
Innodb_buffer_pool_reads   YesGlobalNo
innodb_buffer_pool_sizeYesYesYes GlobalNo
Innodb_buffer_pool_wait_free   YesGlobalNo
Innodb_buffer_pool_write_requests   YesGlobalNo
innodb_change_buffer_max_sizeYesYesYes GlobalYes
innodb_change_bufferingYesYesYes GlobalYes
innodb_checksum_algorithmYesYesYes GlobalYes
innodb_checksumsYesYesYes GlobalNo
innodb_cmp_per_index_enabledYesYesYes GlobalYes
innodb_commit_concurrencyYesYesYes GlobalYes
innodb_compression_failure_threshold_pctYesYesYes GlobalYes
innodb_compression_levelYesYesYes GlobalYes
innodb_compression_pad_pct_maxYesYesYes GlobalYes
innodb_concurrency_ticketsYesYesYes GlobalYes
innodb_data_file_pathYesYesYes GlobalNo
Innodb_data_fsyncs   YesGlobalNo
innodb_data_home_dirYesYesYes GlobalNo
Innodb_data_pending_fsyncs   YesGlobalNo
Innodb_data_pending_reads   YesGlobalNo
Innodb_data_pending_writes   YesGlobalNo
Innodb_data_read   YesGlobalNo
Innodb_data_reads   YesGlobalNo
Innodb_data_writes   YesGlobalNo
Innodb_data_written   YesGlobalNo
Innodb_dblwr_pages_written   YesGlobalNo
Innodb_dblwr_writes   YesGlobalNo
innodb_disable_sort_file_cacheYesYesYes GlobalYes
innodb_doublewriteYesYesYes GlobalNo
innodb_fast_shutdownYesYesYes GlobalYes
innodb_file_formatYesYesYes GlobalYes
innodb_file_format_checkYesYesYes GlobalNo
innodb_file_format_maxYesYesYes GlobalYes
innodb_file_per_tableYesYesYes GlobalYes
innodb_flush_log_at_timeout  Yes GlobalYes
innodb_flush_log_at_trx_commitYesYesYes GlobalYes
innodb_flush_methodYesYesYes GlobalNo
innodb_flush_neighborsYesYesYes GlobalYes
innodb_flushing_avg_loopsYesYesYes GlobalYes
innodb_force_load_corruptedYesYesYes GlobalNo
innodb_force_recoveryYesYesYes GlobalNo
innodb_ft_aux_tableYesYesYes GlobalYes
innodb_ft_cache_sizeYesYesYes GlobalNo
innodb_ft_enable_diag_printYesYesYes GlobalYes
innodb_ft_enable_stopwordYesYesYes GlobalYes
innodb_ft_max_token_sizeYesYesYes GlobalNo
innodb_ft_min_token_sizeYesYesYes GlobalNo
innodb_ft_num_word_optimizeYesYesYes GlobalYes
innodb_ft_server_stopword_tableYesYesYes GlobalYes
innodb_ft_sort_pll_degreeYesYesYes GlobalNo
innodb_ft_user_stopword_tableYesYesYes BothYes
Innodb_have_atomic_builtins   YesGlobalNo
innodb_io_capacityYesYesYes GlobalYes
innodb_io_capacity_maxYesYesYes GlobalYes
innodb_large_prefixYesYesYes GlobalYes
innodb_lock_wait_timeoutYesYesYes BothYes
innodb_locks_unsafe_for_binlogYesYesYes GlobalNo
innodb_log_buffer_sizeYesYesYes GlobalNo
innodb_log_compressed_pagesYesYesYes GlobalYes
innodb_log_file_sizeYesYesYes GlobalNo
innodb_log_files_in_groupYesYesYes GlobalNo
innodb_log_group_home_dirYesYesYes GlobalNo
Innodb_log_waits   YesGlobalNo
Innodb_log_write_requests   YesGlobalNo
Innodb_log_writes   YesGlobalNo
innodb_lru_scan_depthYesYesYes GlobalYes
innodb_max_dirty_pages_pctYesYesYes GlobalYes
innodb_max_dirty_pages_pct_lwmYesYesYes GlobalYes
innodb_max_purge_lagYesYesYes GlobalYes
innodb_max_purge_lag_delayYesYesYes GlobalYes
innodb_monitor_disableYesYesYes GlobalYes
innodb_monitor_enableYesYesYes GlobalYes
innodb_monitor_resetYesYesYes GlobalYes
innodb_monitor_reset_allYesYesYes GlobalYes
Innodb_num_open_files   YesGlobalNo
innodb_old_blocks_pctYesYesYes GlobalYes
innodb_old_blocks_timeYesYesYes GlobalYes
innodb_online_alter_log_max_sizeYesYesYes GlobalYes
innodb_open_filesYesYesYes GlobalNo
innodb_optimize_fulltext_onlyYesYesYes GlobalYes
Innodb_os_log_fsyncs   YesGlobalNo
Innodb_os_log_pending_fsyncs   YesGlobalNo
Innodb_os_log_pending_writes   YesGlobalNo
Innodb_os_log_written   YesGlobalNo
innodb_page_sizeYesYesYes GlobalNo
Innodb_page_size   YesGlobalNo
Innodb_pages_created   YesGlobalNo
Innodb_pages_read   YesGlobalNo
Innodb_pages_written   YesGlobalNo
innodb_print_all_deadlocksYesYesYes GlobalYes
innodb_purge_batch_sizeYesYesYes GlobalYes
innodb_purge_threadsYesYesYes GlobalNo
innodb_random_read_aheadYesYesYes GlobalYes
innodb_read_ahead_thresholdYesYesYes GlobalYes
innodb_read_io_threadsYesYesYes GlobalNo
innodb_read_onlyYesYesYes GlobalNo
innodb_replication_delayYesYesYes GlobalYes
innodb_rollback_on_timeoutYesYesYes GlobalNo
innodb_rollback_segmentsYesYesYes GlobalYes
Innodb_row_lock_current_waits   YesGlobalNo
Innodb_row_lock_time   YesGlobalNo
Innodb_row_lock_time_avg   YesGlobalNo
Innodb_row_lock_time_max   YesGlobalNo
Innodb_row_lock_waits   YesGlobalNo
Innodb_rows_deleted   YesGlobalNo
Innodb_rows_inserted   YesGlobalNo
Innodb_rows_read   YesGlobalNo
Innodb_rows_updated   YesGlobalNo
innodb_sort_buffer_sizeYesYesYes GlobalNo
innodb_spin_wait_delayYesYesYes GlobalYes
innodb_stats_auto_recalcYesYesYes GlobalYes
innodb_stats_methodYesYesYes GlobalYes
innodb_stats_on_metadataYesYesYes GlobalYes
innodb_stats_persistentYesYesYes GlobalYes
innodb_stats_persistent_sample_pagesYesYesYes GlobalYes
innodb_stats_sample_pagesYesYesYes GlobalYes
innodb_stats_transient_sample_pagesYesYesYes GlobalYes
innodb-status-fileYesYes    
innodb_strict_modeYesYesYes BothYes
innodb_support_xaYesYesYes BothYes
innodb_sync_array_sizeYesYesYes GlobalNo
innodb_sync_spin_loopsYesYesYes GlobalYes
innodb_table_locksYesYesYes BothYes
innodb_temp_data_file_pathYesYesYes GlobalNo
innodb_thread_concurrencyYesYesYes GlobalYes
innodb_thread_sleep_delayYesYesYes GlobalYes
Innodb_truncated_status_writes   YesGlobalNo
innodb_undo_directoryYesYesYes GlobalNo
innodb_undo_logsYesYesYes GlobalYes
innodb_undo_tablespacesYesYesYes GlobalNo
innodb_use_native_aioYesYesYes GlobalNo
innodb_use_sys_mallocYesYesYes GlobalNo
innodb_version  Yes GlobalNo
innodb_write_io_threadsYesYesYes GlobalNo
insert_id  Yes SessionYes
installYes     
install-manualYes     
interactive_timeoutYesYesYes BothYes
join_buffer_sizeYesYesYes BothYes
keep_files_on_createYesYesYes BothYes
Key_blocks_not_flushed   YesGlobalNo
Key_blocks_unused   YesGlobalNo
Key_blocks_used   YesGlobalNo
key_buffer_sizeYesYesYes GlobalYes
key_cache_age_thresholdYesYesYes GlobalYes
key_cache_block_sizeYesYesYes GlobalYes
key_cache_division_limitYesYesYes GlobalYes
Key_read_requests   YesGlobalNo
Key_reads   YesGlobalNo
Key_write_requests   YesGlobalNo
Key_writes   YesGlobalNo
languageYesYesYes GlobalNo
large_files_support  Yes GlobalNo
large_page_size  Yes GlobalNo
large-pagesYesYes  GlobalNo
- Variable: large_pages  Yes GlobalNo
last_insert_id  Yes SessionYes
Last_query_cost   YesSessionNo
Last_query_partial_plans   YesSessionNo
lc-messagesYesYes  BothYes
- Variable: lc_messages  Yes BothYes
lc-messages-dirYesYes  GlobalNo
- Variable: lc_messages_dir  Yes GlobalNo
lc_time_names  Yes BothYes
license  Yes GlobalNo
local_infile  Yes GlobalYes
lock_wait_timeoutYesYesYes BothYes
locked_in_memory  Yes GlobalNo
log_bin  Yes GlobalNo
log-binYesYesYes GlobalNo
log_bin_basename  Yes GlobalNo
log_bin_index  Yes GlobalNo
log-bin-indexYesYes    
log_bin_use_v1_row_eventsYesYesYes GlobalNo
log-bin-use-v1-row-eventsYesYes  GlobalNo
- Variable: log_bin_use_v1_row_events  Yes GlobalNo
log-errorYesYes  GlobalNo
- Variable: log_error  Yes GlobalNo
log-isamYesYes    
log-outputYesYes  GlobalYes
- Variable: log_output  Yes GlobalYes
log-queries-not-using-indexesYesYes  GlobalYes
- Variable: log_queries_not_using_indexes  Yes GlobalYes
log-rawYesYes    
log-short-formatYesYes    
log-slave-updatesYesYes  GlobalNo
- Variable: log_slave_updates  Yes GlobalNo
log_slave_updatesYesYesYes GlobalNo
log_slow_admin_statements  Yes GlobalYes
log-slow-admin-statementsYesYes    
log_slow_slave_statements  Yes GlobalYes
log-slow-slave-statementsYesYes    
log-tcYesYes    
log-tc-sizeYesYes    
log_throttle_queries_not_using_indexes  Yes GlobalYes
log-warningsYesYes  GlobalYes
- Variable: log_warnings  Yes GlobalYes
long_query_timeYesYesYes BothYes
low-priority-updatesYesYes  BothYes
- Variable: low_priority_updates  Yes BothYes
lower_case_file_system  Yes GlobalNo
lower_case_table_namesYesYesYes GlobalNo
master-info-fileYesYes    
master_info_repository  Yes GlobalYes
master-info-repositoryYesYes    
- Variable: master_info_repository      
master-retry-countYesYes    
master_verify_checksum  Yes GlobalYes
master-verify-checksumYesYes    
- Variable: master_verify_checksum      
max_allowed_packetYesYesYes GlobalYes
max_binlog_cache_sizeYesYesYes GlobalYes
max-binlog-dump-eventsYesYes    
max_binlog_sizeYesYesYes GlobalYes
max_binlog_stmt_cache_sizeYesYesYes GlobalYes
max_connect_errorsYesYesYes GlobalYes
max_connectionsYesYesYes GlobalYes
max_delayed_threadsYesYesYes BothYes
max_error_countYesYesYes BothYes
max_heap_table_sizeYesYesYes BothYes
max_insert_delayed_threads  Yes BothYes
max_join_sizeYesYesYes BothYes
max_length_for_sort_dataYesYesYes BothYes
max_prepared_stmt_countYesYesYes GlobalYes
max_relay_log_sizeYesYesYes GlobalYes
max_seeks_for_keyYesYesYes BothYes
max_sort_lengthYesYesYes BothYes
max_sp_recursion_depthYesYesYes BothYes
Max_used_connections   YesGlobalNo
max_user_connectionsYesYesYes BothYes
max_write_lock_countYesYesYes GlobalYes
memlockYesYesYes GlobalNo
metadata_locks_cache_size  Yes GlobalNo
metadata_locks_hash_instances  Yes GlobalNo
min-examined-row-limitYesYesYes BothYes
myisam-block-sizeYesYes    
myisam_data_pointer_sizeYesYesYes GlobalYes
myisam_max_sort_file_sizeYesYesYes GlobalYes
myisam_mmap_sizeYesYesYes GlobalNo
myisam-recover-optionsYesYes    
- Variable: myisam_recover_options      
myisam_recover_options  Yes GlobalNo
myisam_repair_threadsYesYesYes BothYes
myisam_sort_buffer_sizeYesYesYes BothYes
myisam_stats_methodYesYesYes BothYes
myisam_use_mmapYesYesYes GlobalYes
named_pipe  Yes GlobalNo
Ndb_conflict_fn_max   YesGlobalNo
Ndb_conflict_fn_old   YesGlobalNo
Ndb_number_of_data_nodes   YesGlobalNo
net_buffer_lengthYesYesYes BothYes
net_read_timeoutYesYesYes BothYes
net_retry_countYesYesYes BothYes
net_write_timeoutYesYesYes BothYes
newYesYesYes BothYes
no-defaultsYes     
Not_flushed_delayed_rows   YesGlobalNo
oldYesYesYes GlobalNo
old-alter-tableYesYes  BothYes
- Variable: old_alter_table  Yes BothYes
old_passwords  Yes BothYes
old-style-user-limitsYesYes    
Open_files   YesGlobalNo
open-files-limitYesYes  GlobalNo
- Variable: open_files_limit  Yes GlobalNo
Open_streams   YesGlobalNo
Open_table_definitions   YesGlobalNo
Open_tables   YesBothNo
Opened_files   YesGlobalNo
Opened_table_definitions   YesBothNo
Opened_tables   YesBothNo
optimizer_prune_levelYesYesYes BothYes
optimizer_search_depthYesYesYes BothYes
optimizer_switchYesYesYes BothYes
optimizer_trace  Yes BothYes
optimizer_trace_features  Yes BothYes
optimizer_trace_limit  Yes BothYes
optimizer_trace_max_mem_size  Yes BothYes
optimizer_trace_offset  Yes BothYes
partitionYesYes    
- Variable: have_partitioning      
performance_schemaYesYesYes GlobalNo
Performance_schema_accounts_lost   YesGlobalNo
performance_schema_accounts_sizeYesYesYes GlobalNo
Performance_schema_cond_classes_lost   YesGlobalNo
Performance_schema_cond_instances_lost   YesGlobalNo
performance_schema_digests_sizeYesYesYes GlobalNo
performance_schema_events_stages_history_long_sizeYesYesYes GlobalNo
performance_schema_events_stages_history_sizeYesYesYes GlobalNo
performance_schema_events_statements_history_long_sizeYesYesYes GlobalNo
performance_schema_events_statements_history_sizeYesYesYes GlobalNo
performance_schema_events_waits_history_long_sizeYesYesYes GlobalNo
performance_schema_events_waits_history_sizeYesYesYes GlobalNo
Performance_schema_file_classes_lost   YesGlobalNo
Performance_schema_file_handles_lost   YesGlobalNo
Performance_schema_file_instances_lost   YesGlobalNo
Performance_schema_hosts_lost   YesGlobalNo
performance_schema_hosts_sizeYesYesYes GlobalNo
performance-schema-instrumentYesYes    
Performance_schema_locker_lost   YesGlobalNo
performance_schema_max_cond_classesYesYesYes GlobalNo
performance_schema_max_cond_instancesYesYesYes GlobalNo
performance_schema_max_file_classesYesYesYes GlobalNo
performance_schema_max_file_handlesYesYesYes GlobalNo
performance_schema_max_file_instancesYesYesYes GlobalNo
performance_schema_max_memory_classesYesYesYes GlobalNo
performance_schema_max_mutex_classesYesYesYes GlobalNo
performance_schema_max_mutex_instancesYesYesYes GlobalNo
performance_schema_max_program_instancesYesYesYes GlobalNo
performance_schema_max_rwlock_classesYesYesYes GlobalNo
performance_schema_max_rwlock_instancesYesYesYes GlobalNo
performance_schema_max_socket_classesYesYesYes GlobalNo
performance_schema_max_socket_instancesYesYesYes GlobalNo
performance_schema_max_stage_classesYesYesYes GlobalNo
performance_schema_max_statement_classesYesYesYes GlobalNo
performance_schema_max_statement_stackYesYesYes GlobalNo
performance_schema_max_table_handlesYesYesYes GlobalNo
performance_schema_max_table_instancesYesYesYes GlobalNo
performance_schema_max_thread_classesYesYesYes GlobalNo
performance_schema_max_thread_instancesYesYesYes GlobalNo
Performance_schema_memory_classes_lost   YesGlobalNo
Performance_schema_mutex_classes_lost   YesGlobalNo
Performance_schema_mutex_instances_lost   YesGlobalNo
Performance_schema_nested_statement_lost   YesGlobalNo
Performance_schema_program_lost   YesGlobalNo
Performance_schema_rwlock_classes_lost   YesGlobalNo
Performance_schema_rwlock_instances_lost   YesGlobalNo
Performance_schema_session_connect_attrs_lost   YesGlobalNo
performance_schema_session_connect_attrs_sizeYesYesYes GlobalNo
performance_schema_setup_actors_sizeYesYesYes GlobalNo
performance_schema_setup_objects_sizeYesYesYes GlobalNo
Performance_schema_socket_classes_lost   YesGlobalNo
Performance_schema_socket_instances_lost   YesGlobalNo
Performance_schema_stage_classes_lost   YesGlobalNo
Performance_schema_statement_classes_lost   YesGlobalNo
Performance_schema_table_handles_lost   YesGlobalNo
Performance_schema_table_instances_lost   YesGlobalNo
Performance_schema_thread_classes_lost   YesGlobalNo
Performance_schema_thread_instances_lost   YesGlobalNo
Performance_schema_users_lost   YesGlobalNo
performance_schema_users_sizeYesYesYes GlobalNo
pid-fileYesYes  GlobalNo
- Variable: pid_file  Yes GlobalNo
pluginYesYes    
plugin_dirYesYesYes GlobalNo
plugin-loadYesYes    
plugin-load-addYesYes    
portYesYesYes GlobalNo
port-open-timeoutYesYes    
preload_buffer_sizeYesYesYes BothYes
Prepared_stmt_count   YesGlobalNo
print-defaultsYes     
profiling  Yes BothYes
profiling_history_sizeYesYesYes BothYes
protocol_version  Yes GlobalNo
proxy_user  Yes SessionNo
pseudo_slave_mode  Yes SessionYes
pseudo_thread_id  Yes SessionYes
Qcache_free_blocks   YesGlobalNo
Qcache_free_memory   YesGlobalNo
Qcache_hits   YesGlobalNo
Qcache_inserts   YesGlobalNo
Qcache_lowmem_prunes   YesGlobalNo
Qcache_not_cached   YesGlobalNo
Qcache_queries_in_cache   YesGlobalNo
Qcache_total_blocks   YesGlobalNo
Queries   YesBothNo
query_alloc_block_sizeYesYesYes BothYes
query_cache_limitYesYesYes GlobalYes
query_cache_min_res_unitYesYesYes GlobalYes
query_cache_sizeYesYesYes GlobalYes
query_cache_typeYesYesYes BothYes
query_cache_wlock_invalidateYesYesYes BothYes
query_prealloc_sizeYesYesYes BothYes
Questions   YesBothNo
rand_seed1  Yes SessionYes
rand_seed2  Yes SessionYes
range_alloc_block_sizeYesYesYes BothYes
read_buffer_sizeYesYesYes BothYes
read_onlyYesYesYes GlobalYes
read_rnd_buffer_sizeYesYesYes BothYes
relay-logYesYes  GlobalNo
- Variable: relay_log  Yes GlobalNo
relay_log_basename  Yes GlobalNo
relay-log-indexYesYes  GlobalNo
- Variable: relay_log_index  Yes GlobalNo
relay_log_indexYesYesYes GlobalNo
relay-log-info-fileYesYes    
- Variable: relay_log_info_file      
relay_log_info_fileYesYesYes GlobalNo
relay-log-info-repositoryYesYes    
- Variable: relay_log_info_repository      
relay_log_info_repository  Yes GlobalYes
relay_log_purgeYesYesYes GlobalYes
relay_log_recoveryYesYesYes GlobalYes
relay-log-recoveryYesYes    
- Variable: relay_log_recovery      
relay_log_space_limitYesYesYes GlobalNo
removeYes     
replicate-do-dbYesYes    
replicate-do-tableYesYes    
replicate-ignore-dbYesYes    
replicate-ignore-tableYesYes    
replicate-rewrite-dbYesYes    
replicate-same-server-idYesYes    
replicate-wild-do-tableYesYes    
replicate-wild-ignore-tableYesYes    
report-hostYesYes  GlobalNo
- Variable: report_host  Yes GlobalNo
report-passwordYesYes  GlobalNo
- Variable: report_password  Yes GlobalNo
report-portYesYes  GlobalNo
- Variable: report_port  Yes GlobalNo
report-userYesYes  GlobalNo
- Variable: report_user  Yes GlobalNo
Rpl_semi_sync_master_clients   YesGlobalNo
rpl_semi_sync_master_enabled  Yes GlobalYes
Rpl_semi_sync_master_net_avg_wait_time   YesGlobalNo
Rpl_semi_sync_master_net_wait_time   YesGlobalNo
Rpl_semi_sync_master_net_waits   YesGlobalNo
Rpl_semi_sync_master_no_times   YesGlobalNo
Rpl_semi_sync_master_no_tx   YesGlobalNo
Rpl_semi_sync_master_status   YesGlobalNo
Rpl_semi_sync_master_timefunc_failures   YesGlobalNo
rpl_semi_sync_master_timeout  Yes GlobalYes
rpl_semi_sync_master_trace_level  Yes GlobalYes
Rpl_semi_sync_master_tx_avg_wait_time   YesGlobalNo
Rpl_semi_sync_master_tx_wait_time   YesGlobalNo
Rpl_semi_sync_master_tx_waits   YesGlobalNo
rpl_semi_sync_master_wait_no_slave  Yes GlobalYes
rpl_semi_sync_master_wait_point  Yes GlobalYes
Rpl_semi_sync_master_wait_pos_backtraverse   YesGlobalNo
Rpl_semi_sync_master_wait_sessions   YesGlobalNo
Rpl_semi_sync_master_yes_tx   YesGlobalNo
rpl_semi_sync_slave_enabled  Yes GlobalYes
Rpl_semi_sync_slave_status   YesGlobalNo
rpl_semi_sync_slave_trace_level  Yes GlobalYes
rpl_stop_slave_timeoutYesYesYes GlobalYes
Rsa_public_key   YesGlobalNo
safe-user-createYesYes    
secure-authYesYes  GlobalYes
- Variable: secure_auth  Yes GlobalYes
secure-file-privYesYes  GlobalNo
- Variable: secure_file_priv  Yes GlobalNo
Select_full_join   YesBothNo
Select_full_range_join   YesBothNo
Select_range   YesBothNo
Select_range_check   YesBothNo
Select_scan   YesBothNo
server-idYesYes  GlobalYes
- Variable: server_id  Yes GlobalYes
server_uuid  Yes GlobalNo
sha256_password_private_key_path  Yes GlobalNo
sha256_password_public_key_path  Yes GlobalNo
shared_memory  Yes GlobalNo
shared_memory_base_name  Yes GlobalNo
show-slave-auth-infoYesYes    
skip-character-set-client-handshakeYesYes    
skip-concurrent-insertYesYes    
- Variable: concurrent_insert      
skip-event-schedulerYesYes    
skip_external_lockingYesYesYes GlobalNo
skip-grant-tablesYesYes    
skip-host-cacheYesYes    
skip-name-resolveYesYes  GlobalNo
- Variable: skip_name_resolve  Yes GlobalNo
skip-networkingYesYes  GlobalNo
- Variable: skip_networking  Yes GlobalNo
skip-newYesYes    
skip-partitionYesYes    
skip-show-databaseYesYes  GlobalNo
- Variable: skip_show_database  Yes GlobalNo
skip-slave-startYesYes    
skip-sslYesYes    
skip-stack-traceYesYes    
skip-symbolic-linksYes     
slave_allow_batchingYesYesYes GlobalYes
slave_checkpoint_groupYesYesYes GlobalYes
slave-checkpoint-groupYesYes    
- Variable: slave_checkpoint_group      
slave_checkpoint_periodYesYesYes GlobalYes
slave-checkpoint-periodYesYes    
- Variable: slave_checkpoint_period      
slave_compressed_protocolYesYesYes GlobalYes
slave_exec_modeYesYesYes GlobalYes
Slave_heartbeat_period   YesGlobalNo
Slave_last_heartbeat   YesGlobalNo
slave-load-tmpdirYesYes  GlobalNo
- Variable: slave_load_tmpdir  Yes GlobalNo
slave_max_allowed_packet  Yes GlobalYes
slave-max-allowed-packetYesYes    
- Variable: slave_max_allowed_packet      
slave-net-timeoutYesYes  GlobalYes
- Variable: slave_net_timeout  Yes GlobalYes
Slave_open_temp_tables   YesGlobalNo
slave_parallel_workers  Yes GlobalYes
slave-parallel-workersYesYes    
- Variable: slave_parallel_workers      
slave_pending_jobs_size_max  Yes GlobalYes
slave-pending-jobs-size-maxYes     
- Variable: slave_pending_jobs_size_max      
Slave_received_heartbeats   YesGlobalNo
Slave_retried_transactions   YesGlobalNo
slave-rows-search-algorithmsYesYes    
- Variable: slave_rows_search_algorithms      
slave_rows_search_algorithms  Yes GlobalYes
Slave_running   YesGlobalNo
slave-skip-errorsYesYes  GlobalNo
- Variable: slave_skip_errors  Yes GlobalNo
slave_sql_verify_checksum  Yes GlobalYes
slave-sql-verify-checksumYesYes    
slave_transaction_retriesYesYesYes GlobalYes
slave_type_conversionsYesYesYes GlobalNo
Slow_launch_threads   YesBothNo
slow_launch_timeYesYesYes GlobalYes
Slow_queries   YesBothNo
slow-query-logYesYes  GlobalYes
- Variable: slow_query_log  Yes GlobalYes
slow_query_log_fileYesYesYes GlobalYes
slow-start-timeoutYesYes    
socketYesYesYes GlobalNo
sort_buffer_sizeYesYesYes BothYes
Sort_merge_passes   YesBothNo
Sort_range   YesBothNo
Sort_rows   YesBothNo
Sort_scan   YesBothNo
sporadic-binlog-dump-failYesYes    
sql_auto_is_null  Yes BothYes
sql_big_selects  Yes BothYes
sql_big_tables  Yes BothYes
sql_buffer_result  Yes BothYes
sql_log_bin  Yes BothYes
sql_log_off  Yes BothYes
sql-modeYesYes  BothYes
- Variable: sql_mode  Yes BothYes
sql_notes  Yes BothYes
sql_quote_show_create  Yes BothYes
sql_safe_updates  Yes BothYes
sql_select_limit  Yes BothYes
sql_slave_skip_counter  Yes GlobalYes
sql_warnings  Yes BothYes
sslYesYes    
Ssl_accept_renegotiates   YesGlobalNo
Ssl_accepts   YesGlobalNo
ssl-caYesYes  GlobalNo
- Variable: ssl_ca  Yes GlobalNo
Ssl_callback_cache_hits   YesGlobalNo
ssl-capathYesYes  GlobalNo
- Variable: ssl_capath  Yes GlobalNo
ssl-certYesYes  GlobalNo
- Variable: ssl_cert  Yes GlobalNo
ssl-cipherYesYes  GlobalNo
- Variable: ssl_cipher  Yes GlobalNo
Ssl_cipher   YesBothNo
Ssl_cipher_list   YesBothNo
Ssl_client_connects   YesGlobalNo
Ssl_connect_renegotiates   YesGlobalNo
ssl-crlYesYes  GlobalNo
- Variable: ssl_crl  Yes GlobalNo
ssl-crlpathYesYes  GlobalNo
- Variable: ssl_crlpath  Yes GlobalNo
Ssl_ctx_verify_depth   YesGlobalNo
Ssl_ctx_verify_mode   YesGlobalNo
Ssl_default_timeout   YesBothNo
Ssl_finished_accepts   YesGlobalNo
Ssl_finished_connects   YesGlobalNo
ssl-keyYesYes  GlobalNo
- Variable: ssl_key  Yes GlobalNo
Ssl_server_not_after   YesBothNo
Ssl_server_not_before   YesBothNo
Ssl_session_cache_hits   YesGlobalNo
Ssl_session_cache_misses   YesGlobalNo
Ssl_session_cache_mode   YesGlobalNo
Ssl_session_cache_overflows   YesGlobalNo
Ssl_session_cache_size   YesGlobalNo
Ssl_session_cache_timeouts   YesGlobalNo
Ssl_sessions_reused   YesBothNo
Ssl_used_session_cache_entries   YesGlobalNo
Ssl_verify_depth   YesBothNo
Ssl_verify_mode   YesBothNo
ssl-verify-server-certYesYes    
Ssl_version   YesBothNo
standaloneYesYes    
storage_engine  Yes BothYes
stored_program_cacheYesYesYes GlobalYes
super-large-pagesYesYes    
symbolic-linksYesYes    
sync_binlogYesYesYes GlobalYes
sync_frmYesYesYes GlobalYes
sync_master_infoYesYesYes GlobalYes
sync_relay_logYesYesYes GlobalYes
sync_relay_log_infoYesYesYes GlobalYes
sysdate-is-nowYesYes    
system_time_zone  Yes GlobalNo
table_definition_cache  Yes GlobalYes
Table_locks_immediate   YesGlobalNo
Table_locks_waited   YesGlobalNo
table_open_cache  Yes GlobalYes
Table_open_cache_hits   YesBothNo
table_open_cache_instances  Yes GlobalNo
Table_open_cache_misses   YesBothNo
Table_open_cache_overflows   YesBothNo
tc-heuristic-recoverYesYes    
Tc_log_max_pages_used   YesGlobalNo
Tc_log_page_size   YesGlobalNo
Tc_log_page_waits   YesGlobalNo
temp-poolYesYes    
thread_cache_sizeYesYesYes GlobalYes
thread_concurrencyYesYesYes GlobalNo
thread_handlingYesYesYes GlobalNo
thread_stackYesYesYes GlobalNo
Threads_cached   YesGlobalNo
Threads_connected   YesGlobalNo
Threads_created   YesGlobalNo
Threads_running   YesGlobalNo
time_format  Yes GlobalNo
time_zone  Yes BothYes
timed_mutexesYesYesYes GlobalYes
timestamp  Yes SessionYes
tmp_table_sizeYesYesYes BothYes
tmpdirYesYesYes GlobalNo
transaction_alloc_block_sizeYesYesYes BothYes
transaction-isolationYesYes    
- Variable: tx_isolation      
transaction_prealloc_sizeYesYesYes BothYes
transaction-read-onlyYesYes    
- Variable: tx_read_only      
tx_isolation  Yes BothYes
tx_read_only  Yes BothYes
unique_checks  Yes BothYes
updatable_views_with_limitYesYesYes BothYes
Uptime   YesGlobalNo
Uptime_since_flush_status   YesGlobalNo
userYesYes    
validate-passwordYesYes    
validate_password_dictionary_file  Yes GlobalNo
validate_password_length  Yes GlobalYes
validate_password_mixed_case_count  Yes GlobalYes
validate_password_number_count  Yes GlobalYes
validate_password_policy  Yes GlobalYes
validate_password_special_char_count  Yes GlobalYes
validate_user_plugins  Yes GlobalNo
verboseYesYes    
version  Yes GlobalNo
version_comment  Yes GlobalNo
version_compile_machine  Yes GlobalNo
version_compile_os  Yes GlobalNo
wait_timeoutYesYesYes BothYes
warning_count  Yes SessionNo

[a] This variable is dynamic, but its value is set only by the server. You should not set its value manually.

[b] This variable is dynamic, but its value is set only by the server. You should not set its value manually.


5.1.2. Server Configuration Defaults

The MySQL server has many operating parameters, which you can change at server startup using command-line options or configuration files (option files). It is also possible to change many parameters at runtime. For general instructions on setting parameters at startup or runtime, see Section 5.1.3, “Server Command Options”, and Section 5.1.4, “Server System Variables”.

On Unix platforms, mysql_install_db creates a default option file named my.cnf in the base installation directory. This file is created from a template named my-default.cnf that is included in the distribution package. You can find the template in or under the base installation directory. When started using mysqld_safe, the server uses the my.cnf file by default. If my.cnf already exists, mysql_install_db assumes it is in use and writes a new file named my-new.cnf instead.

With one exception, the settings in the default option file are commented and have no effect. The exception is that the file changes the sql_mode system variable from its default of NO_ENGINE_SUBSTITUTION to also include STRICT_TRANS_TABLES:

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 

This setting produces a server configuration that results in errors rather than warnings for bad data in operations that modify transactional tables. See Section 5.1.7, “Server SQL Modes”.

On Windows, MySQL Installer interacts with the user and creates a file named my.ini in the base installation directory as the default option file. If you install on Windows from a Zip archive, you can copy the my-default.ini template file in the base installation directory to my.ini and use the latter as the default option file.

Note

On Windows, the .ini or .cnf option file extension might not be displayed.

On any platform, after completing the installation process, you can edit the default option file at any time to modify the parameters used by the server. For example, to use a parameter setting that is commented out with a # character at the beginning of the line, remove the #, and modify the parameter value if necessary. To disable a setting, either add a # to the beginning of its line or remove the line entirely.
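
For example, after such edits the relevant part of a default option file might look like this (the settings and values shown here are purely illustrative, not recommendations):

  [mysqld]
  sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
  # key_buffer_size = 16M          (still commented out; has no effect)
  max_connections = 200            # uncommented and edited; takes effect at the next server restart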

For additional information about option file format and syntax, see Section 4.2.3.3, “Using Option Files”.

5.1.3. Server Command Options

When you start the mysqld server, you can specify program options using any of the methods described in Section 4.2.3, “Specifying Program Options”. The most common methods are to provide options in an option file or on the command line. However, in most cases it is desirable to make sure that the server uses the same options each time it runs. The best way to ensure this is to list them in an option file. See Section 4.2.3.3, “Using Option Files”.

mysqld reads options from the [mysqld] and [server] groups. mysqld_safe reads options from the [mysqld], [server], [mysqld_safe], and [safe_mysqld] groups. mysql.server reads options from the [mysqld] and [mysql.server] groups.

An embedded MySQL server usually reads options from the [server], [embedded], and [xxxxx_SERVER] groups, where xxxxx is the name of the application into which the server is embedded.
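
For example, a single option file can hold settings for several of these programs by placing them in the appropriate groups; the option values below are illustrative only:

  [mysqld]
  port = 3306
  datadir = /var/mysql/data

  [mysqld_safe]
  log-error = /var/log/mysql/error.log

  [mysql.server]
  basedir = /usr/local/mysql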

mysqld accepts many command options. For a brief summary, execute mysqld --help. To see the full list, use mysqld --verbose --help.

The following list describes some of the most common server options. Additional options are described in other sections of this manual.

You can also set the values of server system variables by using variable names as options, as described at the end of this section.

Some options control the size of buffers or caches. For a given buffer, the server might need to allocate internal data structures. These structures typically are allocated from the total memory allocated to the buffer, and the amount of space required might be platform dependent. This means that when you assign a value to an option that controls a buffer size, the amount of space actually available might differ from the value assigned. In some cases, the amount might be less than the value assigned. It is also possible that the server will adjust a value upward. For example, if you assign a value of 0 to an option for which the minimum value is 1024, the server sets the value to 1024.
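
The same kind of adjustment can be observed for values assigned at runtime. For example, assuming a session variable such as sort_buffer_size, whose minimum permitted value is greater than zero, assigning a too-small value causes the server to store the nearest permitted value and issue a warning:

  SET SESSION sort_buffer_size = 0;
  SHOW WARNINGS;                                    -- reports that the value was adjusted
  SHOW SESSION VARIABLES LIKE 'sort_buffer_size';   -- shows the minimum value the server enforced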

Values for buffer sizes, lengths, and stack sizes are given in bytes unless otherwise specified.

Some options take file name values. Unless otherwise specified, the default file location is the data directory if the value is a relative path name. To specify the location explicitly, use an absolute path name. Suppose that the data directory is /var/mysql/data. If a file-valued option is given as a relative path name, it will be located under /var/mysql/data. If the value is an absolute path name, its location is as given by the path name.
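
For example, with the data directory /var/mysql/data, option file lines such as the following (the option names and paths are illustrative) resolve as indicated:

  [mysqld]
  general_log_file = host1.log                     # relative: resolves to /var/mysql/data/host1.log
  slow_query_log_file = /var/log/mysql/slow.log    # absolute: used exactly as given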

  • --help, -?

    Command-Line Format: -?, --help
    Option-File Format: help

    Display a short help message and exit. Use both the --verbose and --help options to see the full message.

  • --allow-suspicious-udfs

    Command-Line Format: --allow-suspicious-udfs
    Option-File Format: allow-suspicious-udfs
    Permitted Values: Type: boolean; Default: FALSE

    This option controls whether user-defined functions that have only an xxx symbol for the main function can be loaded. By default, the option is off and only UDFs that have at least one auxiliary symbol can be loaded; this prevents attempts at loading functions from shared object files other than those containing legitimate UDFs. See Section 22.3.2.6, “User-Defined Function Security Precautions”.

  • --ansi

    Command-Line Format: --ansi, -a
    Option-File Format: ansi

    Use standard (ANSI) SQL syntax instead of MySQL syntax. For more precise control over the server SQL mode, use the --sql-mode option instead. See Section 1.8.3, “Running MySQL in ANSI Mode”, and Section 5.1.7, “Server SQL Modes”.

  • --basedir=path, -b path

    Command-Line Format: --basedir=path, -b
    Option-File Format: basedir
    System Variable Name: basedir
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: file name

    The path to the MySQL installation directory. All paths are usually resolved relative to this directory.

  • --big-tables

    Command-Line Format: --big-tables
    Option-File Format: big-tables
    System Variable Name: big_tables
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: boolean

    Enable large result sets by saving all temporary result sets in files. This option prevents most 'table full' errors, but also slows down queries for which in-memory tables would suffice. Since MySQL 3.23.2, the server has been able to handle large result sets automatically by using memory for small temporary tables and switching to disk-based tables where necessary.

  • --bind-address=addr

    Command-Line Format: --bind-address=addr
    Option-File Format: bind-address
    System Variable Name: bind_address
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: string; Default: *

    The MySQL server listens on a single network socket for TCP/IP connections. This socket is bound to a single address, but it is possible for an address to map onto multiple network interfaces. To specify an address, use the --bind-address=addr option at server startup, where addr is an IPv4 or IPv6 address or a host name. If addr is a host name, the server resolves the name to an IP address and binds to that address.

    The server treats different types of addresses as follows:

    • If the address is *, the server accepts TCP/IP connections on all server host IPv6 and IPv4 interfaces if the server host supports IPv6, or accepts TCP/IP connections on all IPv4 addresses otherwise. Use this address to permit both IPv4 and IPv6 connections on all server interfaces. This value is the default in MySQL 5.7.

    • If the address is 0.0.0.0, the server accepts TCP/IP connections on all server host IPv4 interfaces.

    • If the address is ::, the server accepts TCP/IP connections on all server host IPv4 and IPv6 interfaces.

    • If the address is an IPv4-mapped address, the server accepts TCP/IP connections for that address, in either IPv4 or IPv6 format. For example, if the server is bound to ::ffff:127.0.0.1, clients can connect using --host=127.0.0.1 or --host=::ffff:127.0.0.1.

    • If the address is a regular IPv4 or IPv6 address (such as 127.0.0.1 or ::1), the server accepts TCP/IP connections only for that IPv4 or IPv6 address.

    If you intend to bind the server to a specific address, be sure that the mysql.user grant table contains an account with administrative privileges that you can use to connect to that address. Otherwise, you will not be able to shut down the server. For example, if you bind the server to *, you can connect to it using all existing accounts. But if you bind the server to ::1, it accepts connections only on that address. In that case, first make sure that the 'root'@'::1' account is present in the mysql.user table so you can still connect to the server to shut it down.
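
    For example, before binding the server to ::1 you might create such an administrative account; the account name and password here are placeholders only:

      CREATE USER 'admin'@'::1' IDENTIFIED BY 'password';
      GRANT ALL PRIVILEGES ON *.* TO 'admin'@'::1' WITH GRANT OPTION;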

  • --binlog-format={ROW|STATEMENT|MIXED}

    Command-Line Format: --binlog-format=format
    Option-File Format: binlog-format
    System Variable Name: binlog_format
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: enumeration; Default: STATEMENT; Valid Values: ROW, STATEMENT, MIXED

    Specify whether to use row-based, statement-based, or mixed replication. Statement-based is the default in MySQL 5.7. See Section 16.1.2, “Replication Formats”.

    Under some conditions, changing this variable at runtime is not possible, or causes replication to fail. See Section 5.2.4.2, “Setting The Binary Log Format”, for more information.

    Setting the binary logging format without enabling binary logging sets the binlog_format global system variable and logs a warning.
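
    When changing the format at runtime is permitted, use the usual system variable syntax, for example:

      SET GLOBAL binlog_format = 'MIXED';     -- applies to sessions that connect afterward
      SET SESSION binlog_format = 'ROW';      -- applies only to the current session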

  • --bootstrap

    Command-Line Format: --bootstrap
    Option-File Format: bootstrap

    This option is used by the mysql_install_db script to create the MySQL privilege tables without having to start a full MySQL server.

    Replication and global transaction identifiers are automatically disabled whenever this option is used (Bug #1332602). See Section 16.1.3, “Replication with Global Transaction Identifiers”.

  • --character-sets-dir=path

    Command-Line Format: --character-sets-dir=path
    Option-File Format: character-sets-dir
    System Variable Name: character_sets_dir
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: directory name

    The directory where character sets are installed. See Section 10.5, “Character Set Configuration”.

  • --character-set-client-handshake

    Command-Line Format: --character-set-client-handshake
    Option-File Format: character-set-client-handshake
    Permitted Values: Type: boolean; Default: TRUE

    Do not ignore character set information sent by the client. To ignore client information and use the default server character set, use --skip-character-set-client-handshake; this makes MySQL behave like MySQL 4.0.

  • --character-set-filesystem=charset_name

    Command-Line Format: --character-set-filesystem=name
    Option-File Format: character-set-filesystem
    System Variable Name: character_set_filesystem
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: string

    The file system character set. This option sets the character_set_filesystem system variable.

  • --character-set-server=charset_name, -C charset_name

    Command-Line Format: --character-set-server
    Option-File Format: character-set-server
    System Variable Name: character_set_server
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: string

    Use charset_name as the default server character set. See Section 10.5, “Character Set Configuration”. If you use this option to specify a nondefault character set, you should also use --collation-server to specify the collation.
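
    For example, an option file can set the server character set and collation together; utf8 is shown here only as an illustration:

      [mysqld]
      character-set-server = utf8
      collation-server = utf8_general_ci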

  • --chroot=path, -r path

    Command-Line Format: --chroot=name, -r name
    Option-File Format: chroot
    Permitted Values: Type: file name

    Put the mysqld server in a closed environment during startup by using the chroot() system call. This is a recommended security measure. Note that use of this option somewhat limits LOAD DATA INFILE and SELECT ... INTO OUTFILE.

  • --collation-server=collation_name

    Command-Line Format: --collation-server
    Option-File Format: collation-server
    System Variable Name: collation_server
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: string

    Use collation_name as the default server collation. See Section 10.5, “Character Set Configuration”.

  • --console

    Command-Line Format: --console
    Option-File Format: console
    Platform Specific: Windows

    (Windows only.) Write error log messages to stderr and stdout even if --log-error is specified. mysqld does not close the console window if this option is used.

    If both --log-error and --console are specified, --console takes precedence. The server writes to the console, but not to the log file. (In MySQL 5.5 and 5.6, the precedence is reversed: --log-error causes --console to be ignored.)

  • --core-file

    Command-Line Format: --core-file
    Option-File Format: core-file
    Permitted Values: Type: boolean; Default: OFF

    Write a core file if mysqld dies. The name and location of the core file is system dependent. On Linux, a core file named core.pid is written to the current working directory of the process, which for mysqld is the data directory. pid represents the process ID of the server process. On Mac OS X, a core file named core.pid is written to the /cores directory. On Solaris, use the coreadm command to specify where to write the core file and how to name it.

    For some systems, to get a core file you must also specify the --core-file-size option to mysqld_safe. See Section 4.3.2, “mysqld_safe — MySQL Server Startup Script”. On some systems, such as Solaris, you do not get a core file if you are also using the --user option. There might be additional restrictions or limitations. For example, it might be necessary to execute ulimit -c unlimited before starting the server. Consult your system documentation.

  • --datadir=path, -h path

    Command-Line Format: --datadir=path, -h
    Option-File Format: datadir
    System Variable Name: datadir
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: file name

    The path to the data directory.

  • --debug[=debug_options], -# [debug_options]

    Command-Line Format: --debug[=debug_options]
    Option-File Format: debug
    System Variable Name: debug
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: string; Default: 'd:t:o,/tmp/mysqld.trace'

    If MySQL is configured with -DWITH_DEBUG=1, you can use this option to get a trace file of what mysqld is doing. A typical debug_options string is 'd:t:o,file_name'. The default is 'd:t:i:o,mysqld.trace'.

    Using -DWITH_DEBUG=1 to configure MySQL with debugging support enables you to use the --debug="d,parser_debug" option when you start the server. This causes the Bison parser that is used to process SQL statements to dump a parser trace to the server's standard error output. Typically, this output is written to the error log.

    This option may be given multiple times. Values that begin with + or - are added to or subtracted from the previous value. For example, --debug=T --debug=+P sets the value to P:T.

    For more information, see Section 22.4.3, “The DBUG Package”.

  • --debug-sync-timeout[=N]

    Command-Line Format: --debug-sync-timeout[=#]
    Option-File Format: debug-sync-timeout
    Permitted Values: Type: numeric

    Controls whether the Debug Sync facility for testing and debugging is enabled. Use of Debug Sync requires that MySQL be configured with the -DENABLE_DEBUG_SYNC=1 option (see Section 2.9.4, “MySQL Source-Configuration Options”). If Debug Sync is not compiled in, this option is not available. The option value is a timeout in seconds. The default value is 0, which disables Debug Sync. To enable it, specify a value greater than 0; this value also becomes the default timeout for individual synchronization points. If the option is given without a value, the timeout is set to 300 seconds.

    For a description of the Debug Sync facility and how to use synchronization points, see MySQL Internals: Test Synchronization.

  • --default-authentication-plugin=plugin_name

    Command-Line Format--default-authentication-plugin=plugin_name
    Option-File Formatdefault-authentication-plugin
     Permitted Values
    Typeenumeration
    Defaultmysql_native_password
    Valid Valuesmysql_native_password
    sha256_password

    This option sets the default authentication plugin. Permitted values are mysql_native_password (use MySQL native passwords) and sha256_password (use SHA-256 passwords). For more information about these plugins, see Section 6.3.7.2, “The Native Authentication Plugin”, and Section 6.3.7.4, “The SHA-256 Authentication Plugin”.
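
    For example, to make sha256_password the default plugin for new accounts, you might add a line like the following to an option file. This is only a sketch; it assumes your clients can actually authenticate with SHA-256 passwords (which requires an SSL connection or RSA key-pair support).

      [mysqld]
      default-authentication-plugin=sha256_password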

    Note

    If you use this option to change the default authentication plugin to a value other than mysql_native_password, clients older than MySQL 5.5.6 will no longer be able to connect because they will not understand the resulting change to the authentication protocol.

    The --default-authentication-plugin value affects these aspects of server operation:

    • It determines which authentication plugin the server assigns to new accounts created by CREATE USER and GRANT statements that do not name a plugin explicitly with an IDENTIFIED WITH clause.

    • It sets the old_passwords system variable at startup to the value that is consistent with the password hashing method required by the default plugin. The old_passwords value affects hashing of passwords specified in the IDENTIFIED BY clause of CREATE USER and GRANT, and passwords specified as the argument to the PASSWORD() function.

    • For an account created with either of the following statements, the server associates the account with the default authentication plugin and assigns the account the given password, hashed according to the value of old_passwords.

      CREATE USER ... IDENTIFIED BY 'cleartext password';
      GRANT ...  IDENTIFIED BY 'cleartext password';
      
    • For an account created with either of the following statements, the statement fails if the password hash is not encrypted using the hash format required by the default authentication plugin. Otherwise, the server associates the account with the default authentication plugin and assigns the account the given password hash.

      CREATE USER ... IDENTIFIED BY PASSWORD 'encrypted password';
      GRANT ...  IDENTIFIED BY PASSWORD 'encrypted password';
      
  • --default-storage-engine=type

    Command-Line Format--default-storage-engine=name
    Option-File Formatdefault-storage-engine
    System Variable Namedefault_storage_engine
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeenumeration
    DefaultInnoDB

    Set the default storage engine for tables. See Chapter 14, Storage Engines. This option sets the storage engine for permanent tables only. To set the storage engine for TEMPORARY tables, set the default_tmp_storage_engine system variable.

    If you disable the default storage engine at server startup, you must set the default engine for both permanent and TEMPORARY tables to a different engine or the server will not start.
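
    For example, if InnoDB is disabled at startup, an option file might name a replacement engine for both kinds of tables. This is only a sketch; MyISAM is used here purely as an illustration.

      [mysqld]
      skip-innodb
      default-storage-engine=MyISAM
      default-tmp-storage-engine=MyISAM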

  • --default-time-zone=timezone

    Command-Line Format--default-time-zone=name
    Option-File Formatdefault-time-zone
     Permitted Values
    Typestring

    Set the default server time zone. This option sets the global time_zone system variable. If this option is not given, the default time zone is the same as the system time zone (given by the value of the system_time_zone system variable).
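
    For example, to run the server with UTC as the default time zone regardless of the system setting, you might start it like this. The value shown is only an illustration; named zones such as 'Europe/Paris' also work if the time zone tables have been loaded.

      shell> mysqld --default-time-zone='+00:00'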

  • --delay-key-write[={OFF|ON|ALL}]

    Command-Line Format--delay-key-write[=name]
    Option-File Formatdelay-key-write
    System Variable Namedelay_key_write
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeenumeration
    DefaultON
    Valid ValuesON
    OFF
    ALL

    Specify how to use delayed key writes. Delayed key writing causes key buffers not to be flushed between writes for MyISAM tables. OFF disables delayed key writes. ON enables delayed key writes for those tables that were created with the DELAY_KEY_WRITE option. ALL delays key writes for all MyISAM tables. See Section 8.11.2, “Tuning Server Parameters”, and Section 14.3.1, “MyISAM Startup Options”.

    Note

    If you set this variable to ALL, you should not use MyISAM tables from within another program (such as another MySQL server or myisamchk) when the tables are in use. Doing so leads to index corruption.

  • --des-key-file=file_name

    Command-Line Format--des-key-file=file_name
    Option-File Formatdes-key-file

    Read the default DES keys from this file. These keys are used by the DES_ENCRYPT() and DES_DECRYPT() functions.

  • --enable-named-pipe

    Command-Line Format--enable-named-pipe
    Option-File Formatenable-named-pipe
    Platform Specificwindows

    Enable support for named pipes. This option applies only on Windows.

  • --event-scheduler[=value]

    Command-Line Format--event-scheduler[=value]
    Option-File Formatevent-scheduler
    System Variable Nameevent_scheduler
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeenumeration
    DefaultOFF
    Valid ValuesON
    OFF
    DISABLED

    Enable or disable, and start or stop, the event scheduler.

    For detailed information, see The --event-scheduler Option.

  • --exit-info[=flags], -T [flags]

    Command-Line Format--exit-info[=flags]
     -T [flags]
    Option-File Formatexit-info
     Permitted Values
    Typenumeric

    This is a bit mask of different flags that you can use for debugging the mysqld server. Do not use this option unless you know exactly what it does!

  • --external-locking

    Command-Line Format--external-locking
    Option-File Formatexternal-locking
     Permitted Values
    Typeboolean
    DefaultFALSE

    Enable external locking (system locking), which is disabled by default as of MySQL 4.0. Note that if you use this option on a system on which lockd does not fully work (such as Linux), it is easy for mysqld to deadlock.

    To disable external locking explicitly, use --skip-external-locking.

    External locking affects only MyISAM table access. For more information, including conditions under which it can and cannot be used, see Section 8.10.5, “External Locking”.

  • --flush

    Command-Line Format--flush
    Option-File Formatflush
    System Variable Nameflush
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    Flush (synchronize) all changes to disk after each SQL statement. Normally, MySQL does a write of all changes to disk only after each SQL statement and lets the operating system handle the synchronizing to disk. See Section C.5.4.2, “What to Do If MySQL Keeps Crashing”.

  • --gdb

    Command-Line Format--gdb
    Option-File Formatgdb
     Permitted Values
    Typeboolean
    DefaultFALSE

    Install an interrupt handler for SIGINT (needed to stop mysqld with ^C to set breakpoints) and disable stack tracing and core file handling. See Section 22.4, “Debugging and Porting MySQL”.

  • --general-log[={0|1}]

    Command-Line Format--general-log
    Option-File Formatgeneral-log
    System Variable Namegeneral_log
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    Specify the initial general query log state. With no argument or an argument of 1, the --general-log option enables the log. With an argument of 0, or if the option is omitted entirely, the log is disabled.

  • --ignore-db-dir=dir_name

    Command-Line Format--ignore-db-dir
    Option-File Formatignore-db-dir
     Permitted Values
    Typedirectory name

    This option tells the server to ignore the given directory name for purposes of the SHOW DATABASES statement or INFORMATION_SCHEMA tables. For example, if a MySQL configuration locates the data directory at the root of a file system on Unix, the system might create a lost+found directory there that the server should ignore. Starting the server with --ignore-db-dir=lost+found causes that name not to be listed as a database.

    To specify more than one name, use this option multiple times, once for each name. Specifying the option with an empty value (that is, as --ignore-db-dir=) resets the directory list to the empty list.

    Instances of this option given at server startup are used to set the ignore_db_dirs system variable.
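
    For example, to ignore both lost+found and a second directory name (the .snapshot name here is purely illustrative), give the option twice:

      shell> mysqld --ignore-db-dir=lost+found --ignore-db-dir=.snapshot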

  • --init-file=file_name

    Command-Line Format--init-file=file_name
    Option-File Formatinit-file
    System Variable Nameinit_file
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    Read SQL statements from this file at startup. Each statement must be on a single line and should not include comments.

  • --innodb-xxx

    Set an option for the InnoDB storage engine. The InnoDB options are listed in Section 14.2.6, “InnoDB Startup Options and System Variables”.

  • --install [service_name]

    Command-Line Format--install [service_name]

    (Windows only) Install the server as a Windows service that starts automatically during Windows startup. The default service name is MySQL if no service_name value is given. For more information, see Section 2.3.5.7, “Starting MySQL as a Windows Service”.

    Note

    If the server is started with the --defaults-file and --install options, --install must be first.

  • --install-manual [service_name]

    Command-Line Format--install-manual [service_name]

    (Windows only) Install the server as a Windows service that must be started manually. It does not start automatically during Windows startup. The default service name is MySQL if no service_name value is given. For more information, see Section 2.3.5.7, “Starting MySQL as a Windows Service”.

    Note

    If the server is started with the --defaults-file and --install-manual options, --install-manual must be first.

  • --language=lang_name, -L lang_name

    Deprecated5.6.1, by lc-messages-dir
    Command-Line Format--language=name
     -L
    Option-File Formatlanguage
    System Variable Namelanguage
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typedirectory name
    Default/usr/local/mysql/share/mysql/english/

    The language to use for error messages. lang_name can be given as the language name or as the full path name to the directory where the language files are installed. See Section 10.2, “Setting the Error Message Language”.

    In MySQL 5.7, --lc-messages-dir and --lc-messages should be used rather than --language, which is deprecated (and handled as an alias for --lc-messages-dir). The --language option will be removed in a future MySQL release.

  • --large-pages

    Command-Line Format--large-pages
    Option-File Formatlarge-pages
    System Variable Namelarge_pages
    Variable ScopeGlobal
    Dynamic VariableNo
    Platform Specificlinux
     Permitted Values
    Type (linux)boolean
    DefaultFALSE

    Some hardware/operating system architectures support memory pages greater than the default (usually 4KB). The actual implementation of this support depends on the underlying hardware and operating system. Applications that perform a lot of memory accesses may obtain performance improvements by using large pages due to reduced Translation Lookaside Buffer (TLB) misses.

    MySQL 5.7 supports the Linux implementation of large page support (which is called HugeTLB in Linux). See Section 8.11.4.2, “Enabling Large Page Support”. For Solaris support of large pages, see the description of the --super-large-pages option.

    --large-pages is disabled by default.

  • --lc-messages=locale_name

    Command-Line Format--lc-messages=name
    Option-File Formatlc-messages
    System Variable Namelc_messages
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typestring

    The locale to use for error messages. The server converts the argument to a language name and combines it with the value of the --lc-messages-dir option to produce the location for the error message file. See Section 10.2, “Setting the Error Message Language”.

  • --lc-messages-dir=path

    Command-Line Format--lc-messages-dir=path
    Option-File Formatlc-messages-dir
    System Variable Namelc_messages_dir
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typedirectory name

    The directory where error messages are located. The value is used together with the value of --lc-messages to produce the location for the error message file. See Section 10.2, “Setting the Error Message Language”.

  • --log-error[=file_name]

    Command-Line Format--log-error[=name]
    Option-File Formatlog-error
    System Variable Namelog_error
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    Log errors and startup messages to this file. See Section 5.2.2, “The Error Log”. If you omit the file name, MySQL uses host_name.err. If the file name has no extension, the server adds an extension of .err.

  • --log-isam[=file_name]

    Command-Line Format--log-isam[=name]
    Option-File Formatlog-isam
     Permitted Values
    Typefile name

    Log all MyISAM changes to this file (used only when debugging MyISAM).

  • --log-output=value,...

    Command-Line Format--log-output=name
    Option-File Formatlog-output
    System Variable Namelog_output
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeset
    DefaultFILE
    Valid ValuesTABLE
    FILE
    NONE

    This option determines the destination for general query log and slow query log output. The option value can be given as one or more of the words TABLE, FILE, or NONE. TABLE selects logging to the general_log and slow_log tables in the mysql database as a destination. FILE selects logging to log files as a destination. NONE disables logging. If NONE is present in the option value, it takes precedence over any other words that are present. TABLE and FILE can both be given to select both log output destinations.

    This option selects log output destinations, but does not enable log output. To do that, use the --general_log and --slow_query_log options. For FILE logging, the --general_log_file and --slow_query_log_file options determine the log file location. For more information, see Section 5.2.1, “Selecting General Query and Slow Query Log Output Destinations”.
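
    For example, to enable both logs and send their output to both the log tables and log files, an option file might contain lines like these. This is only a sketch; the log files are created in the data directory unless you also set the corresponding log file options.

      [mysqld]
      log-output=TABLE,FILE
      general-log=1
      slow-query-log=1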

  • --log-queries-not-using-indexes

    Command-Line Format--log-queries-not-using-indexes
    Option-File Formatlog-queries-not-using-indexes
    System Variable Namelog_queries_not_using_indexes
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    If you are using this option with the slow query log enabled, queries that are expected to retrieve all rows are logged. See Section 5.2.5, “The Slow Query Log”. A query logged for this reason does not necessarily use no index at all. For example, a query that uses a full index scan uses an index but would be logged because the index would not limit the number of rows.

  • --log-raw

    Command-Line Format--log-raw[=value]
    Option-File Formatlog-raw
     Permitted Values
    Typeboolean
    DefaultOFF

    In MySQL 5.7, passwords in certain statements written to the general query log, slow query log, and binary log are rewritten by the server not to occur literally in plain text. Password rewriting can be suppressed for the general query log by starting the server with the --log-raw option. This option may be useful for diagnostic purposes, to see the exact text of statements as received by the server, but for security reasons is not recommended for production use.

    For more information, see Section 6.1.2.3, “Passwords and Logging”.

  • --log-short-format

    Command-Line Format--log-short-format
    Option-File Formatlog-short-format
     Permitted Values
    Typeboolean
    DefaultFALSE

    Log less information to the binary log and slow query log, if they have been activated.

  • --log-slow-admin-statements

    Removed5.7.1
    Command-Line Format--log-slow-admin-statements (through 5.7.0)
    Option-File Formatlog-slow-admin-statements
     Permitted Values
    Typeboolean
    DefaultOFF

    Include slow administrative statements in the statements written to the slow query log. Administrative statements include ALTER TABLE, ANALYZE TABLE, CHECK TABLE, CREATE INDEX, DROP INDEX, OPTIMIZE TABLE, and REPAIR TABLE.

    This command-line option was removed in MySQL 5.7.1 and replaced by the log_slow_admin_statements system variable. The system variable can be set on the command line or in option files the same way as the option, so there is no need for any changes at server startup, but the system variable also makes it possible to examine or set the value at runtime.

  • --log-tc=file_name

    Command-Line Format--log-tc=name
    Option-File Formatlog-tc
     Permitted Values
    Typefile name
    Defaulttc.log

    The name of the memory-mapped transaction coordinator log file (for XA transactions that affect multiple storage engines when the binary log is disabled). The default name is tc.log. The file is created under the data directory if not given as a full path name. Currently, this option is unused.

  • --log-tc-size=size

    Command-Line Format--log-tc-size=#
    Option-File Formatlog-tc-size
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default24576
    Max Value4294967295
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default24576
    Max Value18446744073709547520

    The size in bytes of the memory-mapped transaction coordinator log. The default size is 24KB.

  • --log-warnings[=level], -W [level]

    Command-Line Format--log-warnings[=#]
     -W [#]
    Option-File Formatlog-warnings[=#]
    System Variable Namelog_warnings
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default1
    Range0 .. 4294967295
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default1
    Range0 .. 18446744073709547520

    Print out warnings such as Aborted connection... to the error log. This option is enabled (1) by default. To disable it, use --log-warnings=0. Specifying the option without a level value increments the current value by 1. Enabling this option by setting it greater than 0 is recommended, for example, if you use replication (you get more information about what is happening, such as messages about network failures and reconnections). If the value is greater than 1, aborted connections are written to the error log, and access-denied errors for new connection attempts are written. See Section C.5.2.11, “Communication Errors and Aborted Connections”.

    If a slave server was started with --log-warnings enabled, the slave prints messages to the error log to provide information about its status, such as the binary log and relay log coordinates where it starts its job, when it is switching to another relay log, when it reconnects after a disconnect, and so forth. The server logs messages about statements that are unsafe for statement-based logging if --log-warnings is greater than 0.

  • --low-priority-updates

    Command-Line Format--low-priority-updates
    Option-File Formatlow-priority-updates
    System Variable Namelow_priority_updates
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultFALSE

    Give table-modifying operations (INSERT, REPLACE, DELETE, UPDATE) lower priority than selects. This can also be done using {INSERT | REPLACE | DELETE | UPDATE} LOW_PRIORITY ... to lower the priority of only one query, or by SET LOW_PRIORITY_UPDATES=1 to change the priority in one thread. This affects only storage engines that use only table-level locking (MyISAM, MEMORY, MERGE). See Section 8.10.2, “Table Locking Issues”.

  • --min-examined-row-limit=number

    Command-Line Format--min-examined-row-limit=#
    Option-File Formatmin-examined-row-limit
    System Variable Namemin_examined_row_limit
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default0
    Range0 .. 4294967295
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default0
    Range0 .. 18446744073709547520

    When this option is set, queries that examine fewer than number rows are not written to the slow query log. The default is 0.

  • --memlock

    Command-Line Format--memlock
    Option-File Formatmemlock
    System Variable Namelocked_in_memory
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typeboolean
    DefaultFALSE

    Lock the mysqld process in memory. This option might help if you have a problem where the operating system is causing mysqld to swap to disk.

    --memlock works on systems that support the mlockall() system call; this includes Solaris, most Linux distributions that use a 2.4 or newer kernel, and perhaps other Unix systems. On Linux systems, you can tell whether or not mlockall() (and thus this option) is supported by checking to see whether or not it is defined in the system mman.h file, like this:

    shell> grep mlockall /usr/include/sys/mman.h
    

    If mlockall() is supported, you should see in the output of the previous command something like the following:

    extern int mlockall (int __flags) __THROW;
    Important

    Use of this option may require you to run the server as root, which, for reasons of security, is normally not a good idea. See Section 6.1.5, “How to Run MySQL as a Normal User”.

    On Linux and perhaps other systems, you can avoid the need to run the server as root by changing the limits.conf file. See the notes regarding the memlock limit in Section 8.11.4.2, “Enabling Large Page Support”.

    You must not try to use this option on a system that does not support the mlockall() system call; if you do so, mysqld will very likely crash as soon as you try to start it.

  • --myisam-block-size=N

    Command-Line Format--myisam-block-size=#
    Option-File Formatmyisam-block-size
     Permitted Values
    Typenumeric
    Default1024
    Range1024 .. 16384

    The block size to be used for MyISAM index pages.

  • --myisam-recover-options[=option[,option]...]]

    Command-Line Format--myisam-recover-options[=name]
    Option-File Formatmyisam-recover-options
     Permitted Values
    Typeenumeration
    DefaultOFF
    Valid ValuesOFF
    DEFAULT
    BACKUP
    FORCE
    QUICK

    Set the MyISAM storage engine recovery mode. The option value is any combination of the values of OFF, DEFAULT, BACKUP, FORCE, or QUICK. If you specify multiple values, separate them by commas. Specifying the option with no argument is the same as specifying DEFAULT, and specifying with an explicit value of "" disables recovery (same as a value of OFF). If recovery is enabled, each time mysqld opens a MyISAM table, it checks whether the table is marked as crashed or was not closed properly. (The last option works only if you are running with external locking disabled.) If this is the case, mysqld runs a check on the table. If the table was corrupted, mysqld attempts to repair it.

    The following options affect how the repair works.

    Option    Description
    OFF       No recovery.
    DEFAULT   Recovery without backup, forcing, or quick checking.
    BACKUP    If the data file was changed during recovery, save a backup of the tbl_name.MYD file as tbl_name-datetime.BAK.
    FORCE     Run recovery even if we would lose more than one row from the .MYD file.
    QUICK     Do not check the rows in the table if there are not any delete blocks.

    Before the server automatically repairs a table, it writes a note about the repair to the error log. If you want to be able to recover from most problems without user intervention, you should use the options BACKUP,FORCE. This forces a repair of a table even if some rows would be deleted, but it keeps the old data file as a backup so that you can later examine what happened.
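
    For example, an option file line that follows this recommendation might look like this (a sketch only):

      [mysqld]
      myisam-recover-options=BACKUP,FORCE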

    See Section 14.3.1, “MyISAM Startup Options”.

  • --old-alter-table

    Command-Line Format--old-alter-table
    Option-File Formatold-alter-table
    System Variable Nameold_alter_table
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    When this option is given, the server does not use the optimized method of processing an ALTER TABLE operation. It reverts to using a temporary table, copying over the data, and then renaming the temporary table to the original, as used by MySQL 5.0 and earlier. For more information on the operation of ALTER TABLE, see Section 13.1.6, “ALTER TABLE Syntax”.

  • --old-style-user-limits

    Command-Line Format--old-style-user-limits
    Option-File Formatold-style-user-limits
     Permitted Values
    Typeboolean
    DefaultFALSE

    Enable old-style user limits. (Before MySQL 5.0.3, account resource limits were counted separately for each host from which a user connected rather than per account row in the user table.) See Section 6.3.4, “Setting Account Resource Limits”.

  • --open-files-limit=count

    Command-Line Format--open-files-limit=#
    Option-File Formatopen-files-limit
    System Variable Nameopen_files_limit
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typenumeric
    Default-1 (autosized)
    Range0 .. 65535

    Changes the number of file descriptors available to mysqld. You should try increasing the value of this option if mysqld gives you the error Too many open files. mysqld uses the option value to reserve descriptors with setrlimit(). If the requested number of file descriptors cannot be allocated, mysqld writes a warning to the error log.

    mysqld may attempt to allocate more than the requested number of descriptors (if they are available), using the values of max_connections and table_open_cache to estimate whether more descriptors will be needed.

    On Unix, the value cannot be set less than ulimit -n.

  • --partition[=value]

    Command-Line Format--partition
    Option-File Formatpartition
    Disabled byskip-partition
     Permitted Values
    Typeboolean
    DefaultON

    Enables or disables user-defined partitioning support in the MySQL Server.

  • --performance-schema-xxx

    Configure a Performance Schema option. For details, see Section 20.11, “Performance Schema Command Options”.

  • --pid-file=path

    Command-Line Format--pid-file=file_name
    Option-File Formatpid-file
    System Variable Namepid_file
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    The path name of the process ID file. The server creates the file in the data directory unless an absolute path name is given to specify a different directory. This file is used by other programs such as mysqld_safe to determine the server's process ID.

  • --plugin-xxx

    Specifies an option that pertains to a server plugin. For example, many storage engines can be built as plugins, and for such engines, options for them can be specified with a --plugin prefix. Thus, the --innodb_file_per_table option for InnoDB can be specified as --plugin-innodb_file_per_table.

    For boolean options that can be enabled or disabled, the --skip prefix and other alternative formats are supported as well (see Section 4.2.3.2, “Program Option Modifiers”). For example, --skip-plugin-innodb_file_per_table disables innodb_file_per_table.

    The rationale for the --plugin prefix is that it enables plugin options to be specified unambiguously if there is a name conflict with a built-in server option. For example, were a plugin writer to name a plugin sql and implement a mode option, the option name might be --sql-mode, which would conflict with the built-in option of the same name. In such cases, references to the conflicting name are resolved in favor of the built-in option. To avoid the ambiguity, users can specify the plugin option as --plugin-sql-mode. Use of the --plugin prefix for plugin options is recommended to avoid any question of ambiguity.

  • --plugin-load=plugin_list

    Command-Line Format--plugin-load=plugin_list
    Option-File Formatplugin-load
     Permitted Values
    Typestring

    This option tells the server to load the named plugins at startup. The option value is a semicolon-separated list of name=plugin_library pairs. Each name is the name of the plugin, and plugin_library is the name of the shared library that contains the plugin code. Each library file must be located in the directory named by the plugin_dir system variable. For example, if plugins named myplug1 and myplug2 have library files myplug1.so and myplug2.so, use this option to load them at startup:

    shell> mysqld --plugin-load="myplug1=myplug1.so;myplug2=myplug2.so"
    

    Quotes are used around the argument value here because semicolon (;) is interpreted as a special character by some command interpreters. (Unix shells treat it as a command terminator, for example.)

    If multiple --plugin-load options are given, only the last one is used. Additional plugins to load may be specified using --plugin-load-add options.

    If a plugin library is named without any preceding plugin name, the server loads all plugins in the library.

    Each plugin is loaded for a single invocation of mysqld only. After a restart, the plugin is not loaded unless --plugin-load is used again. This is in contrast to INSTALL PLUGIN, which adds an entry to the mysql.plugin table to cause the plugin to be loaded for every normal server startup.

    Under normal startup, the server determines which plugins to load by reading the mysql.plugin system table. If the server is started with the --skip-grant-tables option, it does not consult the mysql.plugin table and does not load plugins listed there. --plugin-load enables plugins to be loaded even when --skip-grant-tables is given. --plugin-load also enables plugins to be loaded at startup under configurations when plugins cannot be loaded at runtime.

    For additional information about plugin loading, see Section 5.1.8.1, “Installing and Uninstalling Plugins”.

  • --plugin-load-add=plugin_list

    Command-Line Format--plugin-load-add=plugin_list
    Option-File Formatplugin-load-add
     Permitted Values
    Typestring

    This option complements the --plugin-load option. --plugin-load-add adds a plugin or plugins to the set of plugins to be loaded at startup. The argument format is the same as for --plugin-load. --plugin-load-add can be used to avoid specifying a large set of plugins as a single long unwieldy --plugin-load argument.
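
    For example, rather than one long --plugin-load argument, an option file might list plugins individually. This sketch reuses the illustrative myplug1 and myplug2 libraries named earlier; repeated plugin-load-add instances accumulate.

      [mysqld]
      plugin-load-add=myplug1=myplug1.so
      plugin-load-add=myplug2=myplug2.so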

    --plugin-load-add can be given in the absence of --plugin-load, but any instance of --plugin-load-add that appears before --plugin-load has no effect because --plugin-load resets the set of plugins to load. In other words, these options:

    --plugin-load=x --plugin-load-add=y

    are equivalent to this option:

    --plugin-load="x;y"

    But these options:

    --plugin-load-add=y --plugin-load=x

    are equivalent to this option:

    --plugin-load=x

    For additional information about plugin loading, see Section 5.1.8.1, “Installing and Uninstalling Plugins”.

  • --port=port_num, -P port_num

    Command-Line Format--port=#
     -P
    Option-File Formatport
    System Variable Nameport
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typenumeric
    Default3306
    Range0 .. 65535

    The port number to use when listening for TCP/IP connections. The port number must be 1024 or higher unless the server is started by the root system user.

  • --port-open-timeout=num

    Command-Line Format--port-open-timeout=#
    Option-File Formatport-open-timeout
     Permitted Values
    Typenumeric
    Default0

    On some systems, when the server is stopped, the TCP/IP port might not become available immediately. If the server is restarted quickly afterward, its attempt to reopen the port can fail. This option indicates how many seconds the server should wait for the TCP/IP port to become free if it cannot be opened. The default is not to wait.

  • --remove [service_name]

    Command-Line Format--remove [service_name]

    (Windows only) Remove a MySQL Windows service. The default service name is MySQL if no service_name value is given. For more information, see Section 2.3.5.7, “Starting MySQL as a Windows Service”.

  • --safe-user-create

    Command-Line Format--safe-user-create
    Option-File Formatsafe-user-create
     Permitted Values
    Typeboolean
    DefaultFALSE

    If this option is enabled, a user cannot create new MySQL users by using the GRANT statement unless the user has the INSERT privilege for the mysql.user table or any column in the table. If you want a user to have the ability to create new users that have those privileges that the user has the right to grant, you should grant the user the following privilege:

    GRANT INSERT(user) ON mysql.user TO 'user_name'@'host_name';
    

    This ensures that the user cannot change any privilege columns directly, but has to use the GRANT statement to give privileges to other users.

  • --secure-auth

    Command-Line Format--secure-auth
    Option-File Formatsecure-auth
    System Variable Namesecure_auth
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultON

    This option causes the server to block connections by clients that attempt to use accounts that have passwords stored in the old (pre-4.1) format. Use it to prevent all use of passwords employing the old format (and hence insecure communication over the network). This option is enabled by default; to disable it, use --skip-secure-auth.

    Server startup fails with an error if this option is enabled and the privilege tables are in pre-4.1 format. See Section C.5.2.4, “Client does not support authentication protocol”.

    The mysql client also has a --secure-auth option, which prevents connections to a server if the server requires a password in old format for the client account.

    Note

    Passwords that use the pre-4.1 hashing method are less secure than passwords that use the native password hashing method and should be avoided. Pre-4.1 passwords are deprecated and support for them will be removed in a future MySQL release.

  • --secure-file-priv=path

    Command-Line Format--secure-file-priv=path
    Option-File Formatsecure-file-priv
    System Variable Namesecure_file_priv
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typestring

    This option restricts the LOAD_FILE() function and the LOAD DATA and SELECT ... INTO OUTFILE statements to work only with files in the specified directory.

  • --shared-memory

    System Variable Nameshared_memory
    Variable ScopeGlobal
    Dynamic VariableNo
    Platform Specificwindows

    Enable shared-memory connections by local clients. This option is available only on Windows.

  • --shared-memory-base-name=name

    System Variable Nameshared_memory_base_name
    Variable ScopeGlobal
    Dynamic VariableNo
    Platform Specificwindows

    The name of shared memory to use for shared-memory connections. This option is available only on Windows. The default name is MYSQL. The name is case sensitive.

  • --skip-concurrent-insert

    Turn off the ability to select and insert at the same time on MyISAM tables. (This is to be used only if you think you have found a bug in this feature.) See Section 8.10.3, “Concurrent Inserts”.

  • --skip-event-scheduler

    Command-Line Format--skip-event-scheduler
     --disable-event-scheduler
    Option-File Formatskip-event-scheduler

    Turns the Event Scheduler OFF. This is not the same as disabling the Event Scheduler, which requires setting --event-scheduler=DISABLED; see The --event-scheduler Option, for more information.

  • --skip-grant-tables

    This option causes the server to start without using the privilege system at all, which gives anyone with access to the server unrestricted access to all databases. You can cause a running server to start using the grant tables again by executing a mysqladmin flush-privileges or mysqladmin reload command from a system shell, or by issuing a MySQL FLUSH PRIVILEGES statement after connecting to the server. This option also suppresses loading of plugins that were installed with the INSTALL PLUGIN statement, user-defined functions (UDFs), and scheduled events. To cause plugins to be loaded anyway, use the --plugin-load option.

    Note that FLUSH PRIVILEGES might be executed implicitly by other actions performed after startup. For example, mysql_upgrade flushes the privileges during the upgrade procedure.

  • --skip-host-cache

    Disable use of the internal host cache for faster name-to-IP resolution. In this case, the server performs a DNS lookup every time a client connects. See Section 8.11.5.2, “DNS Lookup Optimization and the Host Cache”.

    Use of --skip-host-cache is similar to setting the host_cache_size system variable to 0, but host_cache_size is more flexible because it can also be used to resize, enable, or disable the host cache at runtime, not just at server startup.

    If you start the server with --skip-host-cache, that does not prevent changes to the value of host_cache_size, but such changes have no effect and the cache is not re-enabled even if host_cache_size is set larger than 0.
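
    For example, to disable the host cache on a running server rather than at startup, you might set the system variable instead (shown here only as a sketch):

      SET GLOBAL host_cache_size = 0;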

  • --skip-innodb

    Disable the InnoDB storage engine. In this case, because the default storage engine is InnoDB, the server will not start unless you also use --default-storage-engine and --default-tmp-storage-engine to set the default to some other engine for both permanent and TEMPORARY tables.
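
    For example, a server without InnoDB might be started like this. This is only a sketch; MyISAM is just one possible choice of replacement engine.

      shell> mysqld --skip-innodb --default-storage-engine=MyISAM --default-tmp-storage-engine=MyISAM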

  • --skip-name-resolve

    Do not resolve host names when checking client connections. Use only IP addresses. If you use this option, all Host column values in the grant tables must be IP addresses or localhost. See Section 8.11.5.2, “DNS Lookup Optimization and the Host Cache”.

  • --skip-networking

    Do not listen for TCP/IP connections at all. All interaction with mysqld must be made using named pipes or shared memory (on Windows) or Unix socket files (on Unix). This option is highly recommended for systems where only local clients are permitted. See Section 8.11.5.2, “DNS Lookup Optimization and the Host Cache”.

  • --skip-partition

    Command-Line Format--skip-partition
     --disable-partition
    Option-File Formatskip-partition

    Disables user-defined partitioning. Partitioned tables can be seen using SHOW TABLES or by querying the INFORMATION_SCHEMA.TABLES table, but cannot be created or modified, nor can data in such tables be accessed. All partition-specific columns in the INFORMATION_SCHEMA.PARTITIONS table display NULL.

    Since DROP TABLE removes table definition (.frm) files, this statement works on partitioned tables even when partitioning is disabled using the option. The statement, however, does not remove .par files associated with partitioned tables in such cases. For this reason, you should avoid dropping partitioned tables with partitioning disabled, or take action to remove the orphaned .par files manually.

  • --ssl*

    Options that begin with --ssl specify whether to permit clients to connect using SSL and indicate where to find SSL keys and certificates. See Section 6.3.9.4, “SSL Command Options”.

  • --standalone

    Command-Line Format--standalone
    Option-File Formatstandalone
    Platform Specificwindows

    Available on Windows only; instructs the MySQL server not to run as a service.

  • --super-large-pages

    Command-Line Format--super-large-pages
    Option-File Formatsuper-large-pages
    Platform Specificsolaris
     Permitted Values
    Type (solaris)boolean
    DefaultFALSE

    Standard use of large pages in MySQL attempts to use the largest size supported, up to 4MB. Under Solaris, a super large pages feature enables use of pages up to 256MB. This feature is available for recent SPARC platforms. It can be enabled or disabled by using the --super-large-pages or --skip-super-large-pages option.

  • --symbolic-links, --skip-symbolic-links

    Command-Line Format--symbolic-links
    Option-File Formatsymbolic-links

    Enable or disable symbolic link support. On Unix, enabling symbolic links means that you can link a MyISAM index file or data file to another directory with the INDEX DIRECTORY or DATA DIRECTORY options of the CREATE TABLE statement. If you delete or rename the table, the files that its symbolic links point to also are deleted or renamed. See Section 8.11.3.1.2, “Using Symbolic Links for MyISAM Tables on Unix”.

    This option has no meaning on Windows.

  • --skip-show-database

    Command-Line Format--skip-show-database
    Option-File Formatskip-show-database
    System Variable Nameskip_show_database
    Variable ScopeGlobal
    Dynamic VariableNo

    This option sets the skip_show_database system variable that controls who is permitted to use the SHOW DATABASES statement. See Section 5.1.4, “Server System Variables”.

  • --skip-stack-trace

    Command-Line Format--skip-stack-trace
    Option-File Formatskip-stack-trace

    Do not write stack traces. This option is useful when you are running mysqld under a debugger. On some systems, you also must use this option to get a core file. See Section 22.4, “Debugging and Porting MySQL”.

  • --slow-query-log[={0|1}]

    Command-Line Format--slow-query-log
    Option-File Formatslow-query-log
    System Variable Nameslow_query_log
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    Specify the initial slow query log state. With no argument or an argument of 1, the --slow-query-log option enables the log. With an argument of 0, or if the option is omitted entirely, the log is disabled.

  • --slow-start-timeout=timeout

    Command-Line Format--slow-start-timeout=#
    Option-File Formatslow-start-timeout
     Permitted Values
    Type (windows)numeric
    Default15000

    This option controls the Windows service control manager's service start timeout. The value is the maximum number of milliseconds that the service control manager waits before trying to kill the Windows service during startup. The default value is 15000 (15 seconds). If the MySQL service takes too long to start, you may need to increase this value. A value of 0 means there is no timeout.

  • --socket=path

    Command-Line Format--socket=name
    Option-File Formatsocket
    System Variable Namesocket
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name
    Default/tmp/mysql.sock

    On Unix, this option specifies the Unix socket file to use when listening for local connections. The default value is /tmp/mysql.sock. If this option is given, the server creates the file in the data directory unless an absolute path name is given to specify a different directory. On Windows, the option specifies the pipe name to use when listening for local connections that use a named pipe. The default value is MySQL (not case sensitive).

  • --sql-mode=value[,value[,value...]]

    Command-Line Format--sql-mode=name
    Option-File Formatsql-mode
    System Variable Namesql_mode
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeset
    DefaultNO_ENGINE_SUBSTITUTION
    Valid ValuesALLOW_INVALID_DATES
    ANSI_QUOTES
    ERROR_FOR_DIVISION_BY_ZERO
    HIGH_NOT_PRECEDENCE
    IGNORE_SPACE
    NO_AUTO_CREATE_USER
    NO_AUTO_VALUE_ON_ZERO
    NO_BACKSLASH_ESCAPES
    NO_DIR_IN_CREATE
    NO_ENGINE_SUBSTITUTION
    NO_FIELD_OPTIONS
    NO_KEY_OPTIONS
    NO_TABLE_OPTIONS
    NO_UNSIGNED_SUBTRACTION
    NO_ZERO_DATE
    NO_ZERO_IN_DATE
    ONLY_FULL_GROUP_BY
    PAD_CHAR_TO_FULL_LENGTH
    PIPES_AS_CONCAT
    REAL_AS_FLOAT
    STRICT_ALL_TABLES
    STRICT_TRANS_TABLES

    Set the SQL mode. See Section 5.1.7, “Server SQL Modes”.
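
    For example, to run the server in strict mode for transactional tables, an option file might contain a line like the following. This is only a sketch; choose the modes appropriate for your applications.

      [mysqld]
      sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"

    The current setting can be checked at runtime with SELECT @@GLOBAL.sql_mode.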

    Note

    MySQL installation programs may configure the SQL mode during the installation process. For example, mysql_install_db creates a default option file named my.cnf in the base installation directory. This file contains a line that sets the SQL mode; see Section 4.4.3, “mysql_install_db — Initialize MySQL Data Directory”.

    If the SQL mode differs from the default or from what you expect, check for a setting in an option file that the server reads at startup.

  • --sysdate-is-now

    Command-Line Format--sysdate-is-now
    Option-File Formatsysdate-is-now
     Permitted Values
    Typeboolean
    DefaultFALSE

    SYSDATE() by default returns the time at which it executes, not the time at which the statement in which it occurs begins executing. This differs from the behavior of NOW(). This option causes SYSDATE() to be an alias for NOW(). For information about the implications for binary logging and replication, see the description for SYSDATE() in Section 12.7, “Date and Time Functions” and for SET TIMESTAMP in Section 5.1.4, “Server System Variables”.

  • --tc-heuristic-recover={COMMIT|ROLLBACK}

    Command-Line Format--tc-heuristic-recover=name
    Option-File Formattc-heuristic-recover
     Permitted Values
    Typeenumeration
    Valid ValuesCOMMIT
    ROLLBACK

    The type of decision to use in the heuristic recovery process. Currently, this option is unused.

  • --temp-pool

    Command-Line Format--temp-pool
    Option-File Formattemp-pool
     Permitted Values
    Typeboolean
    DefaultTRUE

    This option causes most temporary files created by the server to use a small set of names, rather than a unique name for each new file. This works around a problem in the Linux kernel dealing with creating many new files with different names. With the old behavior, Linux seems to leak memory, because it is being allocated to the directory entry cache rather than to the disk cache. This option is ignored except on Linux.

  • --transaction-isolation=level

    Command-Line Format--transaction-isolation=name
    Option-File Formattransaction-isolation
     Permitted Values
    Typeenumeration
    Valid ValuesREAD-UNCOMMITTED
    READ-COMMITTED
    REPEATABLE-READ
    SERIALIZABLE

    Sets the default transaction isolation level. The level value can be READ-UNCOMMITTED, READ-COMMITTED, REPEATABLE-READ, or SERIALIZABLE. See Section 13.3.6, “SET TRANSACTION Syntax”.

    The default transaction isolation level can also be set at runtime using the SET TRANSACTION statement or by setting the tx_isolation system variable.
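
    For example, to change the global default at runtime rather than at startup, you might use either of the following equivalent statements (shown only as a sketch):

      SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
      SET GLOBAL tx_isolation = 'READ-COMMITTED';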

  • --transaction-read-only

    Command-Line Format--transaction-read-only
    Option-File Formattransaction-read-only
     Permitted Values
    Typeboolean
    DefaultOFF

    Sets the default transaction access mode. By default, read-only mode is disabled, so the mode is read/write.

    To set the default transaction access mode at runtime, use the SET TRANSACTION statement or set the tx_read_only system variable. See Section 13.3.6, “SET TRANSACTION Syntax”.

  • --tmpdir=path, -t path

    Command-Line Format--tmpdir=path
     -t
    Option-File Formattmpdir
    System Variable Nametmpdir
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    The path of the directory to use for creating temporary files. It might be useful if your default /tmp directory resides on a partition that is too small to hold temporary tables. This option accepts several paths that are used in round-robin fashion. Paths should be separated by colon characters (:) on Unix and semicolon characters (;) on Windows. If the MySQL server is acting as a replication slave, you should not set --tmpdir to point to a directory on a memory-based file system or to a directory that is cleared when the server host restarts. For more information about the storage location of temporary files, see Section C.5.4.4, “Where MySQL Stores Temporary Files”. A replication slave needs some of its temporary files to survive a machine restart so that it can replicate temporary tables or LOAD DATA INFILE operations. If files in the temporary file directory are lost when the server restarts, replication fails.
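
    For example, to spread temporary files across two file systems on a Unix host, an option file might contain a line like the following. The paths are illustrative only; for a replication slave, use directories that survive a machine restart.

      [mysqld]
      tmpdir=/disk1/mysqltmp:/disk2/mysqltmp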

  • --user={user_name|user_id}, -u {user_name|user_id}

    Command-Line Format--user=name
     -u name
    Option-File Formatuser
     Permitted Values
    Typestring

    Run the mysqld server as the user having the name user_name or the numeric user ID user_id. (User in this context refers to a system login account, not a MySQL user listed in the grant tables.)

    This option is mandatory when starting mysqld as root. The server changes its user ID during its startup sequence, causing it to run as that particular user rather than as root. See Section 6.1.1, “Security Guidelines”.

    To avoid a possible security hole where a user adds a --user=root option to a my.cnf file (thus causing the server to run as root), mysqld uses only the first --user option specified and produces a warning if there are multiple --user options. Options in /etc/my.cnf and $MYSQL_HOME/my.cnf are processed before command-line options, so it is recommended that you put a --user option in /etc/my.cnf and specify a value other than root. The option in /etc/my.cnf is found before any other --user options, which ensures that the server runs as a user other than root, and that a warning results if any other --user option is found.
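
    For example, following this recommendation, /etc/my.cnf might contain the lines below. The mysql login account is the conventional choice, but any non-root system account that owns the data directory works; treat the account name here as an assumption to adapt.

      [mysqld]
      user=mysql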

  • --verbose, -v

    Use this option with the --help option for detailed help.

  • --version, -V

    Display version information and exit.

You can assign a value to a server system variable by using an option of the form --var_name=value. For example, --key_buffer_size=32M sets the key_buffer_size variable to a value of 32MB.

Note that when you assign a value to a variable, MySQL might automatically correct the value to stay within a given range, or adjust the value to the closest permissible value if only certain values are permitted.

If you want to restrict the maximum value to which a variable can be set at runtime with SET, you can define this by using the --maximum-var_name=value command-line option.
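
For example, to prevent the query cache size from being set larger than 4MB at runtime, you might start the server like this (a sketch; the same --maximum- prefix can be applied to other numeric system variables):

    mysqld --maximum-query_cache_size=4M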

You can change the values of most system variables for a running server with the SET statement. See Section 13.7.4, “SET Syntax”.

Section 5.1.4, “Server System Variables”, provides a full description for all variables, and additional information for setting them at server startup and runtime. Section 8.11.2, “Tuning Server Parameters”, includes information on optimizing the server by tuning system variables.

5.1.4. Server System Variables

The MySQL server maintains many system variables that indicate how it is configured. Each system variable has a default value. System variables can be set at server startup using options on the command line or in an option file. Most of them can be changed dynamically while the server is running by means of the SET statement, which enables you to modify operation of the server without having to stop and restart it. You can refer to system variable values in expressions.

There are several ways to see the names and values of system variables:

  • To see the values that a server will use based on its compiled-in defaults and any option files that it reads, use this command:

    mysqld --verbose --help
  • To see the values that a server will use based on its compiled-in defaults, ignoring the settings in any option files, use this command:

    mysqld --no-defaults --verbose --help
  • To see the current values used by a running server, use the SHOW VARIABLES statement.
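
    For example, to display only the variables whose names match a pattern (the pattern shown is just an illustration):

    SHOW VARIABLES LIKE 'innodb_buffer%';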

This section provides a description of each system variable. Variables with no version indicated are present in all MySQL 5.7 releases. For historical information concerning their implementation, please see http://dev.mysql.com/doc/refman/5.0/en/, and http://dev.mysql.com/doc/refman/4.1/en/.

The following table lists all available system variables.

Table 5.2. System Variable Summary

NameCmd-LineOption fileSystem VarVar ScopeDynamic
auto_increment_increment  YesBothYes
auto_increment_offset  YesBothYes
autocommitYesYesYesBothYes
automatic_sp_privileges  YesGlobalYes
back_log  YesGlobalNo
basedirYesYesYesGlobalNo
bind-addressYesYes  No
- Variable: bind_address  YesGlobalNo
binlog_cache_sizeYesYesYesGlobalYes
binlog_checksum  YesGlobalYes
binlog_direct_non_transactional_updatesYesYesYesBothYes
binlog-formatYesYes  Yes
- Variable: binlog_format  YesBothYes
binlog_max_flush_queue_time  YesGlobalYes
binlog_order_commits  YesGlobalYes
binlog_row_imageYesYesYesBothYes
binlog_rows_query_log_events  YesBothYes
binlog_stmt_cache_sizeYesYesYesGlobalYes
bulk_insert_buffer_sizeYesYesYesBothYes
character_set_client  YesBothYes
character_set_connection  YesBothYes
character_set_database[a]  YesBothYes
character-set-filesystemYesYes  Yes
- Variable: character_set_filesystem  YesBothYes
character_set_results  YesBothYes
character-set-serverYesYes  Yes
- Variable: character_set_server  YesBothYes
character_set_system  YesGlobalNo
character-sets-dirYesYes  No
- Variable: character_sets_dir  YesGlobalNo
collation_connection  YesBothYes
collation_database[b]  YesBothYes
collation-serverYesYes  Yes
- Variable: collation_server  YesBothYes
completion_typeYesYesYesBothYes
concurrent_insertYesYesYesGlobalYes
connect_timeoutYesYesYesGlobalYes
core_file  YesGlobalNo
daemon_memcached_enable_binlogYesYesYesGlobalNo
daemon_memcached_engine_lib_nameYesYesYesGlobalNo
daemon_memcached_engine_lib_pathYesYesYesGlobalNo
daemon_memcached_optionYesYesYesGlobalNo
daemon_memcached_r_batch_sizeYesYesYesGlobalNo
daemon_memcached_w_batch_sizeYesYesYesGlobalNo
datadirYesYesYesGlobalNo
date_format  YesGlobalNo
datetime_format  YesGlobalNo
debugYesYesYesBothYes
debug_sync  YesSessionYes
default-storage-engineYesYes  Yes
- Variable: default_storage_engine  YesBothYes
default_tmp_storage_engineYesYesYesBothYes
default_week_formatYesYesYesBothYes
delay-key-writeYesYes  Yes
- Variable: delay_key_write  YesGlobalYes
delayed_insert_limitYesYesYesGlobalYes
delayed_insert_timeoutYesYesYesGlobalYes
delayed_queue_sizeYesYesYesGlobalYes
disconnect_on_expired_passwordYesYesYesSessionNo
div_precision_incrementYesYesYesBothYes
end_markers_in_json  YesBothYes
enforce_gtid_consistencyYesYesYesGlobalNo
enforce-gtid-consistencyYesYesYesGlobalNo
eq_range_index_dive_limit  YesBothYes
error_count  YesSessionNo
event-schedulerYesYes  Yes
- Variable: event_scheduler  YesGlobalYes
expire_logs_daysYesYesYesGlobalYes
explicit_defaults_for_timestampYesYesYesSessionNo
external_user  YesSessionNo
flushYesYesYesGlobalYes
flush_timeYesYesYesGlobalYes
foreign_key_checks  YesBothYes
ft_boolean_syntaxYesYesYesGlobalYes
ft_max_word_lenYesYesYesGlobalNo
ft_min_word_lenYesYesYesGlobalNo
ft_query_expansion_limitYesYesYesGlobalNo
ft_stopword_fileYesYesYesGlobalNo
general-logYesYes  Yes
- Variable: general_log  YesGlobalYes
general_log_fileYesYesYesGlobalYes
group_concat_max_lenYesYesYesBothYes
gtid_executed  YesBothNo
gtid_mode  YesGlobalNo
gtid-modeYesYes  No
- Variable: gtid_mode  YesGlobalNo
gtid_next  YesSessionYes
gtid_owned  YesBothNo
gtid_purged  YesGlobalYes
have_compress  YesGlobalNo
have_crypt  YesGlobalNo
have_dynamic_loading  YesGlobalNo
have_geometry  YesGlobalNo
have_openssl  YesGlobalNo
have_profiling  YesGlobalNo
have_query_cache  YesGlobalNo
have_rtree_keys  YesGlobalNo
have_ssl  YesGlobalNo
have_symlink  YesGlobalNo
host_cache_size  YesGlobalYes
hostname  YesGlobalNo
identity  YesSessionYes
ignore-builtin-innodbYesYes  No
- Variable: ignore_builtin_innodb  YesGlobalNo
ignore_db_dirs  YesGlobalNo
init_connectYesYesYesGlobalYes
init-fileYesYes  No
- Variable: init_file  YesGlobalNo
init_slaveYesYesYesGlobalYes
innodb_adaptive_flushingYesYesYesGlobalYes
innodb_adaptive_flushing_lwmYesYesYesGlobalYes
innodb_adaptive_hash_indexYesYesYesGlobalYes
innodb_adaptive_max_sleep_delayYesYesYesGlobalYes
innodb_additional_mem_pool_sizeYesYesYesGlobalNo
innodb_api_bk_commit_intervalYesYesYesGlobalYes
innodb_api_disable_rowlockYesYesYesGlobalNo
innodb_api_enable_binlogYesYesYesGlobalNo
innodb_api_enable_mdlYesYesYesGlobalNo
innodb_api_trx_levelYesYesYesGlobalYes
innodb_autoextend_incrementYesYesYesGlobalYes
innodb_autoinc_lock_modeYesYesYesGlobalNo
innodb_buffer_pool_dump_at_shutdownYesYesYesGlobalYes
innodb_buffer_pool_dump_nowYesYesYesGlobalYes
innodb_buffer_pool_dump_pctYesYesYesGlobalYes
innodb_buffer_pool_filenameYesYesYesGlobalYes
innodb_buffer_pool_instancesYesYesYesGlobalNo
innodb_buffer_pool_load_abortYesYesYesGlobalYes
innodb_buffer_pool_load_at_startupYesYesYesGlobalNo
innodb_buffer_pool_load_nowYesYesYesGlobalYes
innodb_buffer_pool_sizeYesYesYesGlobalNo
innodb_change_buffer_max_sizeYesYesYesGlobalYes
innodb_change_bufferingYesYesYesGlobalYes
innodb_checksum_algorithmYesYesYesGlobalYes
innodb_checksumsYesYesYesGlobalNo
innodb_cmp_per_index_enabledYesYesYesGlobalYes
innodb_commit_concurrencyYesYesYesGlobalYes
innodb_compression_failure_threshold_pctYesYesYesGlobalYes
innodb_compression_levelYesYesYesGlobalYes
innodb_compression_pad_pct_maxYesYesYesGlobalYes
innodb_concurrency_ticketsYesYesYesGlobalYes
innodb_data_file_pathYesYesYesGlobalNo
innodb_data_home_dirYesYesYesGlobalNo
innodb_disable_sort_file_cacheYesYesYesGlobalYes
innodb_doublewriteYesYesYesGlobalNo
innodb_fast_shutdownYesYesYesGlobalYes
innodb_file_formatYesYesYesGlobalYes
innodb_file_format_checkYesYesYesGlobalNo
innodb_file_format_maxYesYesYesGlobalYes
innodb_file_per_tableYesYesYesGlobalYes
innodb_flush_log_at_timeout  YesGlobalYes
innodb_flush_log_at_trx_commitYesYesYesGlobalYes
innodb_flush_methodYesYesYesGlobalNo
innodb_flush_neighborsYesYesYesGlobalYes
innodb_flushing_avg_loopsYesYesYesGlobalYes
innodb_force_load_corruptedYesYesYesGlobalNo
innodb_force_recoveryYesYesYesGlobalNo
innodb_ft_aux_tableYesYesYesGlobalYes
innodb_ft_cache_sizeYesYesYesGlobalNo
innodb_ft_enable_diag_printYesYesYesGlobalYes
innodb_ft_enable_stopwordYesYesYesGlobalYes
innodb_ft_max_token_sizeYesYesYesGlobalNo
innodb_ft_min_token_sizeYesYesYesGlobalNo
innodb_ft_num_word_optimizeYesYesYesGlobalYes
innodb_ft_server_stopword_tableYesYesYesGlobalYes
innodb_ft_sort_pll_degreeYesYesYesGlobalNo
innodb_ft_user_stopword_tableYesYesYesBothYes
innodb_io_capacityYesYesYesGlobalYes
innodb_io_capacity_maxYesYesYesGlobalYes
innodb_large_prefixYesYesYesGlobalYes
innodb_lock_wait_timeoutYesYesYesBothYes
innodb_locks_unsafe_for_binlogYesYesYesGlobalNo
innodb_log_buffer_sizeYesYesYesGlobalNo
innodb_log_compressed_pagesYesYesYesGlobalYes
innodb_log_file_sizeYesYesYesGlobalNo
innodb_log_files_in_groupYesYesYesGlobalNo
innodb_log_group_home_dirYesYesYesGlobalNo
innodb_lru_scan_depthYesYesYesGlobalYes
innodb_max_dirty_pages_pctYesYesYesGlobalYes
innodb_max_dirty_pages_pct_lwmYesYesYesGlobalYes
innodb_max_purge_lagYesYesYesGlobalYes
innodb_max_purge_lag_delayYesYesYesGlobalYes
innodb_monitor_disableYesYesYesGlobalYes
innodb_monitor_enableYesYesYesGlobalYes
innodb_monitor_resetYesYesYesGlobalYes
innodb_monitor_reset_allYesYesYesGlobalYes
innodb_old_blocks_pctYesYesYesGlobalYes
innodb_old_blocks_timeYesYesYesGlobalYes
innodb_online_alter_log_max_sizeYesYesYesGlobalYes
innodb_open_filesYesYesYesGlobalNo
innodb_optimize_fulltext_onlyYesYesYesGlobalYes
innodb_page_sizeYesYesYesGlobalNo
innodb_print_all_deadlocksYesYesYesGlobalYes
innodb_purge_batch_sizeYesYesYesGlobalYes
innodb_purge_threadsYesYesYesGlobalNo
innodb_random_read_aheadYesYesYesGlobalYes
innodb_read_ahead_thresholdYesYesYesGlobalYes
innodb_read_io_threadsYesYesYesGlobalNo
innodb_read_onlyYesYesYesGlobalNo
innodb_replication_delayYesYesYesGlobalYes
innodb_rollback_on_timeoutYesYesYesGlobalNo
innodb_rollback_segmentsYesYesYesGlobalYes
innodb_sort_buffer_sizeYesYesYesGlobalNo
innodb_spin_wait_delayYesYesYesGlobalYes
innodb_stats_auto_recalcYesYesYesGlobalYes
innodb_stats_methodYesYesYesGlobalYes
innodb_stats_on_metadataYesYesYesGlobalYes
innodb_stats_persistentYesYesYesGlobalYes
innodb_stats_persistent_sample_pagesYesYesYesGlobalYes
innodb_stats_sample_pagesYesYesYesGlobalYes
innodb_stats_transient_sample_pagesYesYesYesGlobalYes
innodb_strict_modeYesYesYesBothYes
innodb_support_xaYesYesYesBothYes
innodb_sync_array_sizeYesYesYesGlobalNo
innodb_sync_spin_loopsYesYesYesGlobalYes
innodb_table_locksYesYesYesBothYes
innodb_temp_data_file_pathYesYesYesGlobalNo
innodb_thread_concurrencyYesYesYesGlobalYes
innodb_thread_sleep_delayYesYesYesGlobalYes
innodb_undo_directoryYesYesYesGlobalNo
innodb_undo_logsYesYesYesGlobalYes
innodb_undo_tablespacesYesYesYesGlobalNo
innodb_use_native_aioYesYesYesGlobalNo
innodb_use_sys_mallocYesYesYesGlobalNo
innodb_version  YesGlobalNo
innodb_write_io_threadsYesYesYesGlobalNo
insert_id  YesSessionYes
interactive_timeoutYesYesYesBothYes
join_buffer_sizeYesYesYesBothYes
keep_files_on_createYesYesYesBothYes
key_buffer_sizeYesYesYesGlobalYes
key_cache_age_thresholdYesYesYesGlobalYes
key_cache_block_sizeYesYesYesGlobalYes
key_cache_division_limitYesYesYesGlobalYes
languageYesYesYesGlobalNo
large_files_support  YesGlobalNo
large_page_size  YesGlobalNo
large-pagesYesYes  No
- Variable: large_pages  YesGlobalNo
last_insert_id  YesSessionYes
lc-messagesYesYes  Yes
- Variable: lc_messages  YesBothYes
lc-messages-dirYesYes  No
- Variable: lc_messages_dir  YesGlobalNo
lc_time_names  YesBothYes
license  YesGlobalNo
local_infile  YesGlobalYes
lock_wait_timeoutYesYesYesBothYes
locked_in_memory  YesGlobalNo
log_bin  YesGlobalNo
log-binYesYesYesGlobalNo
log_bin_basename  YesGlobalNo
log_bin_index  YesGlobalNo
log_bin_use_v1_row_eventsYesYesYesGlobalNo
log-bin-use-v1-row-eventsYesYes  No
- Variable: log_bin_use_v1_row_events  YesGlobalNo
log-errorYesYes  No
- Variable: log_error  YesGlobalNo
log-outputYesYes  Yes
- Variable: log_output  YesGlobalYes
log-queries-not-using-indexesYesYes  Yes
- Variable: log_queries_not_using_indexes  YesGlobalYes
log-slave-updatesYesYes  No
- Variable: log_slave_updates  YesGlobalNo
log_slave_updatesYesYesYesGlobalNo
log_slow_admin_statements  YesGlobalYes
log_slow_slave_statements  YesGlobalYes
log_throttle_queries_not_using_indexes  YesGlobalYes
log-warningsYesYes  Yes
- Variable: log_warnings  YesGlobalYes
long_query_timeYesYesYesBothYes
low-priority-updatesYesYes  Yes
- Variable: low_priority_updates  YesBothYes
lower_case_file_system  YesGlobalNo
lower_case_table_namesYesYesYesGlobalNo
master_info_repository  YesGlobalYes
master_verify_checksum  YesGlobalYes
max_allowed_packetYesYesYesGlobalYes
max_binlog_cache_sizeYesYesYesGlobalYes
max_binlog_sizeYesYesYesGlobalYes
max_binlog_stmt_cache_sizeYesYesYesGlobalYes
max_connect_errorsYesYesYesGlobalYes
max_connectionsYesYesYesGlobalYes
max_delayed_threadsYesYesYesBothYes
max_error_countYesYesYesBothYes
max_heap_table_sizeYesYesYesBothYes
max_insert_delayed_threads  YesBothYes
max_join_sizeYesYesYesBothYes
max_length_for_sort_dataYesYesYesBothYes
max_prepared_stmt_countYesYesYesGlobalYes
max_relay_log_sizeYesYesYesGlobalYes
max_seeks_for_keyYesYesYesBothYes
max_sort_lengthYesYesYesBothYes
max_sp_recursion_depthYesYesYesBothYes
max_user_connectionsYesYesYesBothYes
max_write_lock_countYesYesYesGlobalYes
memlockYesYesYesGlobalNo
metadata_locks_cache_size  YesGlobalNo
metadata_locks_hash_instances  YesGlobalNo
min-examined-row-limitYesYesYesBothYes
myisam_data_pointer_sizeYesYesYesGlobalYes
myisam_max_sort_file_sizeYesYesYesGlobalYes
myisam_mmap_sizeYesYesYesGlobalNo
myisam_recover_options  YesGlobalNo
myisam_repair_threadsYesYesYesBothYes
myisam_sort_buffer_sizeYesYesYesBothYes
myisam_stats_methodYesYesYesBothYes
myisam_use_mmapYesYesYesGlobalYes
named_pipe  YesGlobalNo
net_buffer_lengthYesYesYesBothYes
net_read_timeoutYesYesYesBothYes
net_retry_countYesYesYesBothYes
net_write_timeoutYesYesYesBothYes
newYesYesYesBothYes
oldYesYesYesGlobalNo
old-alter-tableYesYes  Yes
- Variable: old_alter_table  YesBothYes
old_passwords  YesBothYes
open-files-limitYesYes  No
- Variable: open_files_limit  YesGlobalNo
optimizer_prune_levelYesYesYesBothYes
optimizer_search_depthYesYesYesBothYes
optimizer_switchYesYesYesBothYes
optimizer_trace  YesBothYes
optimizer_trace_features  YesBothYes
optimizer_trace_limit  YesBothYes
optimizer_trace_max_mem_size  YesBothYes
optimizer_trace_offset  YesBothYes
performance_schemaYesYesYesGlobalNo
performance_schema_accounts_sizeYesYesYesGlobalNo
performance_schema_digests_sizeYesYesYesGlobalNo
performance_schema_events_stages_history_long_sizeYesYesYesGlobalNo
performance_schema_events_stages_history_sizeYesYesYesGlobalNo
performance_schema_events_statements_history_long_sizeYesYesYesGlobalNo
performance_schema_events_statements_history_sizeYesYesYesGlobalNo
performance_schema_events_waits_history_long_sizeYesYesYesGlobalNo
performance_schema_events_waits_history_sizeYesYesYesGlobalNo
performance_schema_hosts_sizeYesYesYesGlobalNo
performance_schema_max_cond_classesYesYesYesGlobalNo
performance_schema_max_cond_instancesYesYesYesGlobalNo
performance_schema_max_file_classesYesYesYesGlobalNo
performance_schema_max_file_handlesYesYesYesGlobalNo
performance_schema_max_file_instancesYesYesYesGlobalNo
performance_schema_max_memory_classesYesYesYesGlobalNo
performance_schema_max_mutex_classesYesYesYesGlobalNo
performance_schema_max_mutex_instancesYesYesYesGlobalNo
performance_schema_max_program_instancesYesYesYesGlobalNo
performance_schema_max_rwlock_classesYesYesYesGlobalNo
performance_schema_max_rwlock_instancesYesYesYesGlobalNo
performance_schema_max_socket_classesYesYesYesGlobalNo
performance_schema_max_socket_instancesYesYesYesGlobalNo
performance_schema_max_stage_classesYesYesYesGlobalNo
performance_schema_max_statement_classesYesYesYesGlobalNo
performance_schema_max_statement_stackYesYesYesGlobalNo
performance_schema_max_table_handlesYesYesYesGlobalNo
performance_schema_max_table_instancesYesYesYesGlobalNo
performance_schema_max_thread_classesYesYesYesGlobalNo
performance_schema_max_thread_instancesYesYesYesGlobalNo
performance_schema_session_connect_attrs_sizeYesYesYesGlobalNo
performance_schema_setup_actors_sizeYesYesYesGlobalNo
performance_schema_setup_objects_sizeYesYesYesGlobalNo
performance_schema_users_sizeYesYesYesGlobalNo
pid-fileYesYes  No
- Variable: pid_file  YesGlobalNo
plugin_dirYesYesYesGlobalNo
portYesYesYesGlobalNo
preload_buffer_sizeYesYesYesBothYes
profiling  YesBothYes
profiling_history_sizeYesYesYesBothYes
protocol_version  YesGlobalNo
proxy_user  YesSessionNo
pseudo_slave_mode  YesSessionYes
pseudo_thread_id  YesSessionYes
query_alloc_block_sizeYesYesYesBothYes
query_cache_limitYesYesYesGlobalYes
query_cache_min_res_unitYesYesYesGlobalYes
query_cache_sizeYesYesYesGlobalYes
query_cache_typeYesYesYesBothYes
query_cache_wlock_invalidateYesYesYesBothYes
query_prealloc_sizeYesYesYesBothYes
rand_seed1  YesSessionYes
rand_seed2  YesSessionYes
range_alloc_block_sizeYesYesYesBothYes
read_buffer_sizeYesYesYesBothYes
read_onlyYesYesYesGlobalYes
read_rnd_buffer_sizeYesYesYesBothYes
relay-logYesYes  No
- Variable: relay_log  YesGlobalNo
relay_log_basename  YesGlobalNo
relay-log-indexYesYes  No
- Variable: relay_log_index  YesGlobalNo
relay_log_indexYesYesYesGlobalNo
relay_log_info_fileYesYesYesGlobalNo
relay_log_info_repository  YesGlobalYes
relay_log_purgeYesYesYesGlobalYes
relay_log_recoveryYesYesYesGlobalYes
relay_log_space_limitYesYesYesGlobalNo
report-hostYesYes  No
- Variable: report_host  YesGlobalNo
report-passwordYesYes  No
- Variable: report_password  YesGlobalNo
report-portYesYes  No
- Variable: report_port  YesGlobalNo
report-userYesYes  No
- Variable: report_user  YesGlobalNo
rpl_semi_sync_master_enabled  YesGlobalYes
rpl_semi_sync_master_timeout  YesGlobalYes
rpl_semi_sync_master_trace_level  YesGlobalYes
rpl_semi_sync_master_wait_no_slave  YesGlobalYes
rpl_semi_sync_master_wait_point  YesGlobalYes
rpl_semi_sync_slave_enabled  YesGlobalYes
rpl_semi_sync_slave_trace_level  YesGlobalYes
rpl_stop_slave_timeoutYesYesYesGlobalYes
secure-authYesYes  Yes
- Variable: secure_auth  YesGlobalYes
secure-file-privYesYes  No
- Variable: secure_file_priv  YesGlobalNo
server-idYesYes  Yes
- Variable: server_id  YesGlobalYes
server_uuid  YesGlobalNo
sha256_password_private_key_path  YesGlobalNo
sha256_password_public_key_path  YesGlobalNo
shared_memory  YesGlobalNo
shared_memory_base_name  YesGlobalNo
skip_external_lockingYesYesYesGlobalNo
skip-name-resolveYesYes  No
- Variable: skip_name_resolve  YesGlobalNo
skip-networkingYesYes  No
- Variable: skip_networking  YesGlobalNo
skip-show-databaseYesYes  No
- Variable: skip_show_database  YesGlobalNo
slave_allow_batchingYesYesYesGlobalYes
slave_checkpoint_groupYesYesYesGlobalYes
slave_checkpoint_periodYesYesYesGlobalYes
slave_compressed_protocolYesYesYesGlobalYes
slave_exec_modeYesYesYesGlobalYes
slave-load-tmpdirYesYes  No
- Variable: slave_load_tmpdir  YesGlobalNo
slave_max_allowed_packet  YesGlobalYes
slave-net-timeoutYesYes  Yes
- Variable: slave_net_timeout  YesGlobalYes
slave_parallel_workers  YesGlobalYes
slave_pending_jobs_size_max  YesGlobalYes
slave_rows_search_algorithms  YesGlobalYes
slave-skip-errorsYesYes  No
- Variable: slave_skip_errors  YesGlobalNo
slave_sql_verify_checksum  YesGlobalYes
slave_transaction_retriesYesYesYesGlobalYes
slave_type_conversionsYesYesYesGlobalNo
slow_launch_timeYesYesYesGlobalYes
slow-query-logYesYes  Yes
- Variable: slow_query_log  YesGlobalYes
slow_query_log_fileYesYesYesGlobalYes
socketYesYesYesGlobalNo
sort_buffer_sizeYesYesYesBothYes
sql_auto_is_null  YesBothYes
sql_big_selects  YesBothYes
sql_big_tables  YesBothYes
sql_buffer_result  YesBothYes
sql_log_bin  YesBothYes
sql_log_off  YesBothYes
sql-modeYesYes  Yes
- Variable: sql_mode  YesBothYes
sql_notes  YesBothYes
sql_quote_show_create  YesBothYes
sql_safe_updates  YesBothYes
sql_select_limit  YesBothYes
sql_slave_skip_counter  YesGlobalYes
sql_warnings  YesBothYes
ssl-caYesYes  No
- Variable: ssl_ca  YesGlobalNo
ssl-capathYesYes  No
- Variable: ssl_capath  YesGlobalNo
ssl-certYesYes  No
- Variable: ssl_cert  YesGlobalNo
ssl-cipherYesYes  No
- Variable: ssl_cipher  YesGlobalNo
ssl-crlYesYes  No
- Variable: ssl_crl  YesGlobalNo
ssl-crlpathYesYes  No
- Variable: ssl_crlpath  YesGlobalNo
ssl-keyYesYes  No
- Variable: ssl_key  YesGlobalNo
storage_engine  YesBothYes
stored_program_cacheYesYesYesGlobalYes
sync_binlogYesYesYesGlobalYes
sync_frmYesYesYesGlobalYes
sync_master_infoYesYesYesGlobalYes
sync_relay_logYesYesYesGlobalYes
sync_relay_log_infoYesYesYesGlobalYes
system_time_zone  YesGlobalNo
table_definition_cache  YesGlobalYes
table_open_cache  YesGlobalYes
table_open_cache_instances  YesGlobalNo
thread_cache_sizeYesYesYesGlobalYes
thread_concurrencyYesYesYesGlobalNo
thread_handlingYesYesYesGlobalNo
thread_stackYesYesYesGlobalNo
time_format  YesGlobalNo
time_zone  YesBothYes
timed_mutexesYesYesYesGlobalYes
timestamp  YesSessionYes
tmp_table_sizeYesYesYesBothYes
tmpdirYesYesYesGlobalNo
transaction_alloc_block_sizeYesYesYesBothYes
transaction_prealloc_sizeYesYesYesBothYes
tx_isolation  YesBothYes
tx_read_only  YesBothYes
unique_checks  YesBothYes
updatable_views_with_limitYesYesYesBothYes
validate_password_dictionary_file  YesGlobalNo
validate_password_length  YesGlobalYes
validate_password_mixed_case_count  YesGlobalYes
validate_password_number_count  YesGlobalYes
validate_password_policy  YesGlobalYes
validate_password_special_char_count  YesGlobalYes
validate_user_plugins  YesGlobalNo
version  YesGlobalNo
version_comment  YesGlobalNo
version_compile_machine  YesGlobalNo
version_compile_os  YesGlobalNo
wait_timeoutYesYesYesBothYes
warning_count  YesSessionNo

[a] This option is dynamic, but only the server should set this information. You should not set the value of this variable manually.

[b] This option is dynamic, but only the server should set this information. You should not set the value of this variable manually.


For additional system variable information, see Section 5.1.5, “Using System Variables”.

Note

Some of the following variable descriptions refer to enabling or disabling a variable. These variables can be enabled with the SET statement by setting them to ON or 1, or disabled by setting them to OFF or 0. In MySQL 5.7, boolean variables can be set at startup to the values ON, TRUE, OFF, and FALSE (not case sensitive), as well as 1 and 0. See Section 4.2.3.2, “Program Option Modifiers”.

Some system variables control the size of buffers or caches. For a given buffer, the server might need to allocate internal data structures. These structures typically are allocated from the total memory allocated to the buffer, and the amount of space required might be platform dependent. This means that when you assign a value to a system variable that controls a buffer size, the amount of space actually available might differ from the value assigned. In some cases, the amount might be less than the value assigned. It is also possible that the server will adjust a value upward. For example, if you assign a value of 0 to a variable for which the minimal value is 1024, the server will set the value to 1024.

Values for buffer sizes, lengths, and stack sizes are given in bytes unless otherwise specified.

Some system variables take file name values. Unless otherwise specified, the default file location is the data directory if the value is a relative path name. To specify the location explicitly, use an absolute path name. Suppose that the data directory is /var/mysql/data. If a file-valued variable is given as a relative path name, it will be located under /var/mysql/data. If the value is an absolute path name, its location is as given by the path name.

  • autocommit

    Command-Line Format: --autocommit[=#]
    Option-File Format: autocommit
    System Variable Name: autocommit
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: boolean
      Default: ON

    The autocommit mode. If set to 1, all changes to a table take effect immediately. If set to 0, you must use COMMIT to accept a transaction or ROLLBACK to cancel it. If autocommit is 0 and you change it to 1, MySQL performs an automatic COMMIT of any open transaction. Another way to begin a transaction is to use a START TRANSACTION or BEGIN statement. See Section 13.3.1, “START TRANSACTION, COMMIT, and ROLLBACK Syntax”.

    By default, client connections begin with autocommit set to 1. To cause clients to begin with a default of 0, set the global autocommit value by starting the server with the --autocommit=0 option. To set the variable using an option file, include these lines:

    [mysqld]
    autocommit=0
  • automatic_sp_privileges

    System Variable Name: automatic_sp_privileges
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: boolean
      Default: TRUE

    When this variable has a value of 1 (the default), the server automatically grants the EXECUTE and ALTER ROUTINE privileges to the creator of a stored routine, if the user cannot already execute and alter or drop the routine. (The ALTER ROUTINE privilege is required to drop the routine.) The server also automatically drops those privileges from the creator when the routine is dropped. If automatic_sp_privileges is 0, the server does not automatically add or drop these privileges.

    The creator of a routine is the account used to execute the CREATE statement for it. This might not be the same as the account named as the DEFINER in the routine definition.

    See also Section 18.2.2, “Stored Routines and MySQL Privileges”.

  • back_log

    System Variable Name: back_log
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: numeric
      Default: -1 (autosized)
      Range: 1 .. 65535

    The number of outstanding connection requests MySQL can have. This comes into play when the main MySQL thread gets very many connection requests in a very short time. It then takes some time (although very little) for the main thread to check the connection and start a new thread. The back_log value indicates how many requests can be stacked during this short time before MySQL momentarily stops answering new requests. You need to increase this only if you expect a large number of connections in a short period of time.

    In other words, this value is the size of the listen queue for incoming TCP/IP connections. Your operating system has its own limit on the size of this queue. The manual page for the Unix listen() system call should have more details. Check your OS documentation for the maximum value for this variable. back_log cannot be set higher than your operating system limit.

    The default value is based on the following formula, capped to a limit of 900:

    50 + (max_connections / 5)
  • basedir

    Command-Line Format: --basedir=path, -b
    Option-File Format: basedir
    System Variable Name: basedir
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: file name

    The MySQL installation base directory. This variable can be set with the --basedir option. Relative path names for other variables usually are resolved relative to the base directory.

  • big_tables

    If set to 1, all temporary tables are stored on disk rather than in memory. This is a little slower, but the error The table tbl_name is full does not occur for SELECT operations that require a large temporary table. The default value for a new connection is 0 (use in-memory temporary tables). Normally, you should never need to set this variable, because in-memory tables are automatically converted to disk-based tables as required.

  • bind_address

    Command-Line Format: --bind-address=addr
    Option-File Format: bind-address
    System Variable Name: bind_address
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: string
      Default: *

    The value of the --bind-address option.

    This variable has no effect for the embedded server (libmysqld) and as of MySQL 5.7.2 is no longer visible within the embedded server.

  • bulk_insert_buffer_size

    Command-Line Format: --bulk_insert_buffer_size=#
    Option-File Format: bulk_insert_buffer_size
    System Variable Name: bulk_insert_buffer_size
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms):
      Type: numeric
      Default: 8388608
      Range: 0 .. 4294967295
    Permitted Values (64-bit platforms):
      Type: numeric
      Default: 8388608
      Range: 0 .. 18446744073709547520

    MyISAM uses a special tree-like cache to make bulk inserts faster for INSERT ... SELECT, INSERT ... VALUES (...), (...), ..., and LOAD DATA INFILE when adding data to nonempty tables. This variable limits the size of the cache tree in bytes per thread. Setting it to 0 disables this optimization. The default value is 8MB.
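
    For example, before loading a large file into a nonempty MyISAM table, a session might raise its own copy of the variable (the file and table names below are placeholders):

    mysql> SET SESSION bulk_insert_buffer_size = 64 * 1024 * 1024;
    mysql> LOAD DATA INFILE '/tmp/rows.txt' INTO TABLE t1;   -- t1 is a hypothetical table
    mysql> SET SESSION bulk_insert_buffer_size = DEFAULT;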

  • character_set_client

    System Variable Name: character_set_client
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    The character set for statements that arrive from the client. The session value of this variable is set using the character set requested by the client when the client connects to the server. (Many clients support a --default-character-set option to enable this character set to be specified explicitly. See also Section 10.1.4, “Connection Character Sets and Collations”.) The global value of the variable is used to set the session value in cases when the client-requested value is unknown or not available, or the server is configured to ignore client requests:

    • The client is from a version of MySQL older than MySQL 4.1, and thus does not request a character set.

    • The client requests a character set not known to the server. For example, a Japanese-enabled client requests sjis when connecting to a server not configured with sjis support.

    • mysqld was started with the --skip-character-set-client-handshake option, which causes it to ignore client character set configuration. This reproduces MySQL 4.0 behavior and is useful should you wish to upgrade the server without upgrading all the clients.

    ucs2, utf16, utf16le, and utf32 cannot be used as a client character set, which means that they also do not work for SET NAMES or SET CHARACTER SET.
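
    In practice, the session values of character_set_client, character_set_connection, and character_set_results are usually changed together; for example, SET NAMES sets all three in one step:

    mysql> SET NAMES 'utf8';
    mysql> SELECT @@character_set_client, @@character_set_connection, @@character_set_results;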

  • character_set_connection

    System Variable Name: character_set_connection
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    The character set used for literals that do not have a character set introducer and for number-to-string conversion.

  • character_set_database

    System Variable Name: character_set_database
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Footnote: This option is dynamic, but only the server should set this information. You should not set the value of this variable manually.
    Permitted Values:
      Type: string

    The character set used by the default database. The server sets this variable whenever the default database changes. If there is no default database, the variable has the same value as character_set_server.

  • character_set_filesystem

    Command-Line Format: --character-set-filesystem=name
    Option-File Format: character-set-filesystem
    System Variable Name: character_set_filesystem
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    The file system character set. This variable is used to interpret string literals that refer to file names, such as in the LOAD DATA INFILE and SELECT ... INTO OUTFILE statements and the LOAD_FILE() function. Such file names are converted from character_set_client to character_set_filesystem before the file opening attempt occurs. The default value is binary, which means that no conversion occurs. For systems on which multi-byte file names are permitted, a different value may be more appropriate. For example, if the system represents file names using UTF-8, set character_set_filesystem to 'utf8'.

  • character_set_results

    System Variable Name: character_set_results
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    The character set used for returning query results such as result sets or error messages to the client.

  • character_set_server

    Command-Line Format: --character-set-server
    Option-File Format: character-set-server
    System Variable Name: character_set_server
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    The server's default character set.

  • character_set_system

    System Variable Name: character_set_system
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: string

    The character set used by the server for storing identifiers. The value is always utf8.

  • character_sets_dir

    Command-Line Format: --character-sets-dir=path
    Option-File Format: character-sets-dir
    System Variable Name: character_sets_dir
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: directory name

    The directory where character sets are installed.

  • collation_connection

    System Variable Name: collation_connection
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    The collation of the connection character set.

  • collation_database

    System Variable Name: collation_database
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Footnote: This option is dynamic, but only the server should set this information. You should not set the value of this variable manually.
    Permitted Values:
      Type: string

    The collation used by the default database. The server sets this variable whenever the default database changes. If there is no default database, the variable has the same value as collation_server.

  • collation_server

    Command-Line Format: --collation-server
    Option-File Format: collation-server
    System Variable Name: collation_server
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    The server's default collation.

  • completion_type

    Command-Line Format: --completion_type=#
    Option-File Format: completion_type
    System Variable Name: completion_type
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: enumeration
      Default: NO_CHAIN
      Valid Values: NO_CHAIN, CHAIN, RELEASE, 0, 1, 2

    The transaction completion type. This variable can take the values shown in the following list. The variable can be assigned using either the name values or corresponding integer values.

    • NO_CHAIN (or 0): COMMIT and ROLLBACK are unaffected. This is the default value.

    • CHAIN (or 1): COMMIT and ROLLBACK are equivalent to COMMIT AND CHAIN and ROLLBACK AND CHAIN, respectively. (A new transaction starts immediately with the same isolation level as the just-terminated transaction.)

    • RELEASE (or 2): COMMIT and ROLLBACK are equivalent to COMMIT RELEASE and ROLLBACK RELEASE, respectively. (The server disconnects after terminating the transaction.)

    completion_type affects transactions that begin with START TRANSACTION or BEGIN and end with COMMIT or ROLLBACK. It does not apply to implicit commits resulting from execution of the statements listed in Section 13.3.3, “Statements That Cause an Implicit Commit”. It also does not apply for XA COMMIT, XA ROLLBACK, or when autocommit=1.
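
    As a small illustration of the CHAIN setting (the table t1 here is hypothetical), each COMMIT in the session immediately opens a new transaction with the same isolation level:

    mysql> SET SESSION completion_type = 'CHAIN';
    mysql> START TRANSACTION;
    mysql> INSERT INTO t1 VALUES (1);      -- t1 is a placeholder table
    mysql> COMMIT;                         -- behaves like COMMIT AND CHAIN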

  • concurrent_insert

    Command-Line Format: --concurrent_insert[=#]
    Option-File Format: concurrent_insert
    System Variable Name: concurrent_insert
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: enumeration
      Default: AUTO
      Valid Values: NEVER, AUTO, ALWAYS, 0, 1, 2

    If AUTO (the default), MySQL permits INSERT and SELECT statements to run concurrently for MyISAM tables that have no free blocks in the middle of the data file. If you start mysqld with --skip-new, this variable is set to NEVER.

    This variable can take the values shown in the following list. The variable can be assigned using either the name values or corresponding integer values.

    • NEVER (or 0): Disables concurrent inserts.

    • AUTO (or 1): (Default) Enables concurrent insert for MyISAM tables that do not have holes.

    • ALWAYS (or 2): Enables concurrent inserts for all MyISAM tables, even those that have holes. For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread. Otherwise, MySQL acquires a normal write lock and inserts the row into the hole.

    See also Section 8.10.3, “Concurrent Inserts”.
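
    For example, to permit concurrent inserts even into MyISAM tables that contain holes, an option file might contain the following; the same value can also be applied at runtime with SET GLOBAL:

    [mysqld]
    concurrent_insert=ALWAYS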

  • connect_timeout

    Command-Line Format: --connect_timeout=#
    Option-File Format: connect_timeout
    System Variable Name: connect_timeout
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: numeric
      Default: 10

    The number of seconds that the mysqld server waits for a connect packet before responding with Bad handshake. The default value is 10 seconds.

    Increasing the connect_timeout value might help if clients frequently encounter errors of the form Lost connection to MySQL server at 'XXX', system error: errno.

  • core_file

    System Variable Name: core_file
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: boolean
      Default: OFF

    Whether to write a core file if the server crashes. This variable is set by the --core-file option.

  • datadir

    Command-Line Format: --datadir=path, -h
    Option-File Format: datadir
    System Variable Name: datadir
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: file name

    The MySQL data directory. This variable can be set with the --datadir option.

  • date_format

    This variable is unused. It is deprecated and will be removed in a future MySQL release.

  • datetime_format

    This variable is unused. It is deprecated and will be removed in a future MySQL release.

  • debug

    Command-Line Format: --debug[=debug_options]
    Option-File Format: debug
    System Variable Name: debug
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string
      Default: 'd:t:o,/tmp/mysqld.trace'

    This variable indicates the current debugging settings. It is available only for servers built with debugging support. The initial value comes from the value of instances of the --debug option given at server startup. The global and session values may be set at runtime; the SUPER privilege is required, even for the session value.

    Assigning a value that begins with + or - causes the value to be added to or subtracted from the current value:

    mysql> SET debug = 'T';
    mysql> SELECT @@debug;
    +---------+
    | @@debug |
    +---------+
    | T       |
    +---------+
    
    mysql> SET debug = '+P';
    mysql> SELECT @@debug;
    +---------+
    | @@debug |
    +---------+
    | P:T     |
    +---------+
    
    mysql> SET debug = '-P';
    mysql> SELECT @@debug;
    +---------+
    | @@debug |
    +---------+
    | T       |
    +---------+
    

    For more information, see Section 22.4.3, “The DBUG Package”.

  • debug_sync

    System Variable Name: debug_sync
    Variable Scope: Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    This variable is the user interface to the Debug Sync facility. Use of Debug Sync requires that MySQL be configured with the -DENABLE_DEBUG_SYNC=1 option (see Section 2.9.4, “MySQL Source-Configuration Options”). If Debug Sync is not compiled in, this system variable is not available.

    The global variable value is read only and indicates whether the facility is enabled. By default, Debug Sync is disabled and the value of debug_sync is OFF. If the server is started with --debug-sync-timeout=N, where N is a timeout value greater than 0, Debug Sync is enabled and the value of debug_sync is ON - current signal followed by the signal name. Also, N becomes the default timeout for individual synchronization points.

    The session value can be read by any user and will have the same value as the global variable. The session value can be set by users that have the SUPER privilege to control synchronization points.

    For a description of the Debug Sync facility and how to use synchronization points, see MySQL Internals: Test Synchronization.

  • default_storage_engine

    Command-Line Format: --default-storage-engine=name
    Option-File Format: default-storage-engine
    System Variable Name: default_storage_engine
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: enumeration
      Default: InnoDB

    The default storage engine. This variable sets the storage engine for permanent tables only. To set the storage engine for TEMPORARY tables, set the default_tmp_storage_engine system variable.

    default_storage_engine should be used in preference to storage_engine, which is deprecated.

    If you disable the default storage engine at server startup, you must set the default engine for both permanent and TEMPORARY tables to a different engine or the server will not start.
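
    For example, to make MyISAM the default for both permanent and TEMPORARY tables, an option file might contain:

    [mysqld]
    default-storage-engine=MyISAM
    default_tmp_storage_engine=MyISAM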

  • default_tmp_storage_engine

    Command-Line Format: --default_tmp_storage_engine=name
    Option-File Format: default_tmp_storage_engine
    System Variable Name: default_tmp_storage_engine
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: enumeration
      Default: InnoDB

    The default storage engine for TEMPORARY tables (created with CREATE TEMPORARY TABLE). To set the storage engine for permanent tables, set the default_storage_engine system variable.

    If you disable the default storage engine at server startup, you must set the default engine for both permanent and TEMPORARY tables to a different engine or the server will not start.

  • default_week_format

    Command-Line Format: --default_week_format=#
    Option-File Format: default_week_format
    System Variable Name: default_week_format
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: numeric
      Default: 0
      Range: 0 .. 7

    The default mode value to use for the WEEK() function. See Section 12.7, “Date and Time Functions”.

  • delay_key_write

    Command-Line Format: --delay-key-write[=name]
    Option-File Format: delay-key-write
    System Variable Name: delay_key_write
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: enumeration
      Default: ON
      Valid Values: ON, OFF, ALL

    This option applies only to MyISAM tables. It can have one of the following values to affect handling of the DELAY_KEY_WRITE table option that can be used in CREATE TABLE statements.

    • OFF: DELAY_KEY_WRITE is ignored.

    • ON: MySQL honors any DELAY_KEY_WRITE option specified in CREATE TABLE statements. This is the default value.

    • ALL: All newly opened tables are treated as if they were created with the DELAY_KEY_WRITE option enabled.

    If DELAY_KEY_WRITE is enabled for a table, the key buffer is not flushed for the table on every index update, but only when the table is closed. This speeds up writes on keys a lot, but if you use this feature, you should add automatic checking of all MyISAM tables by starting the server with the --myisam-recover-options option (for example, --myisam-recover-options=BACKUP,FORCE). See Section 5.1.3, “Server Command Options”, and Section 14.3.1, “MyISAM Startup Options”.

    Warning

    If you enable external locking with --external-locking, there is no protection against index corruption for tables that use delayed key writes.
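
    As a sketch of how the pieces fit together (the lookup table below is hypothetical), a MyISAM table opts in with the DELAY_KEY_WRITE table option, and the server is configured to honor it and to check MyISAM tables automatically:

    CREATE TABLE lookup (
      id INT NOT NULL PRIMARY KEY,
      name VARCHAR(64),
      INDEX (name)
    ) ENGINE=MyISAM DELAY_KEY_WRITE=1;

    [mysqld]
    delay-key-write=ON
    myisam-recover-options=BACKUP,FORCE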

  • delayed_insert_limit

    Deprecated: 5.6.7
    Command-Line Format: --delayed_insert_limit=#
    Option-File Format: delayed_insert_limit
    System Variable Name: delayed_insert_limit
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms):
      Type: numeric
      Default: 100
      Range: 1 .. 4294967295
    Permitted Values (64-bit platforms):
      Type: numeric
      Default: 100
      Range: 1 .. 18446744073709547520

    In MySQL 5.7, this system variable is deprecated (because DELAYED inserts are not supported), and will be removed in a future release.

  • delayed_insert_timeout

    Deprecated: 5.6.7
    Command-Line Format: --delayed_insert_timeout=#
    Option-File Format: delayed_insert_timeout
    System Variable Name: delayed_insert_timeout
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: numeric
      Default: 300

    In MySQL 5.7, this system variable is deprecated (because DELAYED inserts are not supported), and will be removed in a future release.

  • delayed_queue_size

    Deprecated: 5.6.7
    Command-Line Format: --delayed_queue_size=#
    Option-File Format: delayed_queue_size
    System Variable Name: delayed_queue_size
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms):
      Type: numeric
      Default: 1000
      Range: 1 .. 4294967295
    Permitted Values (64-bit platforms):
      Type: numeric
      Default: 1000
      Range: 1 .. 18446744073709547520

    In MySQL 5.7, this system variable is deprecated (because DELAYED inserts are not supported), and will be removed in a future release.

  • disconnect_on_expired_password

    Introduced: 5.7.1
    Command-Line Format: --disconnect_on_expired_password=#
    Option-File Format: disconnect_on_expired_password
    System Variable Name: disconnect_on_expired_password
    Variable Scope: Session
    Dynamic Variable: No
    Permitted Values:
      Type: boolean
      Default: ON

    This variable controls how the server handles clients with expired passwords. If the variable is enabled, the server disconnects a client whose password has expired unless the client indicates that it can handle expired passwords; if the variable is disabled, the server instead puts such a client in sandbox mode, where it is restricted to statements that reset the expired password.

    For more information about the interaction of client and server settings relating to expired-password handling, see Section 6.3.6, “Password Expiration and Sandbox Mode”.

  • div_precision_increment

    Command-Line Format: --div_precision_increment=#
    Option-File Format: div_precision_increment
    System Variable Name: div_precision_increment
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: numeric
      Default: 4
      Range: 0 .. 30

    This variable indicates the number of digits by which to increase the scale of the result of division operations performed with the / operator. The default value is 4. The minimum and maximum values are 0 and 30, respectively. The following example illustrates the effect of increasing the default value.

    mysql> SELECT 1/7;
    +--------+
    | 1/7    |
    +--------+
    | 0.1429 |
    +--------+
    mysql> SET div_precision_increment = 12;
    mysql> SELECT 1/7;
    +----------------+
    | 1/7            |
    +----------------+
    | 0.142857142857 |
    +----------------+
    
  • end_markers_in_json

    System Variable Name: end_markers_in_json
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: boolean
      Default: OFF

    Whether optimizer JSON output should add end markers.

  • eq_range_index_dive_limit

    System Variable Name: eq_range_index_dive_limit
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: numeric
      Default: 10
      Range: 0 .. 4294967295

    This variable indicates the number of equality ranges in an equality comparison condition when the optimizer should switch from using index dives to index statistics in estimating the number of qualifying rows. It applies to evaluation of expressions that have either of these equivalent forms, where the optimizer uses a nonunique index to look up col_name values:

    col_name IN(val1, ..., valN)
    col_name = val1 OR ... OR col_name = valN
    

    In both cases, the expression contains N equality ranges. The optimizer can make row estimates using index dives or index statistics. If eq_range_index_dive_limit is greater than 0, the optimizer uses existing index statistics instead of index dives if there are eq_range_index_dive_limit or more equality ranges. Thus, to permit use of index dives for up to N equality ranges, set eq_range_index_dive_limit to N + 1. Set eq_range_index_dive_limit to 0 to disable use of index statistics and always use index dives regardless of N.

    For more information, see Section 8.2.1.3.3, “Equality Range Optimization of Many-Valued Comparisons”.

    To update table index statistics for best estimates, use ANALYZE TABLE.
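
    For example, to permit index dives for statements with up to 200 equality ranges, set the variable to 201 and keep the relevant index statistics current (t1 is a placeholder table name):

    mysql> SET GLOBAL eq_range_index_dive_limit = 201;
    mysql> ANALYZE TABLE t1;   -- refresh the index statistics used for estimates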

  • error_count

    The number of errors that resulted from the last statement that generated messages. This variable is read only. See Section 13.7.5.16, “SHOW ERRORS Syntax”.

  • event_scheduler

    Command-Line Format: --event-scheduler[=value]
    Option-File Format: event-scheduler
    System Variable Name: event_scheduler
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: enumeration
      Default: OFF
      Valid Values: ON, OFF, DISABLED

    This variable indicates the status of the Event Scheduler; possible values are ON, OFF, and DISABLED, with the default being OFF. This variable and its effects on the Event Scheduler's operation are discussed in greater detail in the Overview section of the Events chapter.

  • expire_logs_days

    Command-Line Format: --expire_logs_days=#
    Option-File Format: expire_logs_days
    System Variable Name: expire_logs_days
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: numeric
      Default: 0
      Range: 0 .. 99

    The number of days for automatic binary log file removal. The default is 0, which means no automatic removal. Possible removals happen at startup and when the binary log is flushed. Log flushing occurs as indicated in Section 5.2, “MySQL Server Logs”.

    To remove binary log files manually, use the PURGE BINARY LOGS statement. See Section 13.4.1.1, “PURGE BINARY LOGS Syntax”.
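
    For example, to expire binary log files automatically after one week, you might use an option file setting, or change the value at runtime; the PURGE statement shows the manual alternative:

    [mysqld]
    expire_logs_days=7

    mysql> SET GLOBAL expire_logs_days = 7;
    mysql> PURGE BINARY LOGS BEFORE '2013-04-22 00:00:00';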

  • explicit_defaults_for_timestamp

    Command-Line Format: --explicit_defaults_for_timestamp=#
    Option-File Format: explicit_defaults_for_timestamp
    System Variable Name: explicit_defaults_for_timestamp
    Variable Scope: Session
    Dynamic Variable: No
    Permitted Values:
      Type: boolean
      Default: FALSE

    In MySQL, the TIMESTAMP data type differs in nonstandard ways from other data types:

    • TIMESTAMP columns not explicitly declared with the NULL attribute are assigned the NOT NULL attribute. (Columns of other data types, if not explicitly declared as NOT NULL, permit NULL values.) Setting such a column to NULL sets it to the current timestamp.

    • The first TIMESTAMP column in a table, if not declared with the NULL attribute or an explicit DEFAULT or ON UPDATE clause, is automatically assigned the DEFAULT CURRENT_TIMESTAMP and ON UPDATE CURRENT_TIMESTAMP attributes.

    • TIMESTAMP columns following the first one, if not declared with the NULL attribute or an explicit DEFAULT clause, are automatically assigned DEFAULT '0000-00-00 00:00:00' (the zero timestamp). For inserted rows that specify no explicit value for such a column, the column is assigned '0000-00-00 00:00:00' and no warning occurs.

    Those nonstandard behaviors remain the default for TIMESTAMP but as of MySQL 5.6.6 are deprecated and this warning appears at startup:

    [Warning] TIMESTAMP with implicit DEFAULT value is deprecated.
    Please use --explicit_defaults_for_timestamp server option (see
    documentation for more details).

    As indicated by the warning, to turn off the nonstandard behaviors, enable the new explicit_defaults_for_timestamp system variable at server startup. With this variable enabled, the server handles TIMESTAMP as follows instead:

    • TIMESTAMP columns not explicitly declared as NOT NULL permit NULL values. Setting such a column to NULL sets it to NULL, not the current timestamp.

    • No TIMESTAMP column is assigned the DEFAULT CURRENT_TIMESTAMP or ON UPDATE CURRENT_TIMESTAMP attributes automatically. Those attributes must be explicitly specified.

    • TIMESTAMP columns declared as NOT NULL and without an explicit DEFAULT clause are treated as having no default value. For inserted rows that specify no explicit value for such a column, the result depends on the SQL mode. If strict SQL mode is enabled, an error occurs. If strict SQL mode is not enabled, the column is assigned the implicit default of '0000-00-00 00:00:00' and a warning occurs. This is similar to how MySQL treats other temporal types such as DATETIME.

    Note

    explicit_defaults_for_timestamp is itself deprecated because its only purpose is to permit control over now-deprecated TIMESTAMP behaviors that will be removed in a future MySQL release. When that removal occurs, explicit_defaults_for_timestamp will have no purpose and will be removed as well.
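
    Because the variable is not dynamic, it is typically enabled at server startup, for example:

    [mysqld]
    explicit_defaults_for_timestamp=1

    The current setting can then be checked with SELECT @@explicit_defaults_for_timestamp.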

  • external_user

    System Variable Name: external_user
    Variable Scope: Session
    Dynamic Variable: No
    Permitted Values:
      Type: string

    The external user name used during the authentication process, as set by the plugin used to authenticate the client. With native (built-in) MySQL authentication, or if the plugin does not set the value, this variable is NULL. See Section 6.3.8, “Proxy Users”.

  • flush

    Command-Line Format: --flush
    Option-File Format: flush
    System Variable Name: flush
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: boolean
      Default: OFF

    If ON, the server flushes (synchronizes) all changes to disk after each SQL statement. Normally, MySQL does a write of all changes to disk only after each SQL statement and lets the operating system handle the synchronizing to disk. See Section C.5.4.2, “What to Do If MySQL Keeps Crashing”. This variable is set to ON if you start mysqld with the --flush option.

  • flush_time

    Command-Line Format: --flush_time=#
    Option-File Format: flush_time
    System Variable Name: flush_time
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type (windows): numeric
      Default: 0
      Min Value: 0

    If this is set to a nonzero value, all tables are closed every flush_time seconds to free up resources and synchronize unflushed data to disk. This option is best used only on systems with minimal resources.

  • foreign_key_checks

    If set to 1 (the default), foreign key constraints for InnoDB tables are checked. If set to 0, they are ignored. Typically you leave this setting enabled during normal operation, to enforce referential integrity. Disabling foreign key checking can be useful for reloading InnoDB tables in an order different from that required by their parent/child relationships. See Section 5.4.5, “InnoDB and FOREIGN KEY Constraints”.

    Setting foreign_key_checks to 0 also affects data definition statements: DROP SCHEMA drops a schema even if it contains tables that have foreign keys that are referred to by tables outside the schema, and DROP TABLE drops tables that have foreign keys that are referred to by other tables.

    Note

    Setting foreign_key_checks to 1 does not trigger a scan of the existing table data. Therefore, rows added to the table while foreign_key_checks = 0 will not be verified for consistency.
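
    A typical reload pattern, sketched here with placeholder comments, disables checking only for the duration of the import in the current session:

    mysql> SET foreign_key_checks = 0;
    mysql> -- load parent and child tables in any convenient order,
    mysql> -- for example with LOAD DATA INFILE or a series of INSERT statements
    mysql> SET foreign_key_checks = 1;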

  • ft_boolean_syntax

    Command-Line Format: --ft_boolean_syntax=name
    Option-File Format: ft_boolean_syntax
    System Variable Name: ft_boolean_syntax
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: string
      Default: + -><()~*:""&|

    The list of operators supported by boolean full-text searches performed using IN BOOLEAN MODE. See Section 12.9.2, “Boolean Full-Text Searches”.

    The default variable value is '+ -><()~*:""&|'. The rules for changing the value are as follows:

    • Operator function is determined by position within the string.

    • The replacement value must be 14 characters.

    • Each character must be an ASCII nonalphanumeric character.

    • Either the first or second character must be a space.

    • No duplicates are permitted except the phrase quoting operators in positions 11 and 12. These two characters are not required to be the same, but they are the only two that may be.

    • Positions 10, 13, and 14 (which by default are set to :, &, and |) are reserved for future extensions.
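
    These operators are the ones interpreted in boolean mode searches such as the following (the articles table and its FULLTEXT index on title and body are hypothetical):

    mysql> SELECT * FROM articles
        -> WHERE MATCH(title, body) AGAINST('+MySQL -YourSQL' IN BOOLEAN MODE);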

  • ft_max_word_len

    Command-Line Format: --ft_max_word_len=#
    Option-File Format: ft_max_word_len
    System Variable Name: ft_max_word_len
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: numeric
      Min Value: 10

    The maximum length of the word to be included in a MyISAM FULLTEXT index.

    Note

    FULLTEXT indexes on MyISAM tables must be rebuilt after changing this variable. Use REPAIR TABLE tbl_name QUICK.

  • ft_min_word_len

    Command-Line Format: --ft_min_word_len=#
    Option-File Format: ft_min_word_len
    System Variable Name: ft_min_word_len
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: numeric
      Default: 4
      Min Value: 1

    The minimum length of the word to be included in a MyISAM FULLTEXT index.

    Note

    FULLTEXT indexes on MyISAM tables must be rebuilt after changing this variable. Use REPAIR TABLE tbl_name QUICK.
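
    For example, to index three-character words, you might set the option at startup and then rebuild the affected indexes (articles is a placeholder MyISAM table with a FULLTEXT index):

    [mysqld]
    ft_min_word_len=3

    mysql> REPAIR TABLE articles QUICK;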

  • ft_query_expansion_limit

    Command-Line Format: --ft_query_expansion_limit=#
    Option-File Format: ft_query_expansion_limit
    System Variable Name: ft_query_expansion_limit
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: numeric
      Default: 20
      Range: 0 .. 1000

    The number of top matches to use for full-text searches performed using WITH QUERY EXPANSION.

  • ft_stopword_file

    Command-Line Format: --ft_stopword_file=file_name
    Option-File Format: ft_stopword_file
    System Variable Name: ft_stopword_file
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: file name

    The file from which to read the list of stopwords for full-text searches on MyISAM tables. The server looks for the file in the data directory unless an absolute path name is given to specify a different directory. All the words from the file are used; comments are not honored. By default, a built-in list of stopwords is used (as defined in the storage/myisam/ft_static.c file). Setting this variable to the empty string ('') disables stopword filtering. See also Section 12.9.4, “Full-Text Stopwords”.

    Note

    FULLTEXT indexes on MyISAM tables must be rebuilt after changing this variable or the contents of the stopword file. Use REPAIR TABLE tbl_name QUICK.

  • general_log

    Command-Line Format: --general-log
    Option-File Format: general-log
    System Variable Name: general_log
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: boolean
      Default: OFF

    Whether the general query log is enabled. The value can be 0 (or OFF) to disable the log or 1 (or ON) to enable the log. The default value depends on whether the --general_log option is given. The destination for log output is controlled by the log_output system variable; if that value is NONE, no log entries are written even if the log is enabled.
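
    For example, to capture statements in the log file temporarily at runtime:

    mysql> SET GLOBAL log_output = 'FILE';
    mysql> SET GLOBAL general_log = 'ON';
    mysql> -- ... run the statements to be captured ...
    mysql> SET GLOBAL general_log = 'OFF';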

  • general_log_file

    Command-Line Format: --general-log-file=file_name
    Option-File Format: general_log_file
    System Variable Name: general_log_file
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: file name
      Default: host_name.log

    The name of the general query log file. The default value is host_name.log, but the initial value can be changed with the --general_log_file option.

  • group_concat_max_len

    Command-Line Format: --group_concat_max_len=#
    Option-File Format: group_concat_max_len
    System Variable Name: group_concat_max_len
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms):
      Type: numeric
      Default: 1024
      Range: 4 .. 4294967295
    Permitted Values (64-bit platforms):
      Type: numeric
      Default: 1024
      Range: 4 .. 18446744073709547520

    The maximum permitted result length in bytes for the GROUP_CONCAT() function. The default is 1024.
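
    For example, to permit longer concatenated results in the current session (t1 and its columns are placeholders):

    mysql> SET SESSION group_concat_max_len = 1000000;
    mysql> SELECT id, GROUP_CONCAT(name ORDER BY name SEPARATOR ', ')
        ->   FROM t1 GROUP BY id;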

  • have_compress

    YES if the zlib compression library is available to the server, NO if not. If not, the COMPRESS() and UNCOMPRESS() functions cannot be used.

  • have_crypt

    YES if the crypt() system call is available to the server, NO if not. If not, the ENCRYPT() function cannot be used.

  • have_dynamic_loading

    YES if mysqld supports dynamic loading of plugins, NO if not.

  • have_geometry

    YES if the server supports spatial data types, NO if not.

  • have_openssl

    This variable is an alias for have_ssl.

  • have_profiling

    YES if statement profiling capability is present, NO if not. If present, the profiling system variable controls whether this capability is enabled or disabled. See Section 13.7.5.30, “SHOW PROFILES Syntax”.

    This variable is deprecated and will be removed in a future MySQL release.

  • have_query_cache

    YES if mysqld supports the query cache, NO if not.

  • have_rtree_keys

    YES if RTREE indexes are available, NO if not. (These are used for spatial indexes in MyISAM tables.)

  • have_ssl

    YES if mysqld supports SSL connections, NO if not. DISABLED indicates that the server was compiled with SSL support, but was not started with the appropriate --ssl-xxx options. For more information, see Section 6.3.9.2, “Configuring MySQL for SSL”.

  • have_symlink

    YES if symbolic link support is enabled, NO if not. This is required on Unix for support of the DATA DIRECTORY and INDEX DIRECTORY table options. If the server is started with the --skip-symbolic-links option, the value is DISABLED.

    This variable has no meaning on Windows.

  • host_cache_size

    System Variable Name: host_cache_size
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: numeric
      Default: -1 (autosized)
      Range: 0 .. 65536

    The size of the internal host cache (see Section 8.11.5.2, “DNS Lookup Optimization and the Host Cache”). Setting the size to 0 disables the host cache. Changing the cache size at runtime implicitly causes a FLUSH HOSTS operation to clear the host cache and truncate the host_cache table.

    The default value is 128, plus 1 for a value of max_connections up to 500, plus 1 for every increment of 20 over 500 in the max_connections value, capped to a limit of 2000.

    Use of --skip-host-cache is similar to setting the host_cache_size system variable to 0, but host_cache_size is more flexible because it can also be used to resize, enable, or disable the host cache at runtime, not just at server startup.

    If you start the server with --skip-host-cache, that does not prevent changes to the value of host_cache_size, but such changes have no effect and the cache is not re-enabled even if host_cache_size is set larger than 0.
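
    For example, the cache can be resized or disabled at runtime:

    mysql> SET GLOBAL host_cache_size = 300;   -- resize (this also clears the cache)
    mysql> SET GLOBAL host_cache_size = 0;     -- disable the host cache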

  • hostname

    System Variable Name: hostname
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: string

    The server sets this variable to the server host name at startup.

  • identity

    This variable is a synonym for the last_insert_id variable. It exists for compatibility with other database systems. You can read its value with SELECT @@identity, and set it using SET identity.

  • ignore_db_dirs

    System Variable Name: ignore_db_dirs
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: string

    A comma-separated list of names that are not considered as database directories in the data directory. The value is set from any instances of --ignore-db-dir given at server startup.

  • init_connect

    Command-Line Format: --init-connect=name
    Option-File Format: init_connect
    System Variable Name: init_connect
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values:
      Type: string

    A string to be executed by the server for each client that connects. The string consists of one or more SQL statements, separated by semicolon characters. For example, each client session begins by default with autocommit mode enabled. For older servers (before MySQL 5.5.8), there is no global autocommit system variable to specify that autocommit should be disabled by default, but as a workaround init_connect can be used to achieve the same effect:

    SET GLOBAL init_connect='SET autocommit=0';

    The init_connect variable can also be set on the command line or in an option file. To set the variable as just shown using an option file, include these lines:

    [mysqld]
    init_connect='SET autocommit=0'

    The content of init_connect is not executed for users that have the SUPER privilege. This is done so that an erroneous value for init_connect does not prevent all clients from connecting. For example, the value might contain a statement that has a syntax error, thus causing client connections to fail. Not executing init_connect for users that have the SUPER privilege enables them to open a connection and fix the init_connect value.

  • init_file

    Command-Line Format: --init-file=file_name
    Option-File Format: init-file
    System Variable Name: init_file
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: file name

    The name of the file specified with the --init-file option when you start the server. This should be a file containing SQL statements that you want the server to execute when it starts. Each statement must be on a single line and should not include comments. No statement terminator such as ;, \g, or \G should be given at the end of each statement.
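
    As a hedged sketch, an option file might point at a startup file such as the following (the path and statements are illustrative only; note that the file itself contains no comments and no statement terminators, as described above):

    [mysqld]
    init-file=/etc/mysql/init.sql

    Contents of the startup file:

    CREATE DATABASE IF NOT EXISTS scratch
    SET GLOBAL max_connections = 300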

  • innodb_xxx

    InnoDB system variables are listed in Section 14.2.6, “InnoDB Startup Options and System Variables”. These variables control many aspects of storage, memory use, and I/O patterns for InnoDB tables, and are especially important now that InnoDB is the default storage engine.

  • insert_id

    The value to be used by the following INSERT or ALTER TABLE statement when inserting an AUTO_INCREMENT value. This is mainly used with the binary log.

  • interactive_timeout

    Command-Line Format: --interactive_timeout=#
    Option-File Format: interactive_timeout
    System Variable Name: interactive_timeout
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 28800; Min Value: 1

    The number of seconds the server waits for activity on an interactive connection before closing it. An interactive client is defined as a client that uses the CLIENT_INTERACTIVE option to mysql_real_connect(). See also wait_timeout.

  • join_buffer_size

    Command-Line Format: --join_buffer_size=#
    Option-File Format: join_buffer_size
    System Variable Name: join_buffer_size
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 262144; Range: 128 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 262144; Range: 128 .. 18446744073709547520

    The minimum size of the buffer that is used for plain index scans, range index scans, and joins that do not use indexes and thus perform full table scans. Normally, the best way to get fast joins is to add indexes. Increase the value of join_buffer_size to get a faster full join when adding indexes is not possible. One join buffer is allocated for each full join between two tables. For a complex join between several tables for which indexes are not used, multiple join buffers might be necessary. There is no gain from setting the buffer larger than required to hold each matching row, and all joins allocate at least the minimum size, so use caution in setting this variable to a large value globally. It is better to keep the global setting small and change to a larger setting only in sessions that are doing large joins. Memory allocation time can cause substantial performance drops if the global size is larger than needed by most queries that use it.

    The default is 256KB. The maximum permissible setting for join_buffer_size is 4GB. Values larger than 4GB are permitted for 64-bit platforms (except 64-bit Windows, for which large values are truncated to 4GB with a warning).

    For additional information about join buffering, see Section 8.2.1.10, “Nested-Loop Join Algorithms”.
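
    A minimal sketch of the per-session approach recommended above (the table names, columns, and size are illustrative only):

    SET SESSION join_buffer_size = 8 * 1024 * 1024;
    SELECT COUNT(*) FROM t1 JOIN t2 ON t1.c1 = t2.c1;   -- neither c1 column is indexed
    SET SESSION join_buffer_size = DEFAULT;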

  • keep_files_on_create

    Command-Line Format: --keep_files_on_create=#
    Option-File Format: keep_files_on_create
    System Variable Name: keep_files_on_create
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: boolean; Default: OFF

    If a MyISAM table is created with no DATA DIRECTORY option, the .MYD file is created in the database directory. By default, if MyISAM finds an existing .MYD file in this case, it overwrites it. The same applies to .MYI files for tables created with no INDEX DIRECTORY option. To suppress this behavior, set the keep_files_on_create variable to ON (1), in which case MyISAM does not overwrite existing files and returns an error instead. The default value is OFF (0).

    If a MyISAM table is created with a DATA DIRECTORY or INDEX DIRECTORY option and an existing .MYD or .MYI file is found, MyISAM always returns an error. It will not overwrite a file in the specified directory.

  • key_buffer_size

    Command-Line Format: --key_buffer_size=#
    Option-File Format: key_buffer_size
    System Variable Name: key_buffer_size
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 8388608; Range: 8 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 8388608; Range: 8 .. OS_PER_PROCESS_LIMIT

    Index blocks for MyISAM tables are buffered and are shared by all threads. key_buffer_size is the size of the buffer used for index blocks. The key buffer is also known as the key cache.

    The maximum permissible setting for key_buffer_size is 4GB on 32-bit platforms. Values larger than 4GB are permitted for 64-bit platforms. The effective maximum size might be less, depending on your available physical RAM and per-process RAM limits imposed by your operating system or hardware platform. The value of this variable indicates the amount of memory requested. Internally, the server allocates as much memory as possible up to this amount, but the actual allocation might be less.

    You can increase the value to get better index handling for all reads and multiple writes; on a system whose primary function is to run MySQL using the MyISAM storage engine, 25% of the machine's total memory is an acceptable value for this variable. However, you should be aware that, if you make the value too large (for example, more than 50% of the machine's total memory), your system might start to page and become extremely slow. This is because MySQL relies on the operating system to perform file system caching for data reads, so you must leave some room for the file system cache. You should also consider the memory requirements of any other storage engines that you may be using in addition to MyISAM.

    For even more speed when writing many rows at the same time, use LOCK TABLES. See Section 8.2.2.1, “Speed of INSERT Statements”.

    You can check the performance of the key buffer by issuing a SHOW STATUS statement and examining the Key_read_requests, Key_reads, Key_write_requests, and Key_writes status variables. (See Section 13.7.5, “SHOW Syntax”.) The Key_reads/Key_read_requests ratio should normally be less than 0.01. The Key_writes/Key_write_requests ratio is usually near 1 if you are using mostly updates and deletes, but might be much smaller if you tend to do updates that affect many rows at the same time or if you are using the DELAY_KEY_WRITE table option.

    The fraction of the key buffer in use can be determined using key_buffer_size in conjunction with the Key_blocks_unused status variable and the buffer block size, which is available from the key_cache_block_size system variable:

    1 - ((Key_blocks_unused * key_cache_block_size) / key_buffer_size)

    This value is an approximation because some space in the key buffer is allocated internally for administrative structures. Factors that influence the amount of overhead for these structures include block size and pointer size. As block size increases, the percentage of the key buffer lost to overhead tends to decrease. Larger blocks result in fewer read operations (because more keys are obtained per read), but conversely increase reads of keys that are not examined (if not all keys in a block are relevant to a query).
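
    A minimal sketch of gathering the numbers needed for the ratios and the formula above:

    SHOW GLOBAL STATUS LIKE 'Key_read%';
    SHOW GLOBAL STATUS LIKE 'Key_blocks_unused';
    SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';
    SHOW GLOBAL VARIABLES LIKE 'key_cache_block_size';

    For example, with Key_blocks_unused = 6000, key_cache_block_size = 1024, and key_buffer_size = 8388608, the formula gives 1 - (6000 * 1024 / 8388608), which is about 0.27, so roughly 27% of the key buffer is in use. (These figures are made up for illustration.)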

    It is possible to create multiple MyISAM key caches. The size limit of 4GB applies to each cache individually, not as a group. See Section 8.9.2, “The MyISAM Key Cache”.

  • key_cache_age_threshold

    Command-Line Format: --key_cache_age_threshold=#
    Option-File Format: key_cache_age_threshold
    System Variable Name: key_cache_age_threshold
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 300; Range: 100 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 300; Range: 100 .. 18446744073709547520

    This value controls the demotion of buffers from the hot sublist of a key cache to the warm sublist. Lower values cause demotion to happen more quickly. The minimum value is 100. The default value is 300. See Section 8.9.2, “The MyISAM Key Cache”.

  • key_cache_block_size

    Command-Line Format: --key_cache_block_size=#
    Option-File Format: key_cache_block_size
    System Variable Name: key_cache_block_size
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 1024; Range: 512 .. 16384

    The size in bytes of blocks in the key cache. The default value is 1024. See Section 8.9.2, “The MyISAM Key Cache”.

  • key_cache_division_limit

    Command-Line Format: --key_cache_division_limit=#
    Option-File Format: key_cache_division_limit
    System Variable Name: key_cache_division_limit
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 100; Range: 1 .. 100

    The division point between the hot and warm sublists of the key cache buffer list. The value is the percentage of the buffer list to use for the warm sublist. Permissible values range from 1 to 100. The default value is 100. See Section 8.9.2, “The MyISAM Key Cache”.

  • large_files_support

    System Variable Name: large_files_support
    Variable Scope: Global
    Dynamic Variable: No

    Whether mysqld was compiled with options for large file support.

  • large_pages

    Command-Line Format: --large-pages
    Option-File Format: large-pages
    System Variable Name: large_pages
    Variable Scope: Global
    Dynamic Variable: No
    Platform Specific: linux
    Permitted Values: Type (linux): boolean; Default: FALSE

    Whether large page support is enabled (via the --large-pages option). See Section 8.11.4.2, “Enabling Large Page Support”.

  • large_page_size

    System Variable Name: large_page_size
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type (linux): numeric; Default: 0

    If large page support is enabled, this shows the size of memory pages. Currently, large memory pages are supported only on Linux; on other platforms, the value of this variable is always 0. See Section 8.11.4.2, “Enabling Large Page Support”.

  • last_insert_id

    The value to be returned from LAST_INSERT_ID(). This is stored in the binary log when you use LAST_INSERT_ID() in a statement that updates a table. Setting this variable does not update the value returned by the mysql_insert_id() C API function.

  • lc_messages

    Command-Line Format: --lc-messages=name
    Option-File Format: lc-messages
    System Variable Name: lc_messages
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: string

    The locale to use for error messages. The server converts the value to a language name and combines it with the value of the lc_messages_dir to produce the location for the error message file. See Section 10.2, “Setting the Error Message Language”.

  • lc_messages_dir

    Command-Line Format: --lc-messages-dir=path
    Option-File Format: lc-messages-dir
    System Variable Name: lc_messages_dir
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: directory name

    The directory where error messages are located. The value is used together with the value of lc_messages to produce the location for the error message file. See Section 10.2, “Setting the Error Message Language”.

  • lc_time_names

    System Variable Name: lc_time_names
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: string

    This variable specifies the locale that controls the language used to display day and month names and abbreviations. This variable affects the output from the DATE_FORMAT(), DAYNAME() and MONTHNAME() functions. Locale names are POSIX-style values such as 'ja_JP' or 'pt_BR'. The default value is 'en_US' regardless of your system's locale setting. For further information, see Section 10.7, “MySQL Server Locale Support”.

  • license

    System Variable Name: license
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: string; Default: GPL

    The type of license the server has.

  • local_infile

    System Variable Name: local_infile
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: boolean

    Whether LOCAL is supported for LOAD DATA INFILE statements. If this variable is disabled, clients cannot use LOCAL in LOAD DATA statements. See Section 6.1.6, “Security Issues with LOAD DATA LOCAL”.

  • lock_wait_timeout

    Command-Line Format: --lock_wait_timeout=#
    Option-File Format: lock_wait_timeout
    System Variable Name: lock_wait_timeout
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 31536000; Range: 1 .. 31536000

    This variable specifies the timeout in seconds for attempts to acquire metadata locks. The permissible values range from 1 to 31536000 (1 year). The default is 31536000.

    This timeout applies to all statements that use metadata locks. These include DML and DDL operations on tables, views, stored procedures, and stored functions, as well as LOCK TABLES, FLUSH TABLES WITH READ LOCK, and HANDLER statements.

    This timeout does not apply to implicit accesses to system tables in the mysql database, such as grant tables modified by GRANT or REVOKE statements or table logging statements. The timeout does apply to system tables accessed directly, such as with SELECT or UPDATE.

    The timeout value applies separately for each metadata lock attempt. A given statement can require more than one lock, so it is possible for the statement to block for longer than the lock_wait_timeout value before reporting a timeout error. When lock timeout occurs, ER_LOCK_WAIT_TIMEOUT is reported.

    lock_wait_timeout does not apply to delayed inserts, which always execute with a timeout of 1 year. This is done to avoid unnecessary timeouts because a session that issues a delayed insert receives no notification of delayed insert timeouts.
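
    One common pattern, sketched here with an illustrative table name, is to lower the session value before running DDL against a busy table so that the statement fails quickly instead of queueing behind long-running transactions:

    SET SESSION lock_wait_timeout = 5;
    ALTER TABLE orders ADD COLUMN note VARCHAR(100);
    -- ER_LOCK_WAIT_TIMEOUT is returned if the metadata lock cannot be acquired within 5 seconds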

  • locked_in_memory

    System Variable Name: locked_in_memory
    Variable Scope: Global
    Dynamic Variable: No

    Whether mysqld was locked in memory with --memlock.

  • log_bin_trust_function_creators

    Command-Line Format: --log-bin-trust-function-creators
    Option-File Format: log-bin-trust-function-creators
    System Variable Name: log_bin_trust_function_creators
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: boolean; Default: FALSE

    This variable applies when binary logging is enabled. It controls whether stored function creators can be trusted not to create stored functions that will cause unsafe events to be written to the binary log. If set to 0 (the default), users are not permitted to create or alter stored functions unless they have the SUPER privilege in addition to the CREATE ROUTINE or ALTER ROUTINE privilege. A setting of 0 also enforces the restriction that a function must be declared with the DETERMINISTIC characteristic, or with the READS SQL DATA or NO SQL characteristic. If the variable is set to 1, MySQL does not enforce these restrictions on stored function creation. This variable also applies to trigger creation. See Section 18.7, “Binary Logging of Stored Programs”.

  • log_error

    Command-Line Format: --log-error[=name]
    Option-File Format: log-error
    System Variable Name: log_error
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: file name

    The location of the error log, or stderr if the server is writing error messages to the standard error output. See Section 5.2.2, “The Error Log”.

  • log_output

    Command-Line Format: --log-output=name
    Option-File Format: log-output
    System Variable Name: log_output
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: set; Default: FILE; Valid Values: TABLE, FILE, NONE

    The destination for general query log and slow query log output. The value can be a comma-separated list of one or more of the words TABLE (log to tables), FILE (log to files), or NONE (do not log to tables or files). The default value is FILE. NONE, if present, takes precedence over any other specifiers. If the value is NONE, log entries are not written even if the logs are enabled. If the logs are not enabled, no logging occurs even if the value of log_output is not NONE. For more information, see Section 5.2.1, “Selecting General Query and Slow Query Log Output Destinations”.
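
    For example, to send the general query log to the mysql.general_log table and then inspect recent entries (a hedged sketch; enabling the general log can itself add noticeable overhead):

    SET GLOBAL log_output = 'TABLE';
    SET GLOBAL general_log = 'ON';
    SELECT event_time, user_host, argument
      FROM mysql.general_log
     ORDER BY event_time DESC LIMIT 10;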

  • log_queries_not_using_indexes

    Command-Line Format: --log-queries-not-using-indexes
    Option-File Format: log-queries-not-using-indexes
    System Variable Name: log_queries_not_using_indexes
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: boolean; Default: OFF

    Whether queries that do not use indexes are logged to the slow query log. See Section 5.2.5, “The Slow Query Log”.

  • log_throttle_queries_not_using_indexes

    System Variable Name: log_throttle_queries_not_using_indexes
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 0

    If log_queries_not_using_indexes is enabled, the log_throttle_queries_not_using_indexes variable limits the number of such queries per minute that can be written to the slow query log. A value of 0 (the default) means no limit. For more information, see Section 5.2.5, “The Slow Query Log”.

  • log_slow_admin_statements

    Introduced: 5.7.1
    System Variable Name: log_slow_admin_statements
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: boolean; Default: OFF

    Include slow administrative statements in the statements written to the slow query log. Administrative statements include ALTER TABLE, ANALYZE TABLE, CHECK TABLE, CREATE INDEX, DROP INDEX, OPTIMIZE TABLE, and REPAIR TABLE.

    This variable was added in MySQL 5.7.1.

  • log_warnings

    Command-Line Format: --log-warnings[=#], -W [#]
    Option-File Format: log-warnings[=#]
    System Variable Name: log_warnings
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 1; Range: 0 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 1; Range: 0 .. 18446744073709547520

    Whether to produce additional warning messages to the error log. This variable is enabled (1) by default and can be disabled by setting it to 0. The server logs messages about statements that are unsafe for statement-based logging if the value is greater than 0. Aborted connections and access-denied errors for new connection attempts are logged if the value is greater than 1.

  • long_query_time

    Command-Line Format: --long_query_time=#
    Option-File Format: long_query_time
    System Variable Name: long_query_time
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 10; Min Value: 0

    If a query takes longer than this many seconds, the server increments the Slow_queries status variable. If the slow query log is enabled, the query is logged to the slow query log file. This value is measured in real time, not CPU time, so a query that is under the threshold on a lightly loaded system might be above the threshold on a heavily loaded one. The minimum and default values of long_query_time are 0 and 10, respectively. The value can be specified to a resolution of microseconds. For logging to a file, times are written including the microseconds part. For logging to tables, only integer times are written; the microseconds part is ignored. See Section 5.2.5, “The Slow Query Log”.
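
    For example, to capture statements slower than half a second in the slow query log (a sketch; the global long_query_time value applies to sessions that connect after the change):

    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 0.5;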

  • low_priority_updates

    Command-Line Format: --low-priority-updates
    Option-File Format: low-priority-updates
    System Variable Name: low_priority_updates
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: boolean; Default: FALSE

    If set to 1, all INSERT, UPDATE, DELETE, and LOCK TABLE WRITE statements wait until there is no pending SELECT or LOCK TABLE READ on the affected table. This affects only storage engines that use only table-level locking (such as MyISAM, MEMORY, and MERGE).

  • lower_case_file_system

    System Variable Name: lower_case_file_system
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: boolean

    This variable describes the case sensitivity of file names on the file system where the data directory is located. OFF means file names are case sensitive, ON means they are not case sensitive. This variable is read only because it reflects a file system attribute and setting it would have no effect on the file system.

  • lower_case_table_names

    Command-Line Format: --lower_case_table_names[=#]
    Option-File Format: lower_case_table_names
    System Variable Name: lower_case_table_names
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: numeric; Default: 0; Range: 0 .. 2

    If set to 0, table names are stored as specified and comparisons are case sensitive. If set to 1, table names are stored in lowercase on disk and comparisons are not case sensitive. If set to 2, table names are stored as given but compared in lowercase. This option also applies to database names and table aliases. For additional information, see Section 9.2.2, “Identifier Case Sensitivity”.

    You should not set this variable to 0 if you are running MySQL on a system that has case-insensitive file names (such as Windows or Mac OS X). If you set this variable to 0 on such a system and access MyISAM tables using different lettercases for the table names, index corruption may result. On Windows the default value is 1. On Mac OS X, the default value is 2.

    If you are using InnoDB tables, you should set this variable to 1 on all platforms to force names to be converted to lowercase.
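
    A minimal option-file sketch of that recommendation, ideally applied before any databases are created:

    [mysqld]
    lower_case_table_names=1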

    The setting of this variable in MySQL 5.7 affects the behavior of replication filtering options with regard to case sensitivity. (Bug #51639) See Section 16.2.3, “How Servers Evaluate Replication Filtering Rules”, for more information.

  • max_allowed_packet

    Command-Line Format: --max_allowed_packet=#
    Option-File Format: max_allowed_packet
    System Variable Name: max_allowed_packet
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 4194304; Range: 1024 .. 1073741824

    The maximum size of one packet or any generated/intermediate string, or any parameter sent by the mysql_stmt_send_long_data() C API function. The default is 4MB.

    The packet message buffer is initialized to net_buffer_length bytes, but can grow up to max_allowed_packet bytes when needed. This value by default is small, to catch large (possibly incorrect) packets.

    You must increase this value if you are using large BLOB columns or long strings. It should be as big as the largest BLOB you want to use. The protocol limit for max_allowed_packet is 1GB. The value should be a multiple of 1024; nonmultiples are rounded down to the nearest multiple.

    When you change the message buffer size by changing the value of the max_allowed_packet variable, you should also change the buffer size on the client side if your client program permits it. The default max_allowed_packet value built in to the client library is 1GB, but individual client programs might override this. For example, mysql and mysqldump have defaults of 16MB and 24MB, respectively. They also enable you to change the client-side value by setting max_allowed_packet on the command line or in an option file.

    The session value of this variable is read only.
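
    A sketch of raising the limit on both the server and client sides (the 64MB figure and file names are illustrative only):

    [mysqld]
    max_allowed_packet=64M

    shell> mysql --max_allowed_packet=64M
    shell> mysqldump --max_allowed_packet=64M db_name > dump.sql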

  • max_connect_errors

    Command-Line Format: --max_connect_errors=#
    Option-File Format: max_connect_errors
    System Variable Name: max_connect_errors
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 100; Range: 1 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 100; Range: 1 .. 18446744073709547520

    If more than this many successive connection requests from a host are interrupted without a successful connection, the server blocks that host from further connections. You can unblock blocked hosts by flushing the host cache. To do so, issue a FLUSH HOSTS statement or execute a mysqladmin flush-hosts command. If a connection is established successfully within fewer than max_connect_errors attempts after a previous connection was interrupted, the error count for the host is cleared to zero. However, once a host is blocked, flushing the host cache is the only way to unblock it. The default is 100.
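
    Either of the following clears the host cache and unblocks a blocked host, as described above:

    FLUSH HOSTS;

    shell> mysqladmin flush-hosts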

  • max_connections

    Command-Line Format: --max_connections=#
    Option-File Format: max_connections
    System Variable Name: max_connections
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 151; Range: 1 .. 100000

    The maximum permitted number of simultaneous client connections. By default, this is 151. See Section C.5.2.7, “Too many connections”, for more information.

    Increasing this value increases the number of file descriptors that mysqld requires. See Section 8.4.3.1, “How MySQL Opens and Closes Tables”, for comments on file descriptor limits.

    Connections refused because the max_connections limit is reached increment the Connection_errors_max_connections status variable.
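
    A quick sketch of checking current usage and raising the limit at runtime (the new value is illustrative; persist it in the option file as well so it survives a restart):

    SHOW GLOBAL STATUS LIKE 'Threads_connected';
    SHOW GLOBAL STATUS LIKE 'Max_used_connections';
    SET GLOBAL max_connections = 300;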

  • max_delayed_threads

    Deprecated: 5.6.7
    Command-Line Format: --max_delayed_threads=#
    Option-File Format: max_delayed_threads
    System Variable Name: max_delayed_threads
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 20; Range: 0 .. 16384

    In MySQL 5.7, this system variable is deprecated (because DELAYED inserts are not supported), and will be removed in a future release.

  • max_error_count

    Command-Line Format: --max_error_count=#
    Option-File Format: max_error_count
    System Variable Name: max_error_count
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 64; Range: 0 .. 65535

    The maximum number of error, warning, and note messages to be stored for display by the SHOW ERRORS and SHOW WARNINGS statements. This is the same as the number of condition areas in the diagnostics area, and thus the number of conditions that can be inspected by GET DIAGNOSTICS.

  • max_heap_table_size

    Command-Line Format: --max_heap_table_size=#
    Option-File Format: max_heap_table_size
    System Variable Name: max_heap_table_size
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 16777216; Range: 16384 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 16777216; Range: 16384 .. 1844674407370954752

    This variable sets the maximum size to which user-created MEMORY tables are permitted to grow. The value of the variable is used to calculate MEMORY table MAX_ROWS values. Setting this variable has no effect on any existing MEMORY table, unless the table is re-created with a statement such as CREATE TABLE or altered with ALTER TABLE or TRUNCATE TABLE. A server restart also sets the maximum size of existing MEMORY tables to the global max_heap_table_size value.

    This variable is also used in conjunction with tmp_table_size to limit the size of internal in-memory tables. See Section 8.4.3.3, “How MySQL Uses Internal Temporary Tables”.

    max_heap_table_size is not replicated. See Section 16.4.1.21, “Replication and MEMORY Tables”, and Section 16.4.1.33, “Replication and Variables”, for more information.
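
    Because the limit is captured when a MEMORY table is created, raising it only helps after the table is rebuilt. A hedged sketch (the table name and size are illustrative only):

    SET SESSION max_heap_table_size = 64 * 1024 * 1024;
    ALTER TABLE lookup_cache ENGINE=MEMORY;   -- rebuild so the new limit takes effect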

  • max_insert_delayed_threads

    Deprecated: 5.6.7
    System Variable Name: max_insert_delayed_threads
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric

    This variable is a synonym for max_delayed_threads.

    In MySQL 5.7, this system variable is deprecated (because DELAYED inserts are not supported), and will be removed in a future release.

  • max_join_size

    Command-Line Format: --max_join_size=#
    Option-File Format: max_join_size
    System Variable Name: max_join_size
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 18446744073709551615; Range: 1 .. 18446744073709551615

    Do not permit statements that probably need to examine more than max_join_size rows (for single-table statements) or row combinations (for multiple-table statements) or that are likely to do more than max_join_size disk seeks. By setting this value, you can catch statements where keys are not used properly and that would probably take a long time. Set it if your users tend to perform joins that lack a WHERE clause, that take a long time, or that return millions of rows.

    Setting this variable to a value other than DEFAULT resets the value of sql_big_selects to 0. If you set the sql_big_selects value again, the max_join_size variable is ignored.

    If a query result is in the query cache, no result size check is performed, because the result has previously been computed and it does not burden the server to send it to the client.

  • max_length_for_sort_data

    Command-Line Format: --max_length_for_sort_data=#
    Option-File Format: max_length_for_sort_data
    System Variable Name: max_length_for_sort_data
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 1024; Range: 4 .. 8388608

    The cutoff on the size of index values that determines which filesort algorithm to use. See Section 8.2.1.15, “ORDER BY Optimization”.

  • max_prepared_stmt_count

    Command-Line Format: --max_prepared_stmt_count=#
    Option-File Format: max_prepared_stmt_count
    System Variable Name: max_prepared_stmt_count
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 16382; Range: 0 .. 1048576

    This variable limits the total number of prepared statements in the server. It can be used in environments where there is the potential for denial-of-service attacks based on running the server out of memory by preparing huge numbers of statements. If the value is set lower than the current number of prepared statements, existing statements are not affected and can be used, but no new statements can be prepared until the current number drops below the limit. The default value is 16,382. The permissible range of values is from 0 to 1 million. Setting the value to 0 disables prepared statements.

  • max_relay_log_size

    Command-Line Format: --max_relay_log_size=#
    Option-File Format: max_relay_log_size
    System Variable Name: max_relay_log_size
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 0; Range: 0 .. 1073741824

    If a write by a replication slave to its relay log causes the current log file size to exceed the value of this variable, the slave rotates the relay logs (closes the current file and opens the next one). If max_relay_log_size is 0, the server uses max_binlog_size for both the binary log and the relay log. If max_relay_log_size is greater than 0, it constrains the size of the relay log, which enables you to have different sizes for the two logs. You must set max_relay_log_size to between 4096 bytes and 1GB (inclusive), or to 0. The default value is 0. See Section 16.2.1, “Replication Implementation Details”.

  • max_seeks_for_key

    Command-Line Format: --max_seeks_for_key=#
    Option-File Format: max_seeks_for_key
    System Variable Name: max_seeks_for_key
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 4294967295; Range: 1 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 18446744073709547520; Range: 1 .. 18446744073709547520

    Limit the assumed maximum number of seeks when looking up rows based on a key. The MySQL optimizer assumes that no more than this number of key seeks are required when searching for matching rows in a table by scanning an index, regardless of the actual cardinality of the index (see Section 13.7.5.21, “SHOW INDEX Syntax”). By setting this to a low value (say, 100), you can force MySQL to prefer indexes instead of table scans.

  • max_sort_length

    Command-Line Format: --max_sort_length=#
    Option-File Format: max_sort_length
    System Variable Name: max_sort_length
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 1024; Range: 4 .. 8388608

    The number of bytes to use when sorting data values. Only the first max_sort_length bytes of each value are used; the rest are ignored.

    max_sort_length is ignored for integer, decimal, floating-point, and temporal data types.

  • max_sp_recursion_depth

    Command-Line Format: --max_sp_recursion_depth[=#]
    Option-File Format: max_sp_recursion_depth
    System Variable Name: max_sp_recursion_depth
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 0; Max Value: 255

    The number of times that any given stored procedure may be called recursively. The default value for this option is 0, which completely disables recursion in stored procedures. The maximum value is 255.

    Stored procedure recursion increases the demand on thread stack space. If you increase the value of max_sp_recursion_depth, it may be necessary to increase thread stack size by increasing the value of thread_stack at server startup.
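
    A minimal sketch of a recursive procedure that requires a nonzero setting (the procedure is hypothetical):

    SET max_sp_recursion_depth = 10;
    DELIMITER //
    CREATE PROCEDURE countdown(IN n INT)
    BEGIN
      IF n > 0 THEN
        CALL countdown(n - 1);
      END IF;
    END//
    DELIMITER ;
    CALL countdown(5);   -- fails with a recursion-limit error if max_sp_recursion_depth is 0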

  • max_tmp_tables

    This variable is unused. It is deprecated and will be removed in a future MySQL release.

  • max_user_connections

    Command-Line Format: --max_user_connections=#
    Option-File Format: max_user_connections
    System Variable Name: max_user_connections
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 0; Range: 0 .. 4294967295

    The maximum number of simultaneous connections permitted to any given MySQL user account. A value of 0 (the default) means no limit.

    This variable has a global value that can be set at server startup or runtime. It also has a read-only session value that indicates the effective simultaneous-connection limit that applies to the account associated with the current session. The session value is initialized as follows:

    • If the user account has a nonzero MAX_USER_CONNECTIONS resource limit, the session max_user_connections value is set to that limit.

    • Otherwise, the session max_user_connections value is set to the global value.

    Account resource limits are specified using the GRANT statement. See Section 6.3.4, “Setting Account Resource Limits”, and Section 13.7.1.4, “GRANT Syntax”.
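
    For example, to cap one existing account at 10 simultaneous connections while leaving the global value at 0 (the account name is illustrative):

    GRANT USAGE ON *.* TO 'report_user'@'localhost'
        WITH MAX_USER_CONNECTIONS 10;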

  • max_write_lock_count

    Command-Line Format: --max_write_lock_count=#
    Option-File Format: max_write_lock_count
    System Variable Name: max_write_lock_count
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 4294967295; Range: 1 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 18446744073709547520; Range: 1 .. 18446744073709547520

    After this many write locks, permit some pending read lock requests to be processed in between.

  • metadata_locks_cache_size

    System Variable Name: metadata_locks_cache_size
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: numeric; Default: 1024; Range: 1 .. 1048576

    The size of the metadata locks cache. The server uses this cache to avoid creation and destruction of synchronization objects. This is particularly helpful on systems where such operations are expensive, such as Windows XP.

  • metadata_locks_hash_instances

    System Variable Name: metadata_locks_hash_instances
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: numeric; Default: 8; Range: 1 .. 1024

    The set of metadata locks can be partitioned into separate hashes to permit connections accessing different objects to use different locking hashes and reduce contention. The metadata_locks_hash_instances system variable specifies the number of hashes (default 8).

  • min_examined_row_limit

    Command-Line Format: --min-examined-row-limit=#
    Option-File Format: min-examined-row-limit
    System Variable Name: min_examined_row_limit
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 0; Range: 0 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 0; Range: 0 .. 18446744073709547520

    Queries that examine fewer than this number of rows are not logged to the slow query log.

  • myisam_data_pointer_size

    Command-Line Format: --myisam_data_pointer_size=#
    Option-File Format: myisam_data_pointer_size
    System Variable Name: myisam_data_pointer_size
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 6; Range: 2 .. 7

    The default pointer size in bytes, to be used by CREATE TABLE for MyISAM tables when no MAX_ROWS option is specified. This variable cannot be less than 2 or larger than 7. The default value is 6. See Section C.5.2.12, “The table is full”.

  • myisam_max_sort_file_size

    Command-Line Format: --myisam_max_sort_file_size=#
    Option-File Format: myisam_max_sort_file_size
    System Variable Name: myisam_max_sort_file_size
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 2147483648
    Permitted Values (64-bit platforms): Type: numeric; Default: 9223372036854775807

    The maximum size of the temporary file that MySQL is permitted to use while re-creating a MyISAM index (during REPAIR TABLE, ALTER TABLE, or LOAD DATA INFILE). If the file size would be larger than this value, the index is created using the key cache instead, which is slower. The value is given in bytes.

    The default value is 2GB. If MyISAM index files exceed this size and disk space is available, increasing the value may help performance. The space must be available in the file system containing the directory where the original index file is located.

  • myisam_mmap_size

    Command-Line Format: --myisam_mmap_size=#
    Option-File Format: myisam_mmap_size
    System Variable Name: myisam_mmap_size
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values (32-bit platforms): Type: numeric; Default: 4294967295; Range: 7 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 18446744073709547520; Range: 7 .. 18446744073709547520

    The maximum amount of memory to use for memory mapping compressed MyISAM files. If many compressed MyISAM tables are used, the value can be decreased to reduce the likelihood of memory-swapping problems.

  • myisam_recover_options

    System Variable Name: myisam_recover_options
    Variable Scope: Global
    Dynamic Variable: No

    The value of the --myisam-recover-options option. See Section 5.1.3, “Server Command Options”.

  • myisam_repair_threads

    Command-Line Format: --myisam_repair_threads=#
    Option-File Format: myisam_repair_threads
    System Variable Name: myisam_repair_threads
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 1; Range: 1 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 1; Range: 1 .. 18446744073709547520

    If this value is greater than 1, MyISAM table indexes are created in parallel (each index in its own thread) during the Repair by sorting process. The default value is 1.

    Note

    Multi-threaded repair is still beta-quality code.

  • myisam_sort_buffer_size

    Command-Line Format: --myisam_sort_buffer_size=#
    Option-File Format: myisam_sort_buffer_size
    System Variable Name: myisam_sort_buffer_size
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 8388608; Range: 4096 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 8388608; Range: 4096 .. 18446744073709547520

    The size of the buffer that is allocated when sorting MyISAM indexes during a REPAIR TABLE or when creating indexes with CREATE INDEX or ALTER TABLE.

    The maximum permissible setting for myisam_sort_buffer_size is 4GB. Values larger than 4GB are permitted for 64-bit platforms (except 64-bit Windows, for which large values are truncated to 4GB with a warning).

  • myisam_stats_method

    Command-Line Format: --myisam_stats_method=name
    Option-File Format: myisam_stats_method
    System Variable Name: myisam_stats_method
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: enumeration; Valid Values: nulls_equal, nulls_unequal, nulls_ignored

    How the server treats NULL values when collecting statistics about the distribution of index values for MyISAM tables. This variable has three possible values, nulls_equal, nulls_unequal, and nulls_ignored. For nulls_equal, all NULL index values are considered equal and form a single value group that has a size equal to the number of NULL values. For nulls_unequal, NULL values are considered unequal, and each NULL forms a distinct value group of size 1. For nulls_ignored, NULL values are ignored.

    The method that is used for generating table statistics influences how the optimizer chooses indexes for query execution, as described in Section 8.3.7, “InnoDB and MyISAM Index Statistics Collection”.

  • myisam_use_mmap

    Command-Line Format: --myisam_use_mmap
    Option-File Format: myisam_use_mmap
    System Variable Name: myisam_use_mmap
    Variable Scope: Global
    Dynamic Variable: Yes
    Permitted Values: Type: boolean; Default: OFF

    Use memory mapping for reading and writing MyISAM tables.

  • named_pipe

    System Variable Name: named_pipe
    Variable Scope: Global
    Dynamic Variable: No
    Platform Specific: windows
    Permitted Values: Type (windows): boolean; Default: OFF

    (Windows only.) Indicates whether the server supports connections over named pipes.

  • net_buffer_length

    Command-Line Format: --net_buffer_length=#
    Option-File Format: net_buffer_length
    System Variable Name: net_buffer_length
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 16384; Range: 1024 .. 1048576

    Each client thread is associated with a connection buffer and result buffer. Both begin with a size given by net_buffer_length but are dynamically enlarged up to max_allowed_packet bytes as needed. The result buffer shrinks to net_buffer_length after each SQL statement.

    This variable should not normally be changed, but if you have very little memory, you can set it to the expected length of statements sent by clients. If statements exceed this length, the connection buffer is automatically enlarged. The maximum value to which net_buffer_length can be set is 1MB.

    The session value of this variable is read only.

  • net_read_timeout

    Command-Line Format: --net_read_timeout=#
    Option-File Format: net_read_timeout
    System Variable Name: net_read_timeout
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 30; Min Value: 1

    The number of seconds to wait for more data from a connection before aborting the read. When the server is reading from the client, net_read_timeout is the timeout value controlling when to abort. When the server is writing to the client, net_write_timeout is the timeout value controlling when to abort. See also slave_net_timeout.

  • net_retry_count

    Command-Line Format: --net_retry_count=#
    Option-File Format: net_retry_count
    System Variable Name: net_retry_count
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms): Type: numeric; Default: 10; Range: 1 .. 4294967295
    Permitted Values (64-bit platforms): Type: numeric; Default: 10; Range: 1 .. 18446744073709547520

    If a read or write on a communication port is interrupted, retry this many times before giving up. This value should be set quite high on FreeBSD because internal interrupts are sent to all threads.

  • net_write_timeout

    Command-Line Format: --net_write_timeout=#
    Option-File Format: net_write_timeout
    System Variable Name: net_write_timeout
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 60; Min Value: 1

    The number of seconds to wait for a block to be written to a connection before aborting the write. See also net_read_timeout.

  • new

    Command-Line Format: --new, -n
    Option-File Format: new
    System Variable Name: new
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Disabled by: skip-new
    Permitted Values: Type: boolean; Default: FALSE

    This variable was used in MySQL 4.0 to turn on some 4.1 behaviors, and is retained for backward compatibility. In MySQL 5.7, its value is always OFF.

  • old

    Command-Line Format: --old
    Option-File Format: old
    System Variable Name: old
    Variable Scope: Global
    Dynamic Variable: No

    old is a compatibility variable. It is disabled by default, but can be enabled at startup to revert the server to behaviors present in older versions.

    Currently, when old is enabled, it changes the default scope of index hints to that used prior to MySQL 5.1.17. That is, index hints with no FOR clause apply only to how indexes are used for row retrieval and not to resolution of ORDER BY or GROUP BY clauses. (See Section 13.2.9.3, “Index Hint Syntax”.) Take care about enabling this in a replication setup. With statement-based binary logging, having different modes for the master and slaves might lead to replication errors.

  • old_alter_table

    Command-Line Format: --old-alter-table
    Option-File Format: old-alter-table
    System Variable Name: old_alter_table
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: boolean; Default: OFF

    When this variable is enabled, the server does not use the optimized method of processing an ALTER TABLE operation. It reverts to using a temporary table, copying over the data, and then renaming the temporary table to the original, as used by MySQL 5.0 and earlier. For more information on the operation of ALTER TABLE, see Section 13.1.6, “ALTER TABLE Syntax”.

  • old_passwords

    System Variable Name: old_passwords
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: enumeration; Default: 0; Valid Values: 0, 1, 2

    This variable controls the password hashing method used by the PASSWORD() function and for the IDENTIFIED BY clause of the CREATE USER and GRANT statements.

    If the --default-authentication-plugin option is given at server startup, the server sets old_passwords to the value that is consistent with the password hashing method required by the default plugin.

    The following table shows the permitted values of old_passwords, the password hashing method for each value, and which authentication plugins use passwords hashed with each method.

    Value  Password Hashing Format   Intended Use
    0      MySQL 4.1 native hashing  Accounts that authenticate with the mysql_native_password plugin
    1      Pre-4.1 (old) hashing     Accounts that authenticate with the mysql_old_password plugin
    2      SHA-256 hashing           Accounts that authenticate with the sha256_password plugin

    If old_passwords=1, PASSWORD('str') returns the same value as OLD_PASSWORD('str'). The latter function is not affected by the value of old_passwords.

    If you set old_passwords=2, follow the instructions for using the sha256_password plugin at Section 6.3.7.4, “The SHA-256 Authentication Plugin”.

    For information about authentication plugins and hashing formats, see Section 6.3.7, “Pluggable Authentication”, and Section 6.1.2.4, “Password Hashing in MySQL”.

    Note

    Passwords that use the pre-4.1 hashing method are less secure than passwords that use the native password hashing method and should be avoided. Pre-4.1 passwords are deprecated and support for them will be removed in a future MySQL release.

  • open_files_limit

    Command-Line Format: --open-files-limit=#
    Option-File Format: open-files-limit
    System Variable Name: open_files_limit
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: numeric; Default: -1 (autosized); Range: 0 .. 65535

    The number of files that the operating system permits mysqld to open. The value of this variable at runtime is the real value permitted by the system and might be different from the value you specify at server startup. The value is 0 on systems where MySQL cannot change the number of open files.

    The effective open_files_limit value is based on the value specified at system startup (if any) and the values of max_connections and table_open_cache, using these formulas:

    1) 10 + max_connections + (table_open_cache * 2)
    2) max_connections * 5
    3) open_files_limit value specified at startup, 5000 if none

    The server attempts to obtain the number of file descriptors using the maximum of those three values. If that many descriptors cannot be obtained, the server attempts to obtain as many as the system will permit.
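
    As a worked example of those formulas: with max_connections set to 151, table_open_cache set to 2000, and no open_files_limit given at startup, the three candidate values are 10 + 151 + (2000 * 2) = 4161, 151 * 5 = 755, and 5000, so the server requests 5000 file descriptors.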

  • optimizer_prune_level

    Command-Line Format: --optimizer_prune_level[=#]
    Option-File Format: optimizer_prune_level
    System Variable Name: optimizer_prune_level
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: boolean; Default: 1

    Controls the heuristics applied during query optimization to prune less-promising partial plans from the optimizer search space. A value of 0 disables heuristics so that the optimizer performs an exhaustive search. A value of 1 causes the optimizer to prune plans based on the number of rows retrieved by intermediate plans.

  • optimizer_search_depth

    Command-Line Format: --optimizer_search_depth[=#]
    Option-File Format: optimizer_search_depth
    System Variable Name: optimizer_search_depth
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 62; Range: 0 .. 62

    The maximum depth of search performed by the query optimizer. Values larger than the number of relations in a query result in better query plans, but take longer to generate an execution plan. Values smaller than the number of relations return an execution plan more quickly, but the resulting plan may be far from optimal. If set to 0, the system automatically picks a reasonable value.

  • optimizer_switch

    Command-Line Format: --optimizer_switch=value
    Option-File Format: optimizer_switch
    System Variable Name: optimizer_switch
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: set
    Valid Values: batched_key_access={on|off}, block_nested_loop={on|off},
                  engine_condition_pushdown={on|off}, firstmatch={on|off},
                  index_condition_pushdown={on|off}, index_merge={on|off},
                  index_merge_intersection={on|off}, index_merge_sort_union={on|off},
                  index_merge_union={on|off}, loosescan={on|off}, materialization={on|off},
                  mrr={on|off}, mrr_cost_based={on|off}, semijoin={on|off},
                  subquery_materialization_cost_based={on|off}, use_index_extensions={on|off}

    The optimizer_switch system variable enables control over optimizer behavior. The value of this variable is a set of flags, each of which has a value of on or off to indicate whether the corresponding optimizer behavior is enabled or disabled. This variable has global and session values and can be changed at runtime. The global default can be set at server startup.

    To see the current set of optimizer flags, select the variable value:

    mysql> SELECT @@optimizer_switch\G
    *************************** 1. row ***************************
    @@optimizer_switch: index_merge=on,index_merge_union=on,
                        index_merge_sort_union=on,
                        index_merge_intersection=on,
                        engine_condition_pushdown=on,
                        index_condition_pushdown=on,
                        mrr=on,mrr_cost_based=on,
                        block_nested_loop=on,batched_key_access=off,
                        materialization=on,semijoin=on,loosescan=on,
                        firstmatch=on,
                        subquery_materialization_cost_based=on,
                        use_index_extensions=on
    

    For more information about the syntax of this variable and the optimizer behaviors that it controls, see Section 8.8.6.2, “Controlling Switchable Optimizations”.
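
    Individual flags can be toggled without listing the others; flags not named in the assignment keep their current values. A brief sketch:

    SET SESSION optimizer_switch = 'index_merge_union=off,mrr_cost_based=off';
    SET GLOBAL optimizer_switch = 'batched_key_access=on';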

  • optimizer_trace

    System Variable Name: optimizer_trace
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: string

    This variable controls optimizer tracing. For details, see MySQL Internals: Tracing the Optimizer.

  • optimizer_trace_features

    System Variable Name: optimizer_trace_features
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: string

    This variable enables or disables selected optimizer tracing features. For details, see MySQL Internals: Tracing the Optimizer.

  • optimizer_trace_limit

    System Variable Name: optimizer_trace_limit
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 1

    The maximum number of optimizer traces to display. For details, see MySQL Internals: Tracing the Optimizer.

  • optimizer_trace_max_mem_size

    System Variable Name: optimizer_trace_max_mem_size
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: 16384

    The maximum cumulative size of stored optimizer traces. For details, see MySQL Internals: Tracing the Optimizer.

  • optimizer_trace_offset

    System Variable Name: optimizer_trace_offset
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values: Type: numeric; Default: -1

    The offset of optimizer traces to display. For details, see MySQL Internals: Tracing the Optimizer.

  • performance_schema_xxx

    Performance Schema system variables are listed in Section 20.12, “Performance Schema System Variables”. These variables may be used to configure Performance Schema operation.

  • pid_file

    Command-Line Format: --pid-file=file_name
    Option-File Format: pid-file
    System Variable Name: pid_file
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: file name

    The path name of the process ID (PID) file. This variable can be set with the --pid-file option.

  • plugin_dir

    Command-Line Format: --plugin_dir=path
    Option-File Format: plugin_dir
    System Variable Name: plugin_dir
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: directory name; Default: BASEDIR/lib/plugin

    The path name of the plugin directory.

    If the plugin directory is writable by the server, it may be possible for a user to write executable code to a file in the directory using SELECT ... INTO DUMPFILE. This can be prevented by making plugin_dir read only to the server or by setting --secure-file-priv to a directory where SELECT writes can be made safely.

  • port

    Command-Line Format: --port=#, -P
    Option-File Format: port
    System Variable Name: port
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values: Type: numeric; Default: 3306; Range: 0 .. 65535

    The number of the port on which the server listens for TCP/IP connections. This variable can be set with the --port option.

  • preload_buffer_size

    Command-Line Format--preload_buffer_size=#
    Option-File Formatpreload_buffer_size
    System Variable Namepreload_buffer_size
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default32768
    Range1024 .. 1073741824

    The size of the buffer that is allocated when preloading indexes.

  • profiling

    If set to 0 or OFF (the default), statement profiling is disabled. If set to 1 or ON, statement profiling is enabled and the SHOW PROFILE and SHOW PROFILES statements provide access to profiling information. See Section 13.7.5.30, “SHOW PROFILES Syntax”.

    This variable is deprecated and will be removed in a future MySQL release.
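
    While the variable remains available, a typical session enables profiling, runs a statement, and then inspects the results. This is only an illustrative sketch; the query shown is arbitrary:

    SET profiling = 1;
    SELECT COUNT(*) FROM mysql.user;
    SHOW PROFILES;
    SHOW PROFILE FOR QUERY 1;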

  • profiling_history_size

    The number of statements for which to maintain profiling information if profiling is enabled. The default value is 15. The maximum value is 100. Setting the value to 0 effectively disables profiling. See Section 13.7.5.30, “SHOW PROFILES Syntax”.

    This variable is deprecated and will be removed in a future MySQL release.

  • protocol_version

    System Variable Nameprotocol_version
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typenumeric

    The version of the client/server protocol used by the MySQL server.

  • proxy_user

    System Variable Nameproxy_user
    Variable ScopeSession
    Dynamic VariableNo
     Permitted Values
    Typestring

    If the current client is a proxy for another user, this variable is the proxy user account name. Otherwise, this variable is NULL. See Section 6.3.8, “Proxy Users”.

  • pseudo_slave_mode

    System Variable Namepseudo_slave_mode
    Variable ScopeSession
    Dynamic VariableYes
     Permitted Values
    Typenumeric

    This variable is for internal server use.

  • pseudo_thread_id

    System Variable Namepseudo_thread_id
    Variable ScopeSession
    Dynamic VariableYes
     Permitted Values
    Typenumeric

    This variable is for internal server use.

  • query_alloc_block_size

    Command-Line Format--query_alloc_block_size=#
    Option-File Formatquery_alloc_block_size
    System Variable Namequery_alloc_block_size
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default8192
    Range1024 .. 4294967295
    Block Size1024
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default8192
    Range1024 .. 18446744073709547520
    Block Size1024

    The allocation size of memory blocks that are allocated for objects created during statement parsing and execution. If you have problems with memory fragmentation, it might help to increase this parameter.

  • query_cache_limit

    Command-Line Format--query_cache_limit=#
    Option-File Formatquery_cache_limit
    System Variable Namequery_cache_limit
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default1048576
    Range0 .. 4294967295
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default1048576
    Range0 .. 18446744073709547520

    Do not cache results that are larger than this number of bytes. The default value is 1MB.

  • query_cache_min_res_unit

    Command-Line Format--query_cache_min_res_unit=#
    Option-File Formatquery_cache_min_res_unit
    System Variable Namequery_cache_min_res_unit
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default4096
    Range512 .. 4294967295
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default4096
    Range512 .. 18446744073709547520

    The minimum size (in bytes) for blocks allocated by the query cache. The default value is 4096 (4KB). Tuning information for this variable is given in Section 8.9.3.3, “Query Cache Configuration”.

  • query_cache_size

    Command-Line Format--query_cache_size=#
    Option-File Formatquery_cache_size
    System Variable Namequery_cache_size
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default1048576
    Range0 .. 4294967295
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default1048576
    Range0 .. 18446744073709547520

    The amount of memory allocated for caching query results. By default, the query cache is disabled. This is achieved using a default size of 1M together with a default query_cache_type of 0. (To reduce overhead significantly, you should also start the server with query_cache_type=0 if you will not be using the query cache.)

    The permissible values are multiples of 1024; other values are rounded down to the nearest multiple. Note that query_cache_size bytes of memory are allocated even if query_cache_type is set to 0. See Section 8.9.3.3, “Query Cache Configuration”, for more information.

    The query cache needs a minimum size of about 40KB to allocate its structures. (The exact size depends on system architecture.) If you set the value of query_cache_size too small, a warning will occur, as described in Section 8.9.3.3, “Query Cache Configuration”.
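
    As an illustrative option-file sketch, the query cache can be disabled completely at startup:

    [mysqld]
    query_cache_type = 0
    query_cache_size = 0

    Because the variable is dynamic, it can also be resized at runtime, for example with SET GLOBAL query_cache_size = 1048576; (this value is already a multiple of 1024, so no rounding occurs).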

  • query_cache_type

    Command-Line Format--query_cache_type=#
    Option-File Formatquery_cache_type
    System Variable Namequery_cache_type
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeenumeration
    Default0
    Valid Values0
    1
    2

    Set the query cache type. Setting the GLOBAL value sets the type for all clients that connect thereafter. Individual clients can set the SESSION value to affect their own use of the query cache. Possible values are shown in the following table.

    OptionDescription
    0 or OFFDo not cache results in or retrieve results from the query cache. Note that this does not deallocate the query cache buffer. To do that, you should set query_cache_size to 0.
    1 or ONCache all cacheable query results except for those that begin with SELECT SQL_NO_CACHE.
    2 or DEMANDCache results only for cacheable queries that begin with SELECT SQL_CACHE.

    This variable defaults to OFF.

    If the server is started with query_cache_type set to 0, it does not acquire the query cache mutex at all, which means that the query cache cannot be enabled at runtime and there is reduced overhead in query execution.
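
    Assuming the query cache was enabled at server startup, an illustrative session might use DEMAND mode together with the SQL_CACHE and SQL_NO_CACHE modifiers (the customer table here is hypothetical):

    SET SESSION query_cache_type = DEMAND;
    SELECT SQL_CACHE name FROM customer WHERE id = 42;
    SELECT SQL_NO_CACHE COUNT(*) FROM customer;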

  • query_cache_wlock_invalidate

    Command-Line Format--query_cache_wlock_invalidate
    Option-File Formatquery_cache_wlock_invalidate
    System Variable Namequery_cache_wlock_invalidate
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultFALSE

    Normally, when one client acquires a WRITE lock on a MyISAM table, other clients are not blocked from issuing statements that read from the table if the query results are present in the query cache. Setting this variable to 1 causes acquisition of a WRITE lock for a table to invalidate any queries in the query cache that refer to the table. This forces other clients that attempt to access the table to wait while the lock is in effect.

  • query_prealloc_size

    Command-Line Format--query_prealloc_size=#
    Option-File Formatquery_prealloc_size
    System Variable Namequery_prealloc_size
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default8192
    Range8192 .. 4294967295
    Block Size1024
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default8192
    Range8192 .. 18446744073709547520
    Block Size1024

    The size of the persistent buffer used for statement parsing and execution. This buffer is not freed between statements. If you are running complex queries, a larger query_prealloc_size value might be helpful in improving performance, because it can reduce the need for the server to perform memory allocation during query execution operations.

  • rand_seed1

    The rand_seed1 and rand_seed2 variables exist as session variables only, and can be set but not read. The variables—but not their values—are shown in the output of SHOW VARIABLES.

    The purpose of these variables is to support replication of the RAND() function. For statements that invoke RAND(), the master passes two values to the slave, where they are used to seed the random number generator. The slave uses these values to set the session variables rand_seed1 and rand_seed2 so that RAND() on the slave generates the same value as on the master.

  • rand_seed2

    See the description for rand_seed1.

  • range_alloc_block_size

    Command-Line Format--range_alloc_block_size=#
    Option-File Formatrange_alloc_block_size
    System Variable Namerange_alloc_block_size
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default4096
    Range4096 .. 4294967295
    Block Size1024
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default4096
    Range4096 .. 18446744073709547520
    Block Size1024

    The size of blocks that are allocated when doing range optimization.

  • read_buffer_size

    Command-Line Format--read_buffer_size=#
    Option-File Formatread_buffer_size
    System Variable Nameread_buffer_size
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default131072
    Range8200 .. 2147479552

    Each thread that does a sequential scan for a MyISAM table allocates a buffer of this size (in bytes) for each table it scans. If you do many sequential scans, you might want to increase this value, which defaults to 131072. The value of this variable should be a multiple of 4KB. If it is set to a value that is not a multiple of 4KB, its value will be rounded down to the nearest multiple of 4KB.

    This option is also used in the following contexts for all storage engines:

    • For caching the indexes in a temporary file (not a temporary table), when sorting rows for ORDER BY.

    • For bulk insert into partitions.

    • For caching results of nested queries.

    and in one other storage engine-specific way: to determine the memory block size for MEMORY tables.

    The maximum permissible setting for read_buffer_size is 2GB.

    For more information about memory use during different operations, see Section 8.11.4.1, “How MySQL Uses Memory”.

  • read_only

    Command-Line Format--read-only
    Option-File Formatread_only
    System Variable Nameread_only
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    Defaultfalse

    This variable is off by default. When it is enabled, the server permits no updates except from users that have the SUPER privilege or (on a slave server) from updates performed by slave threads. In replication setups, it can be useful to enable read_only on slave servers to ensure that slaves accept updates only from the master server and not from clients.

    read_only does not apply to TEMPORARY tables, nor does it prevent the server from inserting rows into the log tables (see Section 5.2.1, “Selecting General Query and Slow Query Log Output Destinations”). This variable does not prevent the use of ANALYZE TABLE or OPTIMIZE TABLE statements because its purpose is to prevent changes to table structure or contents. Analysis and optimization do not qualify as such changes. This means, for example, that consistency checks on read-only slaves can be performed with mysqlcheck --all-databases --analyze.

    read_only exists only as a GLOBAL variable, so changes to its value require the SUPER privilege. Changes to read_only on a master server are not replicated to slave servers. The value can be set on a slave server independent of the setting on the master.

    Important

    In MySQL 5.7, enabling read_only prevents the use of the SET PASSWORD statement by any user not having the SUPER privilege. This is not necessarily the case for all MySQL release series. When replicating from one MySQL release series to another (for example, from a MySQL 5.0 master to a MySQL 5.1 or later slave), you should check the documentation for the versions running on both master and slave to determine whether the behavior of read_only in this regard is or is not the same, and, if it is different, whether this has an impact on your applications.

    The following conditions apply:

    • If you attempt to enable read_only while you have any explicit locks (acquired with LOCK TABLES) or have a pending transaction, an error occurs.

    • If you attempt to enable read_only while other clients hold explicit table locks or have pending transactions, the attempt blocks until the locks are released and the transactions end. While the attempt to enable read_only is pending, requests by other clients for table locks or to begin transactions also block until read_only has been set.

    • read_only can be enabled while you hold a global read lock (acquired with FLUSH TABLES WITH READ LOCK) because that does not involve table locks.

    In MySQL 5.7, attempts to set read_only block for active transactions that hold metadata locks until those transactions end.
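
    For example, an administrator with the SUPER privilege might enable the variable on a slave and verify the setting as follows (a minimal sketch):

    SET GLOBAL read_only = ON;
    SHOW GLOBAL VARIABLES LIKE 'read_only';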

  • read_rnd_buffer_size

    Command-Line Format--read_rnd_buffer_size=#
    Option-File Formatread_rnd_buffer_size
    System Variable Nameread_rnd_buffer_size
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default262144
    Range8200 .. 4294967295

    When reading rows from a MyISAM table in sorted order following a key-sorting operation, the rows are read through this buffer to avoid disk seeks. See Section 8.2.1.15, “ORDER BY Optimization”. Setting the variable to a large value can improve ORDER BY performance considerably. However, this buffer is allocated for each client, so you should not set the global variable to a large value. Instead, change the session variable only from within those clients that need to run large queries.

    The maximum permissible setting for read_rnd_buffer_size is 2GB.

    For more information about memory use during different operations, see Section 8.11.4.1, “How MySQL Uses Memory”.

  • relay_log_purge

    Command-Line Format--relay_log_purge
    Option-File Formatrelay_log_purge
    System Variable Namerelay_log_purge
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultTRUE

    Disables or enables automatic purging of relay log files as soon as they are not needed any more. The default value is 1 (ON).

  • relay_log_space_limit

    Command-Line Format--relay_log_space_limit=#
    Option-File Formatrelay_log_space_limit
    System Variable Namerelay_log_space_limit
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default0
    Range0 .. 4294967295
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default0
    Range0 .. 18446744073709547520

    The maximum amount of space to use for all relay logs.

  • report_host

    Command-Line Format--report-host=host_name
    Option-File Formatreport-host
    System Variable Namereport_host
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typestring

    The value of the --report-host option.

  • report_password

    Command-Line Format--report-password=name
    Option-File Formatreport-password
    System Variable Namereport_password
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typestring

    The value of the --report-password option. Not the same as the password used for the MySQL replication user account.

  • report_port

    Command-Line Format--report-port=#
    Option-File Formatreport-port
    System Variable Namereport_port
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typenumeric
    Default[slave_port]
    Range0 .. 65535

    The value of the --report-port option.

  • report_user

    Command-Line Format--report-user=name
    Option-File Formatreport-user
    System Variable Namereport_user
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typestring

    The value of the --report-user option. Not the same as the name for the MySQL replication user account.

  • rpl_semi_sync_master_enabled

    System Variable Namerpl_semi_sync_master_enabled
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    Controls whether semisynchronous replication is enabled on the master. To enable or disable the plugin, set this variable to ON or OFF (or 1 or 0), respectively. The default is OFF.

    This variable is available only if the master-side semisynchronous replication plugin is installed.
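
    As an illustrative sequence on Unix (the plugin library file name differs on Windows), the master-side plugin might be installed and then enabled like this:

    INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
    SET GLOBAL rpl_semi_sync_master_enabled = ON;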

  • rpl_semi_sync_master_timeout

    System Variable Namerpl_semi_sync_master_timeout
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default10000

    A value in milliseconds that controls how long the master waits on a commit for acknowledgment from a slave before timing out and reverting to asynchronous replication. The default value is 10000 (10 seconds).

    This variable is available only if the master-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_master_trace_level

    System Variable Namerpl_semi_sync_master_trace_level
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default32

    The semisynchronous replication debug trace level on the master. Currently, four levels are defined:

    • 1 = general level (for example, time function failures)

    • 16 = detail level (more verbose information)

    • 32 = net wait level (more information about network waits)

    • 64 = function level (information about function entry and exit)

    This variable is available only if the master-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_master_wait_no_slave

    System Variable Namerpl_semi_sync_master_wait_no_slave
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultON

    With semisynchronous replication, for each transaction, the master waits until timeout for acknowledgment of receipt from some semisynchronous slave. If no response occurs during this period, the master reverts to normal replication. This variable controls whether the master waits for the timeout to expire before reverting to normal replication even if the slave count drops to zero during the timeout period.

    If the value is ON (the default), it is permissible for the slave count to drop to zero during the timeout period (for example, if slaves disconnect). The master still waits for the timeout, so as long as some slave reconnects and acknowledges the transaction within the timeout interval, semisynchronous replication continues.

    If the value is OFF, the master reverts to normal replication if the slave count drops to zero during the timeout period.

    This variable is available only if the master-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_master_wait_point

    Introduced5.7.2
    System Variable Namerpl_semi_sync_master_wait_point
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeenumeration
    DefaultAFTER_SYNC
    Valid ValuesAFTER_SYNC
    AFTER_COMMIT

    This variable controls the point at which a semisynchronous replication master waits for slave acknowledgment of transaction receipt before returning a status to the client that committed the transaction. These values are permitted:

    • AFTER_SYNC (the default): The master writes each transaction to its binary log and the slave, and syncs the binary log to disk. The master waits for slave acknowledgment of transaction receipt after the sync. Upon receiving acknowledgment, the master commits the transaction to the storage engine and returns a result to the client, which then can proceed.

    • AFTER_COMMIT: The master writes each transaction to its binary log and the slave, syncs the binary log, and commits the transaction to the storage engine. The master waits for slave acknowledgment of transaction receipt after the commit. Upon receiving acknowledgment, the master returns a result to the client, which then can proceed.

    The replication characteristics of these settings differ as follows:

    • With AFTER_SYNC, all clients see the committed transaction at the same time: After it has been acknowledged by the slave and committed to the storage engine on the master. Thus, all clients see the same data on the master.

      In the event of master failure, all transactions committed on the master have been replicated to the slave (saved to its relay log). A crash of the master and failover to the slave is lossless because the slave is up to date.

    • With AFTER_COMMIT, the client issuing the transaction gets a return status only after the server commits to the storage engine and receives slave acknowledgement. After the commit and before slave acknowledgment, other clients can see the committed transaction before the committing client.

      If something goes wrong such that the slave does not process the transaction, then in the event of a master crash and failover to the slave, it is possible that such clients will see a loss of data relative to what they saw on the master.

    This variable is available only if the master-side semisynchronous replication plugin is installed.

    rpl_semi_sync_master_wait_point was added in MySQL 5.7.2. For older versions, semisynchronous master behavior is equivalent to a setting of AFTER_COMMIT.

    This change introduces a version compatibility constraint because it increments the semisynchronous interface version: Servers for MySQL 5.7.2 and up do not work with semisynchronous replication plugins from older versions, nor do servers from older versions work with semisynchronous replication plugins for MySQL 5.7.2 and up.

  • rpl_semi_sync_slave_enabled

    System Variable Namerpl_semi_sync_slave_enabled
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    Controls whether semisynchronous replication is enabled on the slave. To enable or disable the plugin, set this variable to ON or OFF (or 1 or 0), respectively. The default is OFF.

    This variable is available only if the slave-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_slave_trace_level

    System Variable Namerpl_semi_sync_slave_trace_level
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default32

    The semisynchronous replication debug trace level on the slave. See rpl_semi_sync_master_trace_level for the permissible values.

    This variable is available only if the slave-side semisynchronous replication plugin is installed.

  • secure_auth

    Command-Line Format--secure-auth
    Option-File Formatsecure-auth
    System Variable Namesecure_auth
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultON

    If this variable is enabled, the server blocks connections by clients that attempt to use accounts that have passwords stored in the old (pre-4.1) format.

    Enable this variable to prevent all use of passwords employing the old format (and hence insecure communication over the network). This variable is enabled by default.

    Server startup fails with an error if this variable is enabled and the privilege tables are in pre-4.1 format. See Section C.5.2.4, “Client does not support authentication protocol”.

    Note

    Passwords that use the pre-4.1 hashing method are less secure than passwords that use the native password hashing method and should be avoided. Pre-4.1 passwords are deprecated and support for them will be removed in a future MySQL release.

  • secure_file_priv

    Command-Line Format--secure-file-priv=path
    Option-File Formatsecure-file-priv
    System Variable Namesecure_file_priv
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typestring

    By default, this variable is empty. If set to the name of a directory, it limits the effect of the LOAD_FILE() function and the LOAD DATA and SELECT ... INTO OUTFILE statements to work only with files in that directory.
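
    An illustrative option-file setting (the directory shown is only an example) restricts these file operations to a single directory:

    [mysqld]
    secure-file-priv = /var/lib/mysql-files

    With such a setting, a statement such as SELECT ... INTO OUTFILE succeeds only for files under that directory.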

  • server_id

    Command-Line Format--server-id=#
    Option-File Formatserver-id
    System Variable Nameserver_id
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default0
    Range0 .. 4294967295

    The server ID, used in replication to give each master and slave a unique identity. This variable is set by the --server-id option. For each server participating in replication, you should pick a positive integer in the range from 1 to 2^32 – 1 to act as that server's ID.
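
    For example, a master and its slave might use option-file entries such as the following (the values 1 and 2 are arbitrary, as long as each ID is unique within the replication topology):

    # master
    [mysqld]
    server-id = 1

    # slave
    [mysqld]
    server-id = 2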

  • sha256_password_private_key_path

    System Variable Namesha256_password_private_key_path
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name
    Defaultprivate_key.pem

    The path name of the RSA private key file for the sha256_password authentication plugin. If the file is named as a relative path, it is interpreted relative to the server data directory. The file must be in PEM format. Because this file stores a private key, its access mode should be restricted so that only the MySQL server can read it.

    For information about sha256_password, including instructions for creating the RSA key files, see Section 6.3.7.4, “The SHA-256 Authentication Plugin”.

    This variable is available only if MySQL was built using OpenSSL.

  • sha256_password_public_key_path

    System Variable Namesha256_password_public_key_path
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name
    Defaultpublic_key.pem

    The path name of the RSA public key file for the sha256_password authentication plugin. If the file is named as a relative path, it is interpreted relative to the server data directory. The file must be in PEM format. Because this file stores a public key, copies can be freely distributed to client users. (Clients that explicitly specify a public key when connecting to the server using RSA password encryption must use the same public key as that used by the server.)

    For information about sha256_password, including instructions for creating the RSA key files and how clients specify the RSA public key, see Section 6.3.7.4, “The SHA-256 Authentication Plugin”.

    This variable is available only if MySQL was built using OpenSSL.

  • shared_memory

    System Variable Nameshared_memory
    Variable ScopeGlobal
    Dynamic VariableNo
    Platform Specificwindows

    (Windows only.) Whether the server permits shared-memory connections.

  • shared_memory_base_name

    System Variable Nameshared_memory_base_name
    Variable ScopeGlobal
    Dynamic VariableNo
    Platform Specificwindows

    (Windows only.) The name of shared memory to use for shared-memory connections. This is useful when running multiple MySQL instances on a single physical machine. The default name is MYSQL. The name is case sensitive.

  • skip_external_locking

    Command-Line Format--skip-external-locking
    Option-File Formatskip_external_locking
    System Variable Nameskip_external_locking
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typeboolean
    DefaultON

    This is OFF if mysqld uses external locking (system locking), ON if external locking is disabled. This affects only MyISAM table access.

    This variable is set by the --external-locking or --skip-external-locking option. External locking has been disabled by default as of MySQL 4.0.

    External locking affects only MyISAM table access. For more information, including conditions under which it can and cannot be used, see Section 8.10.5, “External Locking”.

  • skip_name_resolve

    Command-Line Format--skip-name-resolve
    Option-File Formatskip-name-resolve
    System Variable Nameskip_name_resolve
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typeboolean
    DefaultOFF

    This variable is set from the value of the --skip-name-resolve option. If it is OFF, mysqld resolves host names when checking client connections. If it is ON, mysqld uses only IP numbers, and all Host column values in the grant tables must be IP addresses or localhost. See Section 8.11.5.2, “DNS Lookup Optimization and the Host Cache”.

  • skip_networking

    Command-Line Format--skip-networking
    Option-File Formatskip-networking
    System Variable Nameskip_networking
    Variable ScopeGlobal
    Dynamic VariableNo

    This is ON if the server permits only local (non-TCP/IP) connections. On Unix, local connections use a Unix socket file. On Windows, local connections use a named pipe or shared memory. This variable can be set to ON with the --skip-networking option.

  • skip_show_database

    Command-Line Format--skip-show-database
    Option-File Formatskip-show-database
    System Variable Nameskip_show_database
    Variable ScopeGlobal
    Dynamic VariableNo

    This prevents people from using the SHOW DATABASES statement if they do not have the SHOW DATABASES privilege. This can improve security if you have concerns about users being able to see databases belonging to other users. Its effect depends on the SHOW DATABASES privilege: If the variable value is ON, the SHOW DATABASES statement is permitted only to users who have the SHOW DATABASES privilege, and the statement displays all database names. If the value is OFF, SHOW DATABASES is permitted to all users, but displays the names of only those databases for which the user has the SHOW DATABASES or other privilege. (Note that any global privilege is considered a privilege for the database.)

  • slow_launch_time

    Command-Line Format--slow_launch_time=#
    Option-File Formatslow_launch_time
    System Variable Nameslow_launch_time
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default2

    If creating a thread takes longer than this many seconds, the server increments the Slow_launch_threads status variable.

  • slow_query_log

    Command-Line Format--slow-query-log
    Option-File Formatslow-query-log
    System Variable Nameslow_query_log
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    Whether the slow query log is enabled. The value can be 0 (or OFF) to disable the log or 1 (or ON) to enable the log. The default value depends on whether the --slow_query_log option is given. The destination for log output is controlled by the log_output system variable; if that value is NONE, no log entries are written even if the log is enabled.

    Slow is determined by the value of the long_query_time variable. See Section 5.2.5, “The Slow Query Log”.
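
    An illustrative runtime configuration (the log file path is only an example) enables the log and lowers the threshold for what counts as a slow statement:

    SET GLOBAL slow_query_log = ON;
    SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
    SET GLOBAL long_query_time = 2;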

  • slow_query_log_file

    Command-Line Format--slow-query-log-file=file_name
    Option-File Formatslow_query_log_file
    System Variable Nameslow_query_log_file
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typefile name

    The name of the slow query log file. The default value is host_name-slow.log, but the initial value can be changed with the --slow_query_log_file option.

  • socket

    Command-Line Format--socket=name
    Option-File Formatsocket
    System Variable Namesocket
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name
    Default/tmp/mysql.sock

    On Unix platforms, this variable is the name of the socket file that is used for local client connections. The default is /tmp/mysql.sock. (For some distribution formats, the directory might be different, such as /var/lib/mysql for RPMs.)

    On Windows, this variable is the name of the named pipe that is used for local client connections. The default value is MySQL (not case sensitive).

  • sort_buffer_size

    Command-Line Format--sort_buffer_size=#
    Option-File Formatsort_buffer_size
    System Variable Namesort_buffer_size
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default262144
    Range32768 .. 4294967295
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default262144
    Range32768 .. 18446744073709547520

    Each session that needs to do a sort allocates a buffer of this size. sort_buffer_size is not specific to any storage engine and applies in a general manner for optimization. See Section 8.2.1.15, “ORDER BY Optimization”, for example.

    If you see many Sort_merge_passes per second in SHOW GLOBAL STATUS output, you can consider increasing the sort_buffer_size value to speed up ORDER BY or GROUP BY operations that cannot be improved with query optimization or improved indexing.

    The optimizer tries to work out how much space is needed but can allocate more, up to the limit. Setting it larger than required globally will slow down most queries that sort. It is best to increase it as a session setting, and only for the sessions that need a larger size. On Linux, there are thresholds of 256KB and 2MB where larger values may significantly slow down memory allocation, so you should consider staying below one of those values. Experiment to find the best value for your workload. See Section C.5.4.4, “Where MySQL Stores Temporary Files”.

    The maximum permissible setting for sort_buffer_size is 4GB. Values larger than 4GB are permitted for 64-bit platforms (except 64-bit Windows, for which large values are truncated to 4GB with a warning).
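
    In line with the advice above, an illustrative session raises the buffer only for its own large sort and then restores the default (big_table and its columns are hypothetical):

    SET SESSION sort_buffer_size = 2 * 1024 * 1024;
    SELECT col1 FROM big_table ORDER BY col2;
    SET SESSION sort_buffer_size = DEFAULT;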

  • sql_auto_is_null

    System Variable Namesql_auto_is_null
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    Default0

    If this variable is set to 1, then after a statement that successfully inserts an automatically generated AUTO_INCREMENT value, you can find that value by issuing a statement of the following form:

    SELECT * FROM tbl_name WHERE auto_col IS NULL
    

    If the statement returns a row, the value returned is the same as if you invoked the LAST_INSERT_ID() function. For details, including the return value after a multiple-row insert, see Section 12.14, “Information Functions”. If no AUTO_INCREMENT value was successfully inserted, the SELECT statement returns no row.

    The behavior of retrieving an AUTO_INCREMENT value by using an IS NULL comparison is used by some ODBC programs, such as Access. See Section 21.1.7.1.1, “Obtaining Auto-Increment Values”. This behavior can be disabled by setting sql_auto_is_null to 0.

    The default value of sql_auto_is_null is 0 in MySQL 5.7.

  • sql_big_selects

    System Variable Namesql_big_selects
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    Default1

    If set to 0, MySQL aborts SELECT statements that are likely to take a very long time to execute (that is, statements for which the optimizer estimates that the number of examined rows exceeds the value of max_join_size). This is useful when a statement with an inadvisable WHERE clause has been issued. The default value for a new connection is 1, which permits all SELECT statements.

    If you set the max_join_size system variable to a value other than DEFAULT, sql_big_selects is set to 0.

  • sql_buffer_result

    System Variable Namesql_buffer_result
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    Default0

    If set to 1, sql_buffer_result forces results from SELECT statements to be put into temporary tables. This helps MySQL free the table locks early and can be beneficial in cases where it takes a long time to send results to the client. The default value is 0.

  • sql_log_bin

    System Variable Namesql_log_bin
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeboolean

    This variable controls whether logging to the binary log is done. The default value is 1 (do logging). To change logging for the current session, change the session value of this variable. The session user must have the SUPER privilege to set this variable.

    In MySQL 5.7, it is not possible to set @@session.sql_log_bin within a transaction or subquery. (Bug #53437)

  • sql_log_off

    System Variable Namesql_log_off
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    Default0

    This variable controls whether logging to the general query log is done. The default value is 0 (do logging). To change logging for the current session, change the session value of this variable. The session user must have the SUPER privilege to set this option.

  • sql_mode

    Command-Line Format--sql-mode=name
    Option-File Formatsql-mode
    System Variable Namesql_mode
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeset
    DefaultNO_ENGINE_SUBSTITUTION
    Valid ValuesALLOW_INVALID_DATES
    ANSI_QUOTES
    ERROR_FOR_DIVISION_BY_ZERO
    HIGH_NOT_PRECEDENCE
    IGNORE_SPACE
    NO_AUTO_CREATE_USER
    NO_AUTO_VALUE_ON_ZERO
    NO_BACKSLASH_ESCAPES
    NO_DIR_IN_CREATE
    NO_ENGINE_SUBSTITUTION
    NO_FIELD_OPTIONS
    NO_KEY_OPTIONS
    NO_TABLE_OPTIONS
    NO_UNSIGNED_SUBTRACTION
    NO_ZERO_DATE
    NO_ZERO_IN_DATE
    ONLY_FULL_GROUP_BY
    PAD_CHAR_TO_FULL_LENGTH
    PIPES_AS_CONCAT
    REAL_AS_FLOAT
    STRICT_ALL_TABLES
    STRICT_TRANS_TABLES

    The current server SQL mode, which can be set dynamically. See Section 5.1.7, “Server SQL Modes”.

    Note

    MySQL installation programs may configure the SQL mode during the installation process. For example, mysql_install_db creates a default option file named my.cnf in the base installation directory. This file contains a line that sets the SQL mode; see Section 4.4.3, “mysql_install_db — Initialize MySQL Data Directory”.

    If the SQL mode differs from the default or from what you expect, check for a setting in an option file that the server reads at startup.

  • sql_notes

    If set to 1 (the default), warnings of Note level increment warning_count and the server records them. If set to 0, Note warnings do not increment warning_count and the server does not record them. mysqldump includes output to set this variable to 0 so that reloading the dump file does not produce warnings for events that do not affect the integrity of the reload operation.

  • sql_quote_show_create

    If set to 1 (the default), the server quotes identifiers for SHOW CREATE TABLE and SHOW CREATE DATABASE statements. If set to 0, quoting is disabled. This option is enabled by default so that replication works for identifiers that require quoting. See Section 13.7.5.10, “SHOW CREATE TABLE Syntax”, and Section 13.7.5.6, “SHOW CREATE DATABASE Syntax”.

  • sql_safe_updates

    If set to 1, MySQL aborts UPDATE or DELETE statements that do not use a key in the WHERE clause or a LIMIT clause. This makes it possible to catch UPDATE or DELETE statements where keys are not used properly and that would probably change or delete a large number of rows. The default value is 0.
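
    For example, with safe updates enabled, a blanket UPDATE against a hypothetical customer table is rejected, while a keyed UPDATE succeeds:

    SET SESSION sql_safe_updates = 1;
    UPDATE customer SET status = 'archived';                -- rejected: no key in WHERE and no LIMIT
    UPDATE customer SET status = 'archived' WHERE id = 42;  -- accepted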

  • sql_select_limit

    System Variable Namesql_select_limit
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typenumeric

    The maximum number of rows to return from SELECT statements. The default value for a new connection is the maximum number of rows that the server permits per table. Typical default values are (2^32) – 1 or (2^64) – 1. If you have changed the limit, the default value can be restored by assigning a value of DEFAULT.

    If a SELECT has a LIMIT clause, the LIMIT takes precedence over the value of sql_select_limit.

  • sql_warnings

    This variable controls whether single-row INSERT statements produce an information string if warnings occur. The default is 0. Set the value to 1 to produce an information string.

  • ssl_ca

    Command-Line Format--ssl-ca=name
    Option-File Formatssl-ca
    System Variable Namessl_ca
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    The path to a file with a list of trusted SSL CAs.

  • ssl_capath

    Command-Line Format--ssl-capath=name
    Option-File Formatssl-capath
    System Variable Namessl_capath
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typedirectory name

    The path to a directory that contains trusted SSL CA certificates in PEM format.

  • ssl_cert

    Command-Line Format--ssl-cert=name
    Option-File Formatssl-cert
    System Variable Namessl_cert
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    The name of the SSL certificate file to use for establishing a secure connection.

  • ssl_cipher

    Command-Line Format--ssl-cipher=name
    Option-File Formatssl-cipher
    System Variable Namessl_cipher
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    A list of permissible ciphers to use for SSL encryption.

  • ssl_crl

    Command-Line Format--ssl-crl=name
    Option-File Formatssl-crl
    System Variable Namessl_crl
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    The path to a file containing certificate revocation lists in PEM format. Revocation lists work for MySQL distributions compiled against OpenSSL (but not yaSSL).

  • ssl_crlpath

    Command-Line Format--ssl-crlpath=name
    Option-File Formatssl-crlpath
    System Variable Namessl_crlpath
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typedirectory name

    The path to a directory that contains files containing certificate revocation lists in PEM format. Revocation lists work for MySQL distributions compiled against OpenSSL (but not yaSSL).

  • ssl_key

    Command-Line Format--ssl-key=name
    Option-File Formatssl-key
    System Variable Namessl_key
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typestring

    The name of the SSL key file to use for establishing a secure connection.
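
    A typical option-file sketch (the certificate paths are examples only) configures ssl_ca, ssl_cert, and ssl_key together:

    [mysqld]
    ssl-ca   = /etc/mysql/certs/ca.pem
    ssl-cert = /etc/mysql/certs/server-cert.pem
    ssl-key  = /etc/mysql/certs/server-key.pem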

  • storage_engine

    System Variable Namestorage_engine
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typeenumeration
    DefaultInnoDB

    The default storage engine (table type). To set the storage engine at server startup, use the --default-storage-engine option. See Section 5.1.3, “Server Command Options”.

    This variable is deprecated. Use default_storage_engine instead.

  • stored_program_cache

    Command-Line Format--stored-program-cache=#
    Option-File Formatstored_program_cache
    System Variable Namestored_program_cache
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default256
    Range256 .. 524288

    Sets a soft upper limit for the number of cached stored routines per connection. The value of this variable is specified in terms of the number of stored routines held in each of the two caches maintained by the MySQL Server for, respectively, stored procedures and stored functions.

    Whenever a stored routine is executed, this cache size is checked before the first or top-level statement in the routine is parsed; if the number of routines of the same type (stored procedures or stored functions, according to which is being executed) exceeds the limit specified by this variable, the corresponding cache is flushed and memory previously allocated for cached objects is freed. This allows the cache to be flushed safely, even when there are dependencies between stored routines.

  • sync_frm

    Command-Line Format--sync-frm
    Option-File Formatsync_frm
    System Variable Namesync_frm
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultTRUE

    If this variable is set to 1, when any nontemporary table is created its .frm file is synchronized to disk (using fdatasync()). This is slower but safer in case of a crash. The default is 1.

  • system_time_zone

    System Variable Namesystem_time_zone
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typestring

    The server system time zone. When the server begins executing, it inherits a time zone setting from the machine defaults, possibly modified by the environment of the account used for running the server or the startup script. The value is used to set system_time_zone. Typically the time zone is specified by the TZ environment variable. It also can be specified using the --timezone option of the mysqld_safe script.

    The system_time_zone variable differs from time_zone. Although they might have the same value, the latter variable is used to initialize the time zone for each client that connects. See Section 10.6, “MySQL Server Time Zone Support”.

  • table_definition_cache

    System Variable Nametable_definition_cache
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default-1 (autosized)
    Range400 .. 524288

    The number of table definitions (from .frm files) that can be stored in the definition cache. If you use a large number of tables, you can create a large table definition cache to speed up opening of tables. The table definition cache takes less space and does not use file descriptors, unlike the normal table cache. The minimum value is 400. The default value is based on the following formula, capped to a limit of 2000:

    400 + (table_open_cache / 2)

    For InnoDB, table_definition_cache acts as a soft limit for the number of open table instances in the InnoDB data dictionary cache. If the number of open table instances exceeds the table_definition_cache setting, the LRU mechanism begins to mark table instances for eviction and eventually removes them from the data dictionary cache. The limit helps address situations in which significant amounts of memory would be used to cache rarely used table instances until the next server restart. Table instances with foreign key relationships are not placed on the LRU list and are not subject to eviction.

    Additionally, table_definition_cache defines a soft limit for the number of InnoDB file-per-table tablespaces that can be open at one time, which is also controlled by innodb_open_files. If both table_definition_cache and innodb_open_files are set, the highest setting is used. If neither variable is set, table_definition_cache, which has a higher default value, is used. If the number of open tablespace file handles exceeds the limit defined by table_definition_cache or innodb_open_files, the LRU mechanism searches the tablespace file LRU list for files that are fully flushed and are not currently being extended. This process is performed each time a new tablespace is opened. If there are no inactive tablespaces, no tablespace files are closed.

  • table_open_cache

    System Variable Nametable_open_cache
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default2000
    Range400 .. 524288

    The number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. You can check whether you need to increase the table cache by checking the Opened_tables status variable. See Section 5.1.6, “Server Status Variables”. If the value of Opened_tables is large and you do not use FLUSH TABLES often (which just forces all tables to be closed and reopened), then you should increase the value of the table_open_cache variable. For more information about the table cache, see Section 8.4.3.1, “How MySQL Opens and Closes Tables”.
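
    For example, you might compare Opened_tables against the current cache size before increasing it at runtime (a sketch only; choose a value appropriate to your file descriptor limits):

    SHOW GLOBAL STATUS LIKE 'Opened_tables';
    SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
    SET GLOBAL table_open_cache = 4000;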

  • table_open_cache_instances

    System Variable Nametable_open_cache_instances
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typenumeric
    Default1

    The number of open tables cache instances (default 1). To improve scalability by reducing contention among sessions, the open tables cache can be partitioned into several smaller cache instances of size table_open_cache / table_open_cache_instances. A session needs to lock only one instance to access it for DML statements. This segments cache access among instances, permitting higher performance for operations that need to use the cache when there are many sessions accessing tables. (DDL statements still require a lock on the entire cache, but such statements are much less frequent than DML statements.)

    A value of 8 or 16 is recommended on systems that routinely use 16 or more cores.

  • thread_cache_size

    Command-Line Format--thread_cache_size=#
    Option-File Formatthread_cache_size
    System Variable Namethread_cache_size
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Default-1 (autosized)
    Range0 .. 16384

    How many threads the server should cache for reuse. When a client disconnects, the client's threads are put in the cache if there are fewer than thread_cache_size threads there. Requests for threads are satisfied by reusing threads taken from the cache if possible, and only when the cache is empty is a new thread created. This variable can be increased to improve performance if you have a lot of new connections. Normally, this does not provide a notable performance improvement if you have a good thread implementation. However, if your server sees hundreds of connections per second you should normally set thread_cache_size high enough so that most new connections use cached threads. By examining the difference between the Connections and Threads_created status variables, you can see how efficient the thread cache is. For details, see Section 5.1.6, “Server Status Variables”.

    The default value is based on the following formula, capped to a limit of 100:

    8 + (max_connections / 100)

    This variable has no effect for the embedded server (libmysqld) and as of MySQL 5.7.2 is no longer visible within the embedded server.

  • thread_concurrency

    Deprecated5.6.1
    Removed5.7.2
    Command-Line Format--thread_concurrency=#
    Option-File Formatthread_concurrency
    System Variable Namethread_concurrency
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typenumeric
    Default10
    Range1 .. 512

    This variable is specific to Solaris 8 and earlier systems, for which mysqld invokes the thr_setconcurrency() function with the variable value. This function enables applications to give the threads system a hint about the desired number of threads that should be run at the same time. Current Solaris versions document this as having no effect.

    This variable was removed in MySQL 5.7.2.

  • thread_handling

    Command-Line Format--thread_handling=name
    Option-File Formatthread_handling
    System Variable Namethread_handling
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typeenumeration
    Valid Valuesno-threads
    one-thread-per-connection
    dynamically-loaded

    The thread-handling model used by the server for connection threads. The permissible values are no-threads (the server uses a single thread) and one-thread-per-connection (the server uses one thread to handle each client connection). no-threads is useful for debugging under Linux; see Section 22.4, “Debugging and Porting MySQL”.

    This variable has no effect for the embedded server (libmysqld) and as of MySQL 5.7.2 is no longer visible within the embedded server.

  • thread_stack

    Command-Line Format--thread_stack=#
    Option-File Formatthread_stack
    System Variable Namethread_stack
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Platform Bit Size32
    Typenumeric
    Default196608
    Range131072 .. 4294967295
    Block Size1024
     Permitted Values
    Platform Bit Size64
    Typenumeric
    Default262144
    Range131072 .. 18446744073709547520
    Block Size1024

    The stack size for each thread. Many of the limits detected by the crash-me test are dependent on this value. See Section 8.12.2, “The MySQL Benchmark Suite”. The default of 192KB (256KB for 64-bit systems) is large enough for normal operation. If the thread stack size is too small, it limits the complexity of the SQL statements that the server can handle, the recursion depth of stored procedures, and other memory-consuming actions.

  • time_format

    This variable is unused. It is deprecated and will be removed in a future MySQL release.

  • time_zone

    System Variable Nametime_zone
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typestring

    The current time zone. This variable is used to initialize the time zone for each client that connects. By default, the initial value of this is 'SYSTEM' (which means, use the value of system_time_zone). The value can be specified explicitly at server startup with the --default-time-zone option. See Section 10.6, “MySQL Server Time Zone Support”.

  • timed_mutexes

    Command-Line Format--timed_mutexes
    Option-File Formattimed_mutexes
    System Variable Nametimed_mutexes
    Variable ScopeGlobal
    Dynamic VariableYes
     Permitted Values
    Typeboolean
    DefaultOFF

    This variable controls whether InnoDB mutexes are timed. If this variable is set to 0 or OFF (the default), mutex timing is disabled. If the variable is set to 1 or ON, mutex timing is enabled. With timing enabled, the os_wait_times value in the output from SHOW ENGINE INNODB MUTEX indicates the amount of time (in ms) spent in operating system waits. Otherwise, the value is 0.

  • timestamp = {timestamp_value | DEFAULT}

    Set the time for this client. This is used to get the original timestamp if you use the binary log to restore rows. timestamp_value should be a Unix epoch timestamp, not a MySQL timestamp.

    In MySQL 5.7, timestamp is a DOUBLE rather than BIGINT because its value includes a microseconds part.

    SET timestamp affects the value returned by NOW() but not by SYSDATE(). This means that timestamp settings in the binary log have no effect on invocations of SYSDATE(). The server can be started with the --sysdate-is-now option to cause SYSDATE() to be an alias for NOW(), in which case SET timestamp affects both functions.

  • tmp_table_size

    Command-Line Format--tmp_table_size=#
    Option-File Formattmp_table_size
    System Variable Nametmp_table_size
    Variable ScopeGlobal, Session
    Dynamic VariableYes
     Permitted Values
    Typenumeric
    Defaultsystem dependent
    Range1024 .. 4294967295

    The maximum size of internal in-memory temporary tables. (The actual limit is determined as the minimum of tmp_table_size and max_heap_table_size.) If an in-memory temporary table exceeds the limit, MySQL automatically converts it to an on-disk MyISAM table. Increase the value of tmp_table_size (and max_heap_table_size if necessary) if you do many advanced GROUP BY queries and you have lots of memory. This variable does not apply to user-created MEMORY tables.

    You can compare the number of internal on-disk temporary tables created to the total number of internal temporary tables created by comparing the values of the Created_tmp_disk_tables and Created_tmp_tables variables.

    See also Section 8.4.3.3, “How MySQL Uses Internal Temporary Tables”.
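
    An illustrative check and adjustment follows; the 64MB value is only an example, and the effective limit is the minimum of the two variables set here:

    SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
    SET GLOBAL tmp_table_size       = 64 * 1024 * 1024;
    SET GLOBAL max_heap_table_size  = 64 * 1024 * 1024;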

  • tmpdir

    Command-Line Format--tmpdir=path
     -t
    Option-File Formattmpdir
    System Variable Nametmpdir
    Variable ScopeGlobal
    Dynamic VariableNo
     Permitted Values
    Typefile name

    The directory used for temporary files and temporary tables. This variable can be set to a list of several paths that are used in round-robin fashion. Paths should be separated by colon characters (:) on Unix and semicolon characters (;) on Windows.

    The multiple-directory feature can be used to spread the load between several physical disks. If the MySQL server is acting as a replication slave, you should not set tmpdir to point to a directory on a memory-based file system or to a directory that is cleared when the server host restarts. A replication slave needs some of its temporary files to survive a machine restart so that it can replicate temporary tables or LOAD DATA INFILE operations. If files in the temporary file directory are lost when the server restarts, replication fails. You can set the slave's temporary directory using the slave_load_tmpdir variable. In that case, the slave will not use the general tmpdir value and you can set tmpdir to a nonpermanent location.

  • transaction_alloc_block_size

    Command-Line Format: --transaction_alloc_block_size=#
    Option-File Format: transaction_alloc_block_size
    System Variable Name: transaction_alloc_block_size
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms):
      Type: numeric
      Default: 8192
      Range: 1024 .. 4294967295
      Block Size: 1024
    Permitted Values (64-bit platforms):
      Type: numeric
      Default: 8192
      Range: 1024 .. 18446744073709547520
      Block Size: 1024

    The amount in bytes by which the per-transaction memory pool is increased when it requires more memory. See the description of transaction_prealloc_size.

  • transaction_prealloc_size

    Command-Line Format: --transaction_prealloc_size=#
    Option-File Format: transaction_prealloc_size
    System Variable Name: transaction_prealloc_size
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values (32-bit platforms):
      Type: numeric
      Default: 4096
      Range: 1024 .. 4294967295
      Block Size: 1024
    Permitted Values (64-bit platforms):
      Type: numeric
      Default: 4096
      Range: 1024 .. 18446744073709547520
      Block Size: 1024

    There is a per-transaction memory pool from which various transaction-related allocations take memory. The initial size of the pool in bytes is transaction_prealloc_size. For every allocation that cannot be satisfied from the pool because it has insufficient memory available, the pool is increased by transaction_alloc_block_size bytes. When the transaction ends, the pool is truncated to transaction_prealloc_size bytes.

    By making transaction_prealloc_size sufficiently large to contain all statements within a single transaction, you can avoid many malloc() calls.
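
    For example, a session that runs large multiple-statement transactions might raise the pool sizes (the values are illustrative only):

    SET SESSION transaction_prealloc_size = 65536;     -- 64KB preallocated per transaction
    SET SESSION transaction_alloc_block_size = 16384;  -- grow in 16KB increments when the pool is exhausted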

  • tx_isolation

    System Variable Name: tx_isolation
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: enumeration
      Default: REPEATABLE-READ
      Valid Values: READ-UNCOMMITTED, READ-COMMITTED, REPEATABLE-READ, SERIALIZABLE

    The default transaction isolation level. The default value is REPEATABLE-READ.

    This variable can be set directly, or indirectly using the SET TRANSACTION statement. See Section 13.3.6, “SET TRANSACTION Syntax”. If you set tx_isolation directly to an isolation level name that contains a space, the name should be enclosed within quotation marks, with the space replaced by a dash. For example:

    SET tx_isolation = 'READ-COMMITTED';

    Any unique prefix of a valid value may be used to set the value of this variable.

    The default transaction isolation level can also be set at startup using the --transaction-isolation server option.
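
    For example, to make READ COMMITTED the server-wide default at startup, an option file might contain:

    [mysqld]
    transaction-isolation=READ-COMMITTED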

  • tx_read_only

    System Variable Name: tx_read_only
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: boolean
      Default: OFF

    The default transaction access mode. The value can be OFF (read/write, the default) or ON (read only).

    This variable can be set directly, or indirectly using the SET TRANSACTION statement. See Section 13.3.6, “SET TRANSACTION Syntax”.

    To set the default transaction access mode at startup, use the --transaction-read-only server option.
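
    For example, to check the current access mode and make subsequent transactions in this session read only:

    SELECT @@session.tx_read_only;   -- 0 means read/write
    SET SESSION tx_read_only = ON;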

  • unique_checks

    System Variable Name: unique_checks
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: boolean
      Default: 1

    If set to 1 (the default), uniqueness checks for secondary indexes in InnoDB tables are performed. If set to 0, storage engines are permitted to assume that duplicate keys are not present in input data. If you know for certain that your data does not contain uniqueness violations, you can set this to 0 to speed up large table imports to InnoDB.

    Note that setting this variable to 0 does not require storage engines to ignore duplicate keys. An engine is still permitted to check for them and issue duplicate-key errors if it detects them.
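
    A common bulk-import sketch, assuming the input data is already known to be free of duplicate keys:

    SET unique_checks = 0;
    -- ... load the data, for example with LOAD DATA INFILE or a series of INSERT statements ...
    SET unique_checks = 1;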

  • updatable_views_with_limit

    Command-Line Format: --updatable_views_with_limit=#
    Option-File Format: updatable_views_with_limit
    System Variable Name: updatable_views_with_limit
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: boolean
      Default: 1

    This variable controls whether updates to a view can be made when the view does not contain all columns of the primary key defined in the underlying table, if the update statement contains a LIMIT clause. (Such updates often are generated by GUI tools.) An update is an UPDATE or DELETE statement. Primary key here means a PRIMARY KEY, or a UNIQUE index in which no column can contain NULL.

    The variable can have two values:

    • 1 or YES: Issue a warning only (not an error message). This is the default value.

    • 0 or NO: Prohibit the update.

  • validate_password_xxx

    The validate_password plugin implements a set of system variables having names of the form validate_password_xxx. These variables affect password testing by that plugin; see Section 6.1.2.6.2, “Password Validation Plugin Options and Variables”.

  • validate_user_plugins

    System Variable Name: validate_user_plugins
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: boolean
      Default: ON

    If this variable is enabled (the default), the server checks each user account and produces a warning if conditions are found that would make the account unusable:

    • The account requires an authentication plugin that is not loaded.

    • The account requires the sha256_password authentication plugin but the server was started with neither SSL nor RSA enabled as required by this plugin.

    Enabling validate_user_plugins slows down server initialization and FLUSH PRIVILEGES. If you do not require the additional checking, you can disable this variable at startup to avoid the performance decrement.

    This variable was added in MySQL 5.7.1.

  • version

    The version number for the server. The value might also include a suffix indicating server build or configuration information. -log indicates that one or more of the general log, slow query log, or binary log are enabled. -debug indicates that the server was built with debugging support enabled.

  • version_comment

    System Variable Name: version_comment
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: string

    The CMake configuration program has a COMPILATION_COMMENT option that permits a comment to be specified when building MySQL. This variable contains the value of that comment. See Section 2.9.4, “MySQL Source-Configuration Options”.

  • version_compile_machine

    System Variable Name: version_compile_machine
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: string

    The type of the server binary.

  • version_compile_os

    System Variable Name: version_compile_os
    Variable Scope: Global
    Dynamic Variable: No
    Permitted Values:
      Type: string

    The type of operating system on which MySQL was built.

  • wait_timeout

    Command-Line Format: --wait_timeout=#
    Option-File Format: wait_timeout
    System Variable Name: wait_timeout
    Variable Scope: Global, Session
    Dynamic Variable: Yes
    Permitted Values:
      Type: numeric
      Default: 28800
      Range: 1 .. 31536000
    Permitted Values (Windows):
      Type: numeric
      Default: 28800
      Range: 1 .. 2147483

    The number of seconds the server waits for activity on a noninteractive connection before closing it.

    On thread startup, the session wait_timeout value is initialized from the global wait_timeout value or from the global interactive_timeout value, depending on the type of client (as defined by the CLIENT_INTERACTIVE connect option to mysql_real_connect()). See also interactive_timeout.

  • warning_count

    The number of errors, warnings, and notes that resulted from the last statement that generated messages. This variable is read only. See Section 13.7.5.39, “SHOW WARNINGS Syntax”.
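
    For example, to see how many messages the last statement produced and then display them:

    SELECT @@warning_count;
    SHOW WARNINGS;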

5.1.5. Using System Variables

The MySQL server maintains many system variables that indicate how it is configured. Section 5.1.4, “Server System Variables”, describes the meaning of these variables. Each system variable has a default value. System variables can be set at server startup using options on the command line or in an option file. Most of them can be changed dynamically while the server is running by means of the SET statement, which enables you to modify operation of the server without having to stop and restart it. You can refer to system variable values in expressions.

The server maintains two kinds of system variables. Global variables affect the overall operation of the server. Session variables affect its operation for individual client connections. A given system variable can have both a global and a session value. Global and session system variables are related as follows:

  • When the server starts, it initializes all global variables to their default values. These defaults can be changed by options specified on the command line or in an option file. (See Section 4.2.3, “Specifying Program Options”.)

  • The server also maintains a set of session variables for each client that connects. The client's session variables are initialized at connect time using the current values of the corresponding global variables. For example, the client's SQL mode is controlled by the session sql_mode value, which is initialized when the client connects to the value of the global sql_mode value.

System variable values can be set globally at server startup by using options on the command line or in an option file. When you use a startup option to set a variable that takes a numeric value, the value can be given with a suffix of K, M, or G (either uppercase or lowercase) to indicate a multiplier of 1024, 1024², or 1024³; that is, units of kilobytes, megabytes, or gigabytes, respectively. Thus, the following command starts the server with a query cache size of 16 megabytes and a maximum packet size of one gigabyte:

mysqld --query_cache_size=16M --max_allowed_packet=1G

Within an option file, those variables are set like this:

[mysqld]
query_cache_size=16M
max_allowed_packet=1G

The lettercase of suffix letters does not matter; 16M and 16m are equivalent, as are 1G and 1g.

If you want to restrict the maximum value to which a system variable can be set at runtime with the SET statement, you can specify this maximum by using an option of the form --maximum-var_name=value at server startup. For example, to prevent the value of query_cache_size from being increased to more than 32MB at runtime, use the option --maximum-query_cache_size=32M.
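
For example, started from the command line:

shell> mysqld --maximum-query_cache_size=32M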

Many system variables are dynamic and can be changed while the server runs by using the SET statement. For a list, see Section 5.1.5.2, “Dynamic System Variables”. To change a system variable with SET, refer to it as var_name, optionally preceded by a modifier:

  • To indicate explicitly that a variable is a global variable, precede its name by GLOBAL or @@global.. The SUPER privilege is required to set global variables.

  • To indicate explicitly that a variable is a session variable, precede its name by SESSION, @@session., or @@. Setting a session variable requires no special privilege, but a client can change only its own session variables, not those of any other client.

  • LOCAL and @@local. are synonyms for SESSION and @@session..

  • If no modifier is present, SET changes the session variable.

A SET statement can contain multiple variable assignments, separated by commas. If you set several system variables, the most recent GLOBAL or SESSION modifier in the statement is used for following variables that have no modifier specified.

Examples:

SET sort_buffer_size=10000;
SET @@local.sort_buffer_size=10000;
SET GLOBAL sort_buffer_size=1000000, SESSION sort_buffer_size=1000000;
SET @@sort_buffer_size=1000000;
SET @@global.sort_buffer_size=1000000, @@local.sort_buffer_size=1000000;

The @@var_name syntax for system variables is supported for compatibility with some other database systems.

If you change a session system variable, the value remains in effect until your session ends or until you change the variable to a different value. The change is not visible to other clients.

If you change a global system variable, the value is remembered and used for new connections until the server restarts. (To make a global system variable setting permanent, you should set it in an option file.) The change is visible to any client that accesses that global variable. However, the change affects the corresponding session variable only for clients that connect after the change. The global variable change does not affect the session variable for any client that is currently connected (not even that of the client that issues the SET GLOBAL statement).

To prevent incorrect usage, MySQL produces an error if you use SET GLOBAL with a variable that can only be used with SET SESSION or if you do not specify GLOBAL (or @@global.) when setting a global variable.

To set a SESSION variable to the GLOBAL value or a GLOBAL value to the compiled-in MySQL default value, use the DEFAULT keyword. For example, the following two statements are identical in setting the session value of max_join_size to the global value:

SET max_join_size=DEFAULT;
SET @@session.max_join_size=@@global.max_join_size;

Not all system variables can be set to DEFAULT. In such cases, use of DEFAULT results in an error.

You can refer to the values of specific global or session system variables in expressions by using one of the @@-modifiers. For example, you can retrieve values in a SELECT statement like this:

SELECT @@global.sql_mode, @@session.sql_mode, @@sql_mode;

When you refer to a system variable in an expression as @@var_name (that is, when you do not specify @@global. or @@session.), MySQL returns the session value if it exists and the global value otherwise. (This differs from SET @@var_name = value, which always refers to the session value.)

Note

Some variables displayed by SHOW VARIABLES may not be available using SELECT @@var_name syntax; an Unknown system variable error occurs. As a workaround in such cases, you can use SHOW VARIABLES LIKE 'var_name'.

Suffixes for specifying a value multiplier can be used when setting a variable at server startup, but not to set the value with SET at runtime. On the other hand, with SET you can assign a variable's value using an expression, which is not true when you set a variable at server startup. For example, the first of the following lines is legal at server startup, but the second is not:

shell> mysql --max_allowed_packet=16M
shell> mysql --max_allowed_packet=16*1024*1024

Conversely, the second of the following lines is legal at runtime, but the first is not:

mysql> SET GLOBAL max_allowed_packet=16M;
mysql> SET GLOBAL max_allowed_packet=16*1024*1024;
Note

Some system variables can be enabled with the SET statement by setting them to ON or 1, or disabled by setting them to OFF or 0. However, to set such a variable on the command line or in an option file, you must set it to 1 or 0; setting it to ON or OFF will not work. For example, on the command line, --delay_key_write=1 works but --delay_key_write=ON does not.

To display system variable names and values, use the SHOW VARIABLES statement:

mysql> SHOW VARIABLES;
+---------------------------------+-----------------------------------+
| Variable_name                   | Value                             |
+---------------------------------+-----------------------------------+
| auto_increment_increment        | 1                                 |
| auto_increment_offset           | 1                                 |
| automatic_sp_privileges         | ON                                |
| back_log                        | 50                                |
| basedir                         | /home/mysql/                      |
| binlog_cache_size               | 32768                             |
| bulk_insert_buffer_size         | 8388608                           |
| character_set_client            | latin1                            |
| character_set_connection        | latin1                            |
| character_set_database          | latin1                            |
| character_set_results           | latin1                            |
| character_set_server            | latin1                            |
| character_set_system            | utf8                              |
| character_sets_dir              | /home/mysql/share/mysql/charsets/ |
| collation_connection            | latin1_swedish_ci                 |
| collation_database              | latin1_swedish_ci                 |
| collation_server                | latin1_swedish_ci                 |
...
| innodb_additional_mem_pool_size | 1048576                           |
| innodb_autoextend_increment     | 8                                 |
| innodb_buffer_pool_size         | 8388608                           |
| innodb_checksums                | ON                                |
| innodb_commit_concurrency       | 0                                 |
| innodb_concurrency_tickets      | 500                               |
| innodb_data_file_path           | ibdata1:10M:autoextend            |
| innodb_data_home_dir            |                                   |
...
| version                         | 5.1.6-alpha-log                   |
| version_comment                 | Source distribution               |
| version_compile_machine         | i686                              |
| version_compile_os              | suse-linux                        |
| wait_timeout                    | 28800                             |
+---------------------------------+-----------------------------------+

With a LIKE clause, the statement displays only those variables that match the pattern. To obtain the value for a specific variable, use a LIKE clause as shown:

SHOW VARIABLES LIKE 'max_join_size';
SHOW SESSION VARIABLES LIKE 'max_join_size';

To get a list of variables whose names match a pattern, use the % wildcard character in a LIKE clause:

SHOW VARIABLES LIKE '%size%';
SHOW GLOBAL VARIABLES LIKE '%size%';

Wildcard characters can be used in any position within the pattern to be matched. Strictly speaking, because _ is a wildcard that matches any single character, you should escape it as \_ to match it literally. In practice, this is rarely necessary.

For SHOW VARIABLES, if you specify neither GLOBAL nor SESSION, MySQL returns SESSION values.

The reason for requiring the GLOBAL keyword when setting GLOBAL-only variables but not when retrieving them is to prevent problems in the future. If we were to remove a SESSION variable that has the same name as a GLOBAL variable, a client with the SUPER privilege might accidentally change the GLOBAL variable rather than just the SESSION variable for its own connection. If we add a SESSION variable with the same name as a GLOBAL variable, a client that intends to change the GLOBAL variable might find only its own SESSION variable changed.

5.1.5.1. Structured System Variables

A structured variable differs from a regular system variable in two respects:

  • Its value is a structure with components that specify server parameters considered to be closely related.

  • There might be several instances of a given type of structured variable. Each one has a different name and refers to a different resource maintained by the server.

MySQL 5.7 supports one structured variable type, which specifies parameters governing the operation of key caches. A key cache structured variable has these components:

  • key_buffer_size

  • key_cache_block_size

  • key_cache_division_limit

  • key_cache_age_threshold

This section describes the syntax for referring to structured variables. Key cache variables are used for syntax examples, but specific details about how key caches operate are found elsewhere, in Section 8.9.2, “The MyISAM Key Cache”.

To refer to a component of a structured variable instance, you can use a compound name in instance_name.component_name format. Examples:

hot_cache.key_buffer_size
hot_cache.key_cache_block_size
cold_cache.key_cache_block_size

For each structured system variable, an instance with the name of default is always predefined. If you refer to a component of a structured variable without any instance name, the default instance is used. Thus, default.key_buffer_size and key_buffer_size both refer to the same system variable.

Structured variable instances and components follow these naming rules:

  • For a given type of structured variable, each instance must have a name that is unique within variables of that type. However, instance names need not be unique across structured variable types. For example, each structured variable has an instance named default, so default is not unique across variable types.

  • The names of the components of each structured variable type must be unique across all system variable names. If this were not true (that is, if two different types of structured variables could share component member names), it would not be clear which default structured variable to use for references to member names that are not qualified by an instance name.

  • If a structured variable instance name is not legal as an unquoted identifier, refer to it as a quoted identifier using backticks. For example, hot-cache is not legal, but `hot-cache` is.

  • global, session, and local are not legal instance names. This avoids a conflict with notation such as @@global.var_name for referring to nonstructured system variables.

Currently, the first two rules have no possibility of being violated because the only structured variable type is the one for key caches. These rules will assume greater significance if some other type of structured variable is created in the future.

With one exception, you can refer to structured variable components using compound names in any context where simple variable names can occur. For example, you can assign a value to a structured variable using a command-line option:

shell> mysqld --hot_cache.key_buffer_size=64K

In an option file, use this syntax:

[mysqld]
hot_cache.key_buffer_size=64K

If you start the server with this option, it creates a key cache named hot_cache with a size of 64KB in addition to the default key cache that has a default size of 8MB.

Suppose that you start the server as follows:

shell> mysqld --key_buffer_size=256K \
         --extra_cache.key_buffer_size=128K \
         --extra_cache.key_cache_block_size=2048

In this case, the server sets the size of the default key cache to 256KB. (You could also have written --default.key_buffer_size=256K.) In addition, the server creates a second key cache named extra_cache that has a size of 128KB, with the size of block buffers for caching table index blocks set to 2048 bytes.

The following example starts the server with three different key caches having sizes in a 3:1:1 ratio:

shell> mysqld --key_buffer_size=6M \
         --hot_cache.key_buffer_size=2M \
         --cold_cache.key_buffer_size=2M

Structured variable values may be set and retrieved at runtime as well. For example, to set a key cache named hot_cache to a size of 10MB, use either of these statements:

mysql> SET GLOBAL hot_cache.key_buffer_size = 10*1024*1024;
mysql> SET @@global.hot_cache.key_buffer_size = 10*1024*1024;

To retrieve the cache size, do this:

mysql> SELECT @@global.hot_cache.key_buffer_size;

However, the following statement does not work. The variable is not interpreted as a compound name, but as a simple string for a LIKE pattern-matching operation:

mysql> SHOW GLOBAL VARIABLES LIKE 'hot_cache.key_buffer_size';

This is the exception to being able to use structured variable names anywhere a simple variable name may occur.

5.1.5.2. Dynamic System Variables

Many server system variables are dynamic and can be set at runtime using SET GLOBAL or SET SESSION. You can also obtain their values using SELECT. See Section 5.1.5, “Using System Variables”.

The following table shows the full list of all dynamic system variables. The last column indicates for each variable whether GLOBAL or SESSION (or both) apply. The table also lists session options that can be set with the SET statement. Section 5.1.4, “Server System Variables”, discusses these options.

Variables that have a type of string take a string value. Variables that have a type of numeric take a numeric value. Variables that have a type of boolean can be set to 0, 1, ON or OFF. (If you set them on the command line or in an option file, use the numeric values.) Variables that are marked as enumeration normally should be set to one of the available values for the variable, but can also be set to the number that corresponds to the desired enumeration value. For enumerated system variables, the first enumeration value corresponds to 0. This differs from ENUM columns, for which the first enumeration value corresponds to 1.

Table 5.3. Dynamic Variable Summary

Variable NameVariable TypeVariable Scope
auto_increment_incrementnumericGLOBAL | SESSION
auto_increment_offsetnumericGLOBAL | SESSION
autocommitbooleanGLOBAL | SESSION
automatic_sp_privilegesbooleanGLOBAL
binlog_cache_sizenumericGLOBAL
binlog_checksumstringGLOBAL
binlog_direct_non_transactional_updatesbooleanGLOBAL | SESSION
binlog_formatenumerationGLOBAL | SESSION
binlog_max_flush_queue_timenumericGLOBAL
binlog_order_commitsbooleanGLOBAL
binlog_row_image=image_typeenumerationGLOBAL | SESSION
binlog_rows_query_log_eventsbooleanGLOBAL | SESSION
binlog_stmt_cache_sizenumericGLOBAL
bulk_insert_buffer_sizenumericGLOBAL | SESSION
character_set_clientstringGLOBAL | SESSION
character_set_connectionstringGLOBAL | SESSION
character_set_databasestringGLOBAL | SESSION
character_set_filesystemstringGLOBAL | SESSION
character_set_resultsstringGLOBAL | SESSION
character_set_serverstringGLOBAL | SESSION
collation_connectionstringGLOBAL | SESSION
collation_databasestringGLOBAL | SESSION
collation_serverstringGLOBAL | SESSION
completion_typenumericGLOBAL | SESSION
concurrent_insertbooleanGLOBAL
connect_timeoutnumericGLOBAL
debugstringGLOBAL | SESSION
debug_syncstringSESSION
default_storage_engineenumerationGLOBAL | SESSION
default_tmp_storage_engineenumerationGLOBAL | SESSION
default_week_formatnumericGLOBAL | SESSION
delay_key_writeenumerationGLOBAL
delayed_insert_limitnumericGLOBAL
delayed_insert_timeoutnumericGLOBAL
delayed_queue_sizenumericGLOBAL
div_precision_incrementnumericGLOBAL | SESSION
end_markers_in_jsonbooleanGLOBAL | SESSION
eq_range_index_dive_limitnumericGLOBAL | SESSION
event_schedulerenumerationGLOBAL
expire_logs_daysnumericGLOBAL
flushbooleanGLOBAL
flush_timenumericGLOBAL
foreign_key_checksbooleanGLOBAL | SESSION
ft_boolean_syntaxstringGLOBAL
general_logbooleanGLOBAL
general_log_filefilenameGLOBAL
group_concat_max_lennumericGLOBAL | SESSION
gtid_nextenumerationSESSION
gtid_purgedstringGLOBAL
host_cache_sizenumericGLOBAL
identitynumericSESSION
init_connectstringGLOBAL
init_slavestringGLOBAL
innodb_adaptive_flushingbooleanGLOBAL
innodb_adaptive_flushing_lwmnumericGLOBAL
innodb_adaptive_hash_indexbooleanGLOBAL
innodb_adaptive_max_sleep_delaynumericGLOBAL
innodb_api_bk_commit_intervalnumericGLOBAL
innodb_api_trx_levelnumericGLOBAL
innodb_autoextend_incrementnumericGLOBAL
innodb_buffer_pool_dump_at_shutdownbooleanGLOBAL
innodb_buffer_pool_dump_nowbooleanGLOBAL
innodb_buffer_pool_dump_pctnumericGLOBAL
innodb_buffer_pool_filenamestringGLOBAL
innodb_buffer_pool_load_abortbooleanGLOBAL
innodb_buffer_pool_load_nowbooleanGLOBAL
innodb_change_buffer_max_sizenumericGLOBAL
innodb_change_bufferingenumerationGLOBAL
innodb_checksum_algorithmenumerationGLOBAL
innodb_cmp_per_index_enabledbooleanGLOBAL
innodb_commit_concurrencynumericGLOBAL
innodb_compression_failure_threshold_pctnumericGLOBAL
innodb_compression_levelnumericGLOBAL
innodb_compression_pad_pct_maxnumericGLOBAL
innodb_concurrency_ticketsnumericGLOBAL
innodb_disable_sort_file_cachebooleanGLOBAL
innodb_fast_shutdownnumericGLOBAL
innodb_file_formatstringGLOBAL
innodb_file_format_maxstringGLOBAL
innodb_file_per_tablebooleanGLOBAL
innodb_flush_log_at_timeoutnumericGLOBAL
innodb_flush_log_at_trx_commitenumerationGLOBAL
innodb_flush_neighborsenumerationGLOBAL
innodb_flushing_avg_loopsnumericGLOBAL
innodb_ft_aux_tablestringGLOBAL
innodb_ft_enable_diag_printbooleanGLOBAL
innodb_ft_enable_stopwordbooleanGLOBAL
innodb_ft_num_word_optimizenumericGLOBAL
innodb_ft_server_stopword_tablestringGLOBAL
innodb_ft_user_stopword_tablestringGLOBAL | SESSION
innodb_io_capacitynumericGLOBAL
innodb_io_capacity_maxnumericGLOBAL
innodb_large_prefixbooleanGLOBAL
innodb_lock_wait_timeoutnumericGLOBAL | SESSION
innodb_log_compressed_pagesbooleanGLOBAL
innodb_lru_scan_depthnumericGLOBAL
innodb_max_dirty_pages_pctnumericGLOBAL
innodb_max_dirty_pages_pct_lwmnumericGLOBAL
innodb_max_purge_lagnumericGLOBAL
innodb_max_purge_lag_delaynumericGLOBAL
innodb_monitor_disablestringGLOBAL
innodb_monitor_enablestringGLOBAL
innodb_monitor_resetstringGLOBAL
innodb_monitor_reset_allstringGLOBAL
innodb_old_blocks_pctnumericGLOBAL
innodb_old_blocks_timenumericGLOBAL
innodb_online_alter_log_max_sizenumericGLOBAL
innodb_optimize_fulltext_onlybooleanGLOBAL
innodb_print_all_deadlocksbooleanGLOBAL
innodb_purge_batch_sizenumericGLOBAL
innodb_random_read_aheadbooleanGLOBAL
innodb_read_ahead_thresholdnumericGLOBAL
innodb_replication_delaynumericGLOBAL
innodb_rollback_segmentsnumericGLOBAL
innodb_spin_wait_delaynumericGLOBAL
innodb_stats_auto_recalcbooleanGLOBAL
innodb_stats_methodenumerationGLOBAL
innodb_stats_on_metadatabooleanGLOBAL
innodb_stats_persistentbooleanGLOBAL
innodb_stats_persistent_sample_pagesnumericGLOBAL
innodb_stats_sample_pagesnumericGLOBAL
innodb_stats_transient_sample_pagesnumericGLOBAL
innodb_strict_modebooleanGLOBAL | SESSION
innodb_support_xabooleanGLOBAL | SESSION
innodb_sync_spin_loopsnumericGLOBAL
innodb_table_locksbooleanGLOBAL | SESSION
innodb_thread_concurrencynumericGLOBAL
innodb_thread_sleep_delaynumericGLOBAL
innodb_undo_logsnumericGLOBAL
insert_idnumericSESSION
interactive_timeoutnumericGLOBAL | SESSION
join_buffer_sizenumericGLOBAL | SESSION
keep_files_on_createbooleanGLOBAL | SESSION
key_buffer_sizenumericGLOBAL
key_cache_age_thresholdnumericGLOBAL
key_cache_block_sizenumericGLOBAL
key_cache_division_limitnumericGLOBAL
last_insert_idnumericSESSION
lc_messagesstringGLOBAL | SESSION
lc_time_namesstringGLOBAL | SESSION
local_infilebooleanGLOBAL
lock_wait_timeoutnumericGLOBAL | SESSION
log_outputsetGLOBAL
log_queries_not_using_indexesbooleanGLOBAL
log_slow_admin_statementsbooleanGLOBAL
log_slow_slave_statementsbooleanGLOBAL
log_throttle_queries_not_using_indexesnumericGLOBAL
log_warningsnumericGLOBAL
long_query_timenumericGLOBAL | SESSION
low_priority_updatesbooleanGLOBAL | SESSION
master_info_repositorystringGLOBAL
master_verify_checksumbooleanGLOBAL
max_allowed_packetnumericGLOBAL
max_binlog_cache_sizenumericGLOBAL
max_binlog_sizenumericGLOBAL
max_binlog_stmt_cache_sizenumericGLOBAL
max_connect_errorsnumericGLOBAL
max_connectionsnumericGLOBAL
max_delayed_threadsnumericGLOBAL | SESSION
max_error_countnumericGLOBAL | SESSION
max_heap_table_sizenumericGLOBAL | SESSION
max_insert_delayed_threadsnumericGLOBAL | SESSION
max_join_sizenumericGLOBAL | SESSION
max_length_for_sort_datanumericGLOBAL | SESSION
max_prepared_stmt_countnumericGLOBAL
max_relay_log_sizenumericGLOBAL
max_seeks_for_keynumericGLOBAL | SESSION
max_sort_lengthnumericGLOBAL | SESSION
max_sp_recursion_depthnumericGLOBAL | SESSION
max_user_connectionsnumericGLOBAL | SESSION
max_write_lock_countnumericGLOBAL
min_examined_row_limitnumericGLOBAL | SESSION
myisam_data_pointer_sizenumericGLOBAL
myisam_max_sort_file_sizenumericGLOBAL
myisam_repair_threadsnumericGLOBAL | SESSION
myisam_sort_buffer_sizenumericGLOBAL | SESSION
myisam_stats_methodenumerationGLOBAL | SESSION
myisam_use_mmapbooleanGLOBAL
net_buffer_lengthnumericGLOBAL | SESSION
net_read_timeoutnumericGLOBAL | SESSION
net_retry_countnumericGLOBAL | SESSION
net_write_timeoutnumericGLOBAL | SESSION
newbooleanGLOBAL | SESSION
old_alter_tablebooleanGLOBAL | SESSION
old_passwordsbooleanGLOBAL | SESSION
optimizer_prune_levelbooleanGLOBAL | SESSION
optimizer_search_depthnumericGLOBAL | SESSION
optimizer_switchsetGLOBAL | SESSION
optimizer_tracestringGLOBAL | SESSION
optimizer_trace_featuresstringGLOBAL | SESSION
optimizer_trace_limitnumericGLOBAL | SESSION
optimizer_trace_max_mem_sizenumericGLOBAL | SESSION
optimizer_trace_offsetnumericGLOBAL | SESSION
preload_buffer_sizenumericGLOBAL | SESSION
profilingbooleanGLOBAL | SESSION
profiling_history_sizenumericGLOBAL | SESSION
pseudo_slave_modenumericSESSION
pseudo_thread_idnumericSESSION
query_alloc_block_sizenumericGLOBAL | SESSION
query_cache_limitnumericGLOBAL
query_cache_min_res_unitnumericGLOBAL
query_cache_sizenumericGLOBAL
query_cache_typeenumerationGLOBAL | SESSION
query_cache_wlock_invalidatebooleanGLOBAL | SESSION
query_prealloc_sizenumericGLOBAL | SESSION
rand_seed1numericSESSION
rand_seed2numericSESSION
range_alloc_block_sizenumericGLOBAL | SESSION
read_buffer_sizenumericGLOBAL | SESSION
read_onlybooleanGLOBAL
read_rnd_buffer_sizenumericGLOBAL | SESSION
relay_log_info_repositorystringGLOBAL
relay_log_purgebooleanGLOBAL
relay_log_recoverybooleanGLOBAL
rpl_semi_sync_master_enabledbooleanGLOBAL
rpl_semi_sync_master_timeoutnumericGLOBAL
rpl_semi_sync_master_trace_levelnumericGLOBAL
rpl_semi_sync_master_wait_no_slavebooleanGLOBAL
rpl_semi_sync_master_wait_pointenumerationGLOBAL
rpl_semi_sync_slave_enabledbooleanGLOBAL
rpl_semi_sync_slave_trace_levelnumericGLOBAL
rpl_stop_slave_timeoutintegerGLOBAL
secure_authbooleanGLOBAL
server_idnumericGLOBAL
slave_allow_batchingbooleanGLOBAL
slave_checkpoint_group=#numericGLOBAL
slave_checkpoint_period=#numericGLOBAL
slave_compressed_protocolbooleanGLOBAL
slave_exec_modeenumerationGLOBAL
slave_max_allowed_packetnumericGLOBAL
slave_net_timeoutnumericGLOBAL
slave_parallel_workersnumericGLOBAL
slave_pending_jobs_size_maxnumericGLOBAL
slave_rows_search_algorithms=listsetGLOBAL
slave_sql_verify_checksumbooleanGLOBAL
slave_transaction_retriesnumericGLOBAL
slow_launch_timenumericGLOBAL
slow_query_logbooleanGLOBAL
slow_query_log_filefilenameGLOBAL
sort_buffer_sizenumericGLOBAL | SESSION
sql_auto_is_nullbooleanGLOBAL | SESSION
sql_big_selectsbooleanGLOBAL | SESSION
sql_big_tablesbooleanGLOBAL | SESSION
sql_buffer_resultbooleanGLOBAL | SESSION
sql_log_binbooleanGLOBAL | SESSION
sql_log_offbooleanGLOBAL | SESSION
sql_modesetGLOBAL | SESSION
sql_notesbooleanGLOBAL | SESSION
sql_quote_show_createbooleanGLOBAL | SESSION
sql_safe_updatesbooleanGLOBAL | SESSION
sql_select_limitnumericGLOBAL | SESSION
sql_slave_skip_counternumericGLOBAL
sql_warningsbooleanGLOBAL | SESSION
storage_engineenumerationGLOBAL | SESSION
stored_program_cachenumericGLOBAL
sync_binlognumericGLOBAL
sync_frmbooleanGLOBAL
sync_master_infonumericGLOBAL
sync_relay_lognumericGLOBAL
sync_relay_log_infonumericGLOBAL
table_definition_cachenumericGLOBAL
table_open_cachenumericGLOBAL
thread_cache_sizenumericGLOBAL
time_zonestringGLOBAL | SESSION
timed_mutexesbooleanGLOBAL
timestampnumericSESSION
tmp_table_sizenumericGLOBAL | SESSION
transaction_alloc_block_sizenumericGLOBAL | SESSION
transaction_prealloc_sizenumericGLOBAL | SESSION
tx_isolationenumerationGLOBAL | SESSION
tx_read_onlybooleanGLOBAL | SESSION
unique_checksbooleanGLOBAL | SESSION
updatable_views_with_limitbooleanGLOBAL | SESSION
validate_password_lengthnumericGLOBAL
validate_password_mixed_case_countnumericGLOBAL
validate_password_number_countnumericGLOBAL
validate_password_policyenumerationGLOBAL
validate_password_special_char_countnumericGLOBAL
wait_timeoutnumericGLOBAL | SESSION

5.1.6. Server Status Variables

The server maintains many status variables that provide information about its operation. You can view these variables and their values by using the SHOW [GLOBAL | SESSION] STATUS statement (see Section 13.7.5.34, “SHOW STATUS Syntax”). The optional GLOBAL keyword aggregates the values over all connections, and SESSION shows the values for the current connection.

mysql> SHOW GLOBAL STATUS;
+-----------------------------------+------------+
| Variable_name                     | Value      |
+-----------------------------------+------------+
| Aborted_clients                   | 0          |
| Aborted_connects                  | 0          |
| Bytes_received                    | 155372598  |
| Bytes_sent                        | 1176560426 |
...
| Connections                       | 30023      |
| Created_tmp_disk_tables           | 0          |
| Created_tmp_files                 | 3          |
| Created_tmp_tables                | 2          |
...
| Threads_created                   | 217        |
| Threads_running                   | 88         |
| Uptime                            | 1389872    |
+-----------------------------------+------------+

Many status variables are reset to 0 by the FLUSH STATUS statement.

The following table lists all available server status variables:

Table 5.4. Status Variable Summary

Variable NameVariable TypeVariable Scope
Aborted_clientsnumericGLOBAL
Aborted_connectsnumericGLOBAL
Binlog_cache_disk_usenumericGLOBAL
Binlog_cache_usenumericGLOBAL
Binlog_stmt_cache_disk_usenumericGLOBAL
Binlog_stmt_cache_usenumericGLOBAL
Bytes_receivednumericGLOBAL | SESSION
Bytes_sentnumericGLOBAL | SESSION
Com_admin_commandsnumericGLOBAL | SESSION
Com_alter_dbnumericGLOBAL | SESSION
Com_alter_db_upgradenumericGLOBAL | SESSION
Com_alter_eventnumericGLOBAL | SESSION
Com_alter_functionnumericGLOBAL | SESSION
Com_alter_procedurenumericGLOBAL | SESSION
Com_alter_servernumericGLOBAL | SESSION
Com_alter_tablenumericGLOBAL | SESSION
Com_alter_tablespacenumericGLOBAL | SESSION
Com_alter_usernumericGLOBAL | SESSION
Com_analyzenumericGLOBAL | SESSION
Com_assign_to_keycachenumericGLOBAL | SESSION
Com_beginnumericGLOBAL | SESSION
Com_binlognumericGLOBAL | SESSION
Com_call_procedurenumericGLOBAL | SESSION
Com_change_dbnumericGLOBAL | SESSION
Com_change_masternumericGLOBAL | SESSION
Com_checknumericGLOBAL | SESSION
Com_checksumnumericGLOBAL | SESSION
Com_commitnumericGLOBAL | SESSION
Com_create_dbnumericGLOBAL | SESSION
Com_create_eventnumericGLOBAL | SESSION
Com_create_functionnumericGLOBAL | SESSION
Com_create_indexnumericGLOBAL | SESSION
Com_create_procedurenumericGLOBAL | SESSION
Com_create_servernumericGLOBAL | SESSION
Com_create_tablenumericGLOBAL | SESSION
Com_create_triggernumericGLOBAL | SESSION
Com_create_udfnumericGLOBAL | SESSION
Com_create_usernumericGLOBAL | SESSION
Com_create_viewnumericGLOBAL | SESSION
Com_dealloc_sqlnumericGLOBAL | SESSION
Com_deletenumericGLOBAL | SESSION
Com_delete_multinumericGLOBAL | SESSION
Com_donumericGLOBAL | SESSION
Com_drop_dbnumericGLOBAL | SESSION
Com_drop_eventnumericGLOBAL | SESSION
Com_drop_functionnumericGLOBAL | SESSION
Com_drop_indexnumericGLOBAL | SESSION
Com_drop_procedurenumericGLOBAL | SESSION
Com_drop_servernumericGLOBAL | SESSION
Com_drop_tablenumericGLOBAL | SESSION
Com_drop_triggernumericGLOBAL | SESSION
Com_drop_usernumericGLOBAL | SESSION
Com_drop_viewnumericGLOBAL | SESSION
Com_empty_querynumericGLOBAL | SESSION
Com_execute_sqlnumericGLOBAL | SESSION
Com_flushnumericGLOBAL | SESSION
Com_get_diagnosticsnumericGLOBAL | SESSION
Com_grantnumericGLOBAL | SESSION
Com_ha_closenumericGLOBAL | SESSION
Com_ha_opennumericGLOBAL | SESSION
Com_ha_readnumericGLOBAL | SESSION
Com_helpnumericGLOBAL | SESSION
Com_insertnumericGLOBAL | SESSION
Com_insert_selectnumericGLOBAL | SESSION
Com_install_pluginnumericGLOBAL | SESSION
Com_killnumericGLOBAL | SESSION
Com_loadnumericGLOBAL | SESSION
Com_lock_tablesnumericGLOBAL | SESSION
Com_optimizenumericGLOBAL | SESSION
Com_preload_keysnumericGLOBAL | SESSION
Com_prepare_sqlnumericGLOBAL | SESSION
Com_purgenumericGLOBAL | SESSION
Com_purge_before_datenumericGLOBAL | SESSION
Com_release_savepointnumericGLOBAL | SESSION
Com_rename_tablenumericGLOBAL | SESSION
Com_rename_usernumericGLOBAL | SESSION
Com_repairnumericGLOBAL | SESSION
Com_replacenumericGLOBAL | SESSION
Com_replace_selectnumericGLOBAL | SESSION
Com_resetnumericGLOBAL | SESSION
Com_resignalnumericGLOBAL | SESSION
Com_revokenumericGLOBAL | SESSION
Com_revoke_allnumericGLOBAL | SESSION
Com_rollbacknumericGLOBAL | SESSION
Com_rollback_to_savepointnumericGLOBAL | SESSION
Com_savepointnumericGLOBAL | SESSION
Com_selectnumericGLOBAL | SESSION
Com_set_optionnumericGLOBAL | SESSION
Com_show_authorsnumericGLOBAL | SESSION
Com_show_binlog_eventsnumericGLOBAL | SESSION
Com_show_binlogsnumericGLOBAL | SESSION
Com_show_charsetsnumericGLOBAL | SESSION
Com_show_collationsnumericGLOBAL | SESSION
Com_show_contributorsnumericGLOBAL | SESSION
Com_show_create_dbnumericGLOBAL | SESSION
Com_show_create_eventnumericGLOBAL | SESSION
Com_show_create_funcnumericGLOBAL | SESSION
Com_show_create_procnumericGLOBAL | SESSION
Com_show_create_tablenumericGLOBAL | SESSION
Com_show_create_triggernumericGLOBAL | SESSION
Com_show_databasesnumericGLOBAL | SESSION
Com_show_engine_logsnumericGLOBAL | SESSION
Com_show_engine_mutexnumericGLOBAL | SESSION
Com_show_engine_statusnumericGLOBAL | SESSION
Com_show_errorsnumericGLOBAL | SESSION
Com_show_eventsnumericGLOBAL | SESSION
Com_show_fieldsnumericGLOBAL | SESSION
Com_show_function_codenumericGLOBAL | SESSION
Com_show_function_statusnumericGLOBAL | SESSION
Com_show_grantsnumericGLOBAL | SESSION
Com_show_keysnumericGLOBAL | SESSION
Com_show_master_statusnumericGLOBAL | SESSION
Com_show_new_masternumericGLOBAL | SESSION
Com_show_open_tablesnumericGLOBAL | SESSION
Com_show_pluginsnumericGLOBAL | SESSION
Com_show_privilegesnumericGLOBAL | SESSION
Com_show_procedure_codenumericGLOBAL | SESSION
Com_show_procedure_statusnumericGLOBAL | SESSION
Com_show_processlistnumericGLOBAL | SESSION
Com_show_profilenumericGLOBAL | SESSION
Com_show_profilesnumericGLOBAL | SESSION
Com_show_relaylog_eventsnumericGLOBAL | SESSION
Com_show_slave_hostsnumericGLOBAL | SESSION
Com_show_slave_statusnumericGLOBAL | SESSION
Com_show_statusnumericGLOBAL | SESSION
Com_show_storage_enginesnumericGLOBAL | SESSION
Com_show_table_statusnumericGLOBAL | SESSION
Com_show_tablesnumericGLOBAL | SESSION
Com_show_triggersnumericGLOBAL | SESSION
Com_show_variablesnumericGLOBAL | SESSION
Com_show_warningsnumericGLOBAL | SESSION
Com_signalnumericGLOBAL | SESSION
Com_slave_startnumericGLOBAL | SESSION
Com_slave_stopnumericGLOBAL | SESSION
Com_stmt_closenumericGLOBAL | SESSION
Com_stmt_executenumericGLOBAL | SESSION
Com_stmt_fetchnumericGLOBAL | SESSION
Com_stmt_preparenumericGLOBAL | SESSION
Com_stmt_repreparenumericGLOBAL | SESSION
Com_stmt_resetnumericGLOBAL | SESSION
Com_stmt_send_long_datanumericGLOBAL | SESSION
Com_truncatenumericGLOBAL | SESSION
Com_uninstall_pluginnumericGLOBAL | SESSION
Com_unlock_tablesnumericGLOBAL | SESSION
Com_updatenumericGLOBAL | SESSION
Com_update_multinumericGLOBAL | SESSION
Com_xa_commitnumericGLOBAL | SESSION
Com_xa_endnumericGLOBAL | SESSION
Com_xa_preparenumericGLOBAL | SESSION
Com_xa_recovernumericGLOBAL | SESSION
Com_xa_rollbacknumericGLOBAL | SESSION
Com_xa_startnumericGLOBAL | SESSION
CompressionnumericSESSION
Connection_errors_acceptnumericGLOBAL
Connection_errors_internalnumericGLOBAL
Connection_errors_max_connectionsnumericGLOBAL
Connection_errors_peer_addrnumericGLOBAL
Connection_errors_selectnumericGLOBAL
Connection_errors_tcpwrapnumericGLOBAL
ConnectionsnumericGLOBAL
Created_tmp_disk_tablesnumericGLOBAL | SESSION
Created_tmp_filesnumericGLOBAL
Created_tmp_tablesnumericGLOBAL | SESSION
Delayed_errorsnumericGLOBAL
Delayed_insert_threadsnumericGLOBAL
Delayed_writesnumericGLOBAL
Flush_commandsnumericGLOBAL
Handler_commitnumericGLOBAL | SESSION
Handler_deletenumericGLOBAL | SESSION
Handler_discovernumericGLOBAL | SESSION
Handler_external_locknumericGLOBAL | SESSION
Handler_mrr_initnumericGLOBAL | SESSION
Handler_preparenumericGLOBAL | SESSION
Handler_read_firstnumericGLOBAL | SESSION
Handler_read_keynumericGLOBAL | SESSION
Handler_read_lastnumericGLOBAL | SESSION
Handler_read_nextnumericGLOBAL | SESSION
Handler_read_prevnumericGLOBAL | SESSION
Handler_read_rndnumericGLOBAL | SESSION
Handler_read_rnd_nextnumericGLOBAL | SESSION
Handler_rollbacknumericGLOBAL | SESSION
Handler_savepointnumericGLOBAL | SESSION
Handler_savepoint_rollbacknumericGLOBAL | SESSION
Handler_updatenumericGLOBAL | SESSION
Handler_writenumericGLOBAL | SESSION
Innodb_available_undo_logsnumericGLOBAL
Innodb_buffer_pool_bytes_datanumericGLOBAL
Innodb_buffer_pool_bytes_dirtynumericGLOBAL
Innodb_buffer_pool_dump_statusnumericGLOBAL
Innodb_buffer_pool_load_statusnumericGLOBAL
Innodb_buffer_pool_pages_datanumericGLOBAL
Innodb_buffer_pool_pages_dirtynumericGLOBAL
Innodb_buffer_pool_pages_flushednumericGLOBAL
Innodb_buffer_pool_pages_freenumericGLOBAL
Innodb_buffer_pool_pages_latchednumericGLOBAL
Innodb_buffer_pool_pages_miscnumericGLOBAL
Innodb_buffer_pool_pages_totalnumericGLOBAL
Innodb_buffer_pool_read_aheadnumericGLOBAL
Innodb_buffer_pool_read_ahead_evictednumericGLOBAL
Innodb_buffer_pool_read_requestsnumericGLOBAL
Innodb_buffer_pool_readsnumericGLOBAL
Innodb_buffer_pool_wait_freenumericGLOBAL
Innodb_buffer_pool_write_requestsnumericGLOBAL
Innodb_data_fsyncsnumericGLOBAL
Innodb_data_pending_fsyncsnumericGLOBAL
Innodb_data_pending_readsnumericGLOBAL
Innodb_data_pending_writesnumericGLOBAL
Innodb_data_readnumericGLOBAL
Innodb_data_readsnumericGLOBAL
Innodb_data_writesnumericGLOBAL
Innodb_data_writtennumericGLOBAL
Innodb_dblwr_pages_writtennumericGLOBAL
Innodb_dblwr_writesnumericGLOBAL
Innodb_have_atomic_builtinsnumericGLOBAL
Innodb_log_waitsnumericGLOBAL
Innodb_log_write_requestsnumericGLOBAL
Innodb_log_writesnumericGLOBAL
Innodb_num_open_filesnumericGLOBAL
Innodb_os_log_fsyncsnumericGLOBAL
Innodb_os_log_pending_fsyncsnumericGLOBAL
Innodb_os_log_pending_writesnumericGLOBAL
Innodb_os_log_writtennumericGLOBAL
Innodb_page_sizenumericGLOBAL
Innodb_pages_creatednumericGLOBAL
Innodb_pages_readnumericGLOBAL
Innodb_pages_writtennumericGLOBAL
Innodb_row_lock_current_waitsnumericGLOBAL
Innodb_row_lock_timenumericGLOBAL
Innodb_row_lock_time_avgnumericGLOBAL
Innodb_row_lock_time_maxnumericGLOBAL
Innodb_row_lock_waitsnumericGLOBAL
Innodb_rows_deletednumericGLOBAL
Innodb_rows_insertednumericGLOBAL
Innodb_rows_readnumericGLOBAL
Innodb_rows_updatednumericGLOBAL
Innodb_truncated_status_writesnumericGLOBAL
Key_blocks_not_flushednumericGLOBAL
Key_blocks_unusednumericGLOBAL
Key_blocks_usednumericGLOBAL
Key_read_requestsnumericGLOBAL
Key_readsnumericGLOBAL
Key_write_requestsnumericGLOBAL
Key_writesnumericGLOBAL
Last_query_costnumericSESSION
Last_query_partial_plansnumericSESSION
Max_used_connectionsnumericGLOBAL
Ndb_conflict_fn_maxnumericGLOBAL
Ndb_conflict_fn_oldnumericGLOBAL
Ndb_number_of_data_nodesnumericGLOBAL
Not_flushed_delayed_rowsnumericGLOBAL
Open_filesnumericGLOBAL
Open_streamsnumericGLOBAL
Open_table_definitionsnumericGLOBAL
Open_tablesnumericGLOBAL | SESSION
Opened_filesnumericGLOBAL
Opened_table_definitionsnumericGLOBAL | SESSION
Opened_tablesnumericGLOBAL | SESSION
Performance_schema_accounts_lostnumericGLOBAL
Performance_schema_cond_classes_lostnumericGLOBAL
Performance_schema_cond_instances_lostnumericGLOBAL
Performance_schema_file_classes_lostnumericGLOBAL
Performance_schema_file_handles_lostnumericGLOBAL
Performance_schema_file_instances_lostnumericGLOBAL
Performance_schema_hosts_lostnumericGLOBAL
Performance_schema_locker_lostnumericGLOBAL
Performance_schema_memory_classes_lostnumericGLOBAL
Performance_schema_mutex_classes_lostnumericGLOBAL
Performance_schema_mutex_instances_lostnumericGLOBAL
Performance_schema_nested_statement_lostnumericGLOBAL
Performance_schema_program_lostnumericGLOBAL
Performance_schema_rwlock_classes_lostnumericGLOBAL
Performance_schema_rwlock_instances_lostnumericGLOBAL
Performance_schema_session_connect_attrs_lostnumericGLOBAL
Performance_schema_socket_classes_lostnumericGLOBAL
Performance_schema_socket_instances_lostnumericGLOBAL
Performance_schema_stage_classes_lostnumericGLOBAL
Performance_schema_statement_classes_lostnumericGLOBAL
Performance_schema_table_handles_lostnumericGLOBAL
Performance_schema_table_instances_lostnumericGLOBAL
Performance_schema_thread_classes_lostnumericGLOBAL
Performance_schema_thread_instances_lostnumericGLOBAL
Performance_schema_users_lostnumericGLOBAL
Prepared_stmt_countnumericGLOBAL
Qcache_free_blocksnumericGLOBAL
Qcache_free_memorynumericGLOBAL
Qcache_hitsnumericGLOBAL
Qcache_insertsnumericGLOBAL
Qcache_lowmem_prunesnumericGLOBAL
Qcache_not_cachednumericGLOBAL
Qcache_queries_in_cachenumericGLOBAL
Qcache_total_blocksnumericGLOBAL
QueriesnumericGLOBAL | SESSION
QuestionsnumericGLOBAL | SESSION
Rpl_semi_sync_master_clientsnumericGLOBAL
Rpl_semi_sync_master_net_avg_wait_timenumericGLOBAL
Rpl_semi_sync_master_net_wait_timenumericGLOBAL
Rpl_semi_sync_master_net_waitsnumericGLOBAL
Rpl_semi_sync_master_no_timesnumericGLOBAL
Rpl_semi_sync_master_no_txnumericGLOBAL
Rpl_semi_sync_master_statusbooleanGLOBAL
Rpl_semi_sync_master_timefunc_failuresnumericGLOBAL
Rpl_semi_sync_master_tx_avg_wait_timenumericGLOBAL
Rpl_semi_sync_master_tx_wait_timenumericGLOBAL
Rpl_semi_sync_master_tx_waitsnumericGLOBAL
Rpl_semi_sync_master_wait_pos_backtraversenumericGLOBAL
Rpl_semi_sync_master_wait_sessionsnumericGLOBAL
Rpl_semi_sync_master_yes_txnumericGLOBAL
Rpl_semi_sync_slave_statusbooleanGLOBAL
Rsa_public_keystringGLOBAL
Select_full_joinnumericGLOBAL | SESSION
Select_full_range_joinnumericGLOBAL | SESSION
Select_rangenumericGLOBAL | SESSION
Select_range_checknumericGLOBAL | SESSION
Select_scannumericGLOBAL | SESSION
Slave_heartbeat_period GLOBAL
Slave_last_heartbeat GLOBAL
Slave_open_temp_tablesnumericGLOBAL
Slave_received_heartbeats GLOBAL
Slave_retried_transactionsnumericGLOBAL
Slave_runningbooleanGLOBAL
Slow_launch_threadsnumericGLOBAL | SESSION
Slow_queriesnumericGLOBAL | SESSION
Sort_merge_passesnumericGLOBAL | SESSION
Sort_rangenumericGLOBAL | SESSION
Sort_rowsnumericGLOBAL | SESSION
Sort_scannumericGLOBAL | SESSION
Ssl_accept_renegotiatesnumericGLOBAL
Ssl_acceptsnumericGLOBAL
Ssl_callback_cache_hitsnumericGLOBAL
Ssl_cipherstringGLOBAL | SESSION
Ssl_cipher_liststringGLOBAL | SESSION
Ssl_client_connectsnumericGLOBAL
Ssl_connect_renegotiatesnumericGLOBAL
Ssl_ctx_verify_depthnumericGLOBAL
Ssl_ctx_verify_modenumericGLOBAL
Ssl_default_timeoutnumericGLOBAL | SESSION
Ssl_finished_acceptsnumericGLOBAL
Ssl_finished_connectsnumericGLOBAL
Ssl_server_not_afternumericGLOBAL | SESSION
Ssl_server_not_beforenumericGLOBAL | SESSION
Ssl_session_cache_hitsnumericGLOBAL
Ssl_session_cache_missesnumericGLOBAL
Ssl_session_cache_modestringGLOBAL
Ssl_session_cache_overflowsnumericGLOBAL
Ssl_session_cache_sizenumericGLOBAL
Ssl_session_cache_timeoutsnumericGLOBAL
Ssl_sessions_reusednumericGLOBAL | SESSION
Ssl_used_session_cache_entriesnumericGLOBAL
Ssl_verify_depthnumericGLOBAL | SESSION
Ssl_verify_modenumericGLOBAL | SESSION
Ssl_versionstringGLOBAL | SESSION
Table_locks_immediatenumericGLOBAL
Table_locks_waitednumericGLOBAL
Table_open_cache_hitsnumericGLOBAL | SESSION
Table_open_cache_missesnumericGLOBAL | SESSION
Table_open_cache_overflowsnumericGLOBAL | SESSION
Tc_log_max_pages_usednumericGLOBAL
Tc_log_page_sizenumericGLOBAL
Tc_log_page_waitsnumericGLOBAL
Threads_cachednumericGLOBAL
Threads_connectednumericGLOBAL
Threads_creatednumericGLOBAL
Threads_runningnumericGLOBAL
UptimenumericGLOBAL
Uptime_since_flush_statusnumericGLOBAL

The status variables have the following meanings.

5.1.7. Server SQL Modes

The MySQL server can operate in different SQL modes, and can apply these modes differently for different clients, depending on the value of the sql_mode system variable. This capability enables each application to tailor the server's operating mode to its own requirements.

For answers to some questions that are often asked about server SQL modes in MySQL, see Section B.3, “MySQL 5.7 FAQ: Server SQL Mode”.

Modes define what SQL syntax MySQL should support and what kind of data validation checks it should perform. This makes it easier to use MySQL in different environments and to use MySQL together with other database servers.

When working with InnoDB tables, consider also the innodb_strict_mode configuration option. It enables additional error checks for InnoDB tables, as listed in Section 14.2.5.7, “InnoDB Strict Mode”.
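
For example, to enable the additional checks for the current session (it can also be enabled globally or at startup):

SET SESSION innodb_strict_mode = ON;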

Setting the SQL Mode

The default SQL mode in MySQL 5.7 is NO_ENGINE_SUBSTITUTION.

To set the SQL mode at server startup, use the --sql-mode="modes" option on the command line, or sql-mode="modes" in an option file such as my.cnf (Unix operating systems) or my.ini (Windows). modes is a list of different modes separated by commas. To clear the SQL mode explicitly, set it to an empty string using --sql-mode="" on the command line, or sql-mode="" in an option file.
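
A minimal option-file sketch (the particular list of modes is illustrative):

[mysqld]
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"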

Note

MySQL installation programs may configure the SQL mode during the installation process. For example, mysql_install_db creates a default option file named my.cnf in the base installation directory. This file contains a line that sets the SQL mode; see Section 4.4.3, “mysql_install_db — Initialize MySQL Data Directory”.

If the SQL mode differs from the default or from what you expect, check for a setting in an option file that the server reads at startup.

To change the SQL mode at runtime, use a SET [GLOBAL|SESSION] sql_mode='modes' statement to set the sql_mode system variable. Setting the GLOBAL variable requires the SUPER privilege and affects the operation of all clients that connect from that time on. Setting the SESSION variable affects only the current client. Any client can change its own session sql_mode value at any time.
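
For example (the mode values are illustrative):

SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';
SET SESSION sql_mode = 'ANSI_QUOTES';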

To determine the current global or session sql_mode value, use the following statements:

SELECT @@GLOBAL.sql_mode;
SELECT @@SESSION.sql_mode;
Important

SQL mode and user-defined partitioning.  Changing the server SQL mode after creating and inserting data into partitioned tables can cause major changes in the behavior of such tables, and could lead to loss or corruption of data. It is strongly recommended that you never change the SQL mode once you have created tables employing user-defined partitioning.

When replicating partitioned tables, differing SQL modes on master and slave can also lead to problems. For best results, you should always use the same server SQL mode on the master and on the slave.

See Section 17.6, “Restrictions and Limitations on Partitioning”, for more information.

Most Important SQL Modes

The most important sql_mode values are probably these:

  • ANSI

    This mode changes syntax and behavior to conform more closely to standard SQL. It is one of the special combination modes listed at the end of this section.

  • STRICT_TRANS_TABLES

    If a value could not be inserted as given into a transactional table, abort the statement. For a nontransactional table, abort the statement if the value occurs in a single-row statement or the first row of a multiple-row statement. More detail is given later in this section.

  • TRADITIONAL

    Make MySQL behave like a traditional SQL database system. A simple description of this mode is "give an error instead of a warning" when inserting an incorrect value into a column. It is one of the special combination modes listed at the end of this section.

    Note

    The INSERT or UPDATE aborts as soon as the error is noticed. This may not be what you want if you are using a nontransactional storage engine, because data changes made prior to the error may not be rolled back, resulting in a partially done update.

When this manual refers to strict mode, it means a mode where at least one of STRICT_TRANS_TABLES or STRICT_ALL_TABLES is enabled.

Full List of SQL Modes

The following list describes all supported modes:

  • ALLOW_INVALID_DATES

    Do not perform full checking of dates. Check only that the month is in the range from 1 to 12 and the day is in the range from 1 to 31. This is very convenient for Web applications where you obtain year, month, and day in three different fields and you want to store exactly what the user inserted (without date validation). This mode applies to DATE and DATETIME columns. It does not apply to TIMESTAMP columns, which always require a valid date.

    With ALLOW_INVALID_DATES disabled, the server requires that month and day values be legal, and not merely in the range 1 to 12 and 1 to 31, respectively. With strict mode disabled, invalid dates such as '2004-04-31' are converted to '0000-00-00' and a warning is generated. With strict mode enabled, invalid dates generate an error. To permit such dates, enable ALLOW_INVALID_DATES.
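
    For example (t is a hypothetical table with a DATE column d):

    mysql> SET sql_mode = 'ALLOW_INVALID_DATES';
    mysql> INSERT INTO t (d) VALUES ('2004-02-31');  -- accepted; only the 1-12 / 1-31 ranges are checked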

  • ANSI_QUOTES

    Treat " as an identifier quote character (like the ` quote character) and not as a string quote character. You can still use ` to quote identifiers with this mode enabled. With ANSI_QUOTES enabled, you cannot use double quotation marks to quote literal strings, because it is interpreted as an identifier.
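
    For example (t is a hypothetical table with an id column):

    mysql> SET sql_mode = 'ANSI_QUOTES';
    mysql> SELECT "id" FROM t;    -- "id" now refers to the column, not the string literal 'id'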

  • ERROR_FOR_DIVISION_BY_ZERO

    Produce an error in strict mode (otherwise a warning) when a division by zero (or MOD(X,0)) occurs during an INSERT or UPDATE. If this mode is not enabled, MySQL instead returns NULL for divisions by zero. For INSERT IGNORE or UPDATE IGNORE, MySQL generates a warning for divisions by zero, but the result of the operation is NULL.

    For SELECT, division by zero returns NULL. Enabling this mode causes a warning to be generated as well.
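
    A sketch of the SELECT behavior just described:

    mysql> SET sql_mode = '';
    mysql> SELECT 5/0;
            -> NULL
    mysql> SET sql_mode = 'ERROR_FOR_DIVISION_BY_ZERO';
    mysql> SELECT 5/0;
            -> NULL

    The second SELECT also generates a "Division by 0" warning, visible with SHOW WARNINGS.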

  • HIGH_NOT_PRECEDENCE

    The precedence of the NOT operator is such that expressions such as NOT a BETWEEN b AND c are parsed as NOT (a BETWEEN b AND c). In some older versions of MySQL, the expression was parsed as (NOT a) BETWEEN b AND c. The old higher-precedence behavior can be obtained by enabling the HIGH_NOT_PRECEDENCE SQL mode.

    mysql> SET sql_mode = '';
    mysql> SELECT NOT 1 BETWEEN -5 AND 5;
            -> 0
    mysql> SET sql_mode = 'HIGH_NOT_PRECEDENCE';
    mysql> SELECT NOT 1 BETWEEN -5 AND 5;
            -> 1
    
  • IGNORE_SPACE

    Permit spaces between a function name and the ( character. This causes built-in function names to be treated as reserved words. As a result, identifiers that are the same as function names must be quoted as described in Section 9.2, “Schema Object Names”. For example, because there is a COUNT() function, the use of count as a table name in the following statement causes an error:

    mysql> CREATE TABLE count (i INT);
    ERROR 1064 (42000): You have an error in your SQL syntax
    

    The table name should be quoted:

    mysql> CREATE TABLE `count` (i INT);
    Query OK, 0 rows affected (0.00 sec)
    

    The IGNORE_SPACE SQL mode applies to built-in functions, not to user-defined functions or stored functions. It is always permissible to have spaces after a UDF or stored function name, regardless of whether IGNORE_SPACE is enabled.

    For further discussion of IGNORE_SPACE, see Section 9.2.4, “Function Name Parsing and Resolution”.

  • NO_AUTO_CREATE_USER

    Prevent the GRANT statement from automatically creating new users if it would otherwise do so, unless authentication information is specified. The statement must specify a nonempty password using IDENTIFIED BY or an authentication plugin using IDENTIFIED WITH.
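
    A sketch using a hypothetical account and database. The first GRANT fails if the account does not already exist; the second succeeds and creates the account because a password is given:

    mysql> SET sql_mode = 'NO_AUTO_CREATE_USER';
    mysql> GRANT SELECT ON mydb.* TO 'newuser'@'localhost';
    mysql> GRANT SELECT ON mydb.* TO 'newuser'@'localhost' IDENTIFIED BY 'pass';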

  • NO_AUTO_VALUE_ON_ZERO

    NO_AUTO_VALUE_ON_ZERO affects handling of AUTO_INCREMENT columns. Normally, you generate the next sequence number for the column by inserting either NULL or 0 into it. NO_AUTO_VALUE_ON_ZERO suppresses this behavior for 0 so that only NULL generates the next sequence number.

    This mode can be useful if 0 has been stored in a table's AUTO_INCREMENT column. (Storing 0 is not a recommended practice, by the way.) For example, if you dump the table with mysqldump and then reload it, MySQL normally generates new sequence numbers when it encounters the 0 values, resulting in a table with contents different from the one that was dumped. Enabling NO_AUTO_VALUE_ON_ZERO before reloading the dump file solves this problem. mysqldump now automatically includes in its output a statement that enables NO_AUTO_VALUE_ON_ZERO, to avoid this problem.
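
    A minimal sketch (the table name is illustrative):

    mysql> CREATE TABLE seq (id INT AUTO_INCREMENT PRIMARY KEY);
    mysql> SET sql_mode = '';
    mysql> INSERT INTO seq (id) VALUES (0);   -- 0 generates the next sequence number
    mysql> SET sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
    mysql> INSERT INTO seq (id) VALUES (0);   -- 0 is stored as 0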

  • NO_BACKSLASH_ESCAPES

    Disable the use of the backslash character (\) as an escape character within strings. With this mode enabled, backslash becomes an ordinary character like any other.
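
    For example:

    mysql> SET sql_mode = '';
    mysql> SELECT 'a\tb';    -- \t is interpreted as a tab character
    mysql> SET sql_mode = 'NO_BACKSLASH_ESCAPES';
    mysql> SELECT 'a\tb';    -- returned literally as a\tb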

  • NO_DIR_IN_CREATE

    When creating a table, ignore all INDEX DIRECTORY and DATA DIRECTORY directives. This option is useful on slave replication servers.

  • NO_ENGINE_SUBSTITUTION

    Control automatic substitution of the default storage engine when a statement such as CREATE TABLE or ALTER TABLE specifies a storage engine that is disabled or not compiled in.

    Because storage engines can be pluggable at runtime, unavailable engines are treated the same way:

    With NO_ENGINE_SUBSTITUTION disabled, for CREATE TABLE the default engine is used and a warning occurs if the desired engine is unavailable. For ALTER TABLE, a warning occurs and the table is not altered.

    With NO_ENGINE_SUBSTITUTION enabled, an error occurs and the table is not created or altered if the desired engine is unavailable.
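
    A sketch, assuming the EXAMPLE storage engine is not available in your server:

    mysql> SET sql_mode = '';
    mysql> CREATE TABLE t_sub (i INT) ENGINE = EXAMPLE;    -- warning; default engine used
    mysql> SET sql_mode = 'NO_ENGINE_SUBSTITUTION';
    mysql> CREATE TABLE t_sub2 (i INT) ENGINE = EXAMPLE;   -- error; table not created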

  • NO_FIELD_OPTIONS

    Do not print MySQL-specific column options in the output of SHOW CREATE TABLE. This mode is used by mysqldump in portability mode.

  • NO_KEY_OPTIONS

    Do not print MySQL-specific index options in the output of SHOW CREATE TABLE. This mode is used by mysqldump in portability mode.

  • NO_TABLE_OPTIONS

    Do not print MySQL-specific table options (such as ENGINE) in the output of SHOW CREATE TABLE. This mode is used by mysqldump in portability mode.

  • NO_UNSIGNED_SUBTRACTION

    By default, subtraction between integer operands produces an UNSIGNED result if any operand is UNSIGNED. When NO_UNSIGNED_SUBTRACTION is enabled, the subtraction result is signed, even if any operand is unsigned. For example, compare the type of column c2 in table t1 with that of column c2 in table t2:

    mysql> SET sql_mode='';
    mysql> CREATE TABLE test (c1 BIGINT UNSIGNED NOT NULL);
    mysql> CREATE TABLE t1 SELECT c1 - 1 AS c2 FROM test;
    mysql> DESCRIBE t1;
    +-------+---------------------+------+-----+---------+-------+
    | Field | Type                | Null | Key | Default | Extra |
    +-------+---------------------+------+-----+---------+-------+
    | c2    | bigint(21) unsigned |      |     | 0       |       |
    +-------+---------------------+------+-----+---------+-------+
    
    mysql> SET sql_mode='NO_UNSIGNED_SUBTRACTION';
    mysql> CREATE TABLE t2 SELECT c1 - 1 AS c2 FROM test;
    mysql> DESCRIBE t2;
    +-------+------------+------+-----+---------+-------+
    | Field | Type       | Null | Key | Default | Extra |
    +-------+------------+------+-----+---------+-------+
    | c2    | bigint(21) |      |     | 0       |       |
    +-------+------------+------+-----+---------+-------+
    

    Note that this means that BIGINT UNSIGNED is not 100% usable in all contexts. See Section 12.10, “Cast Functions and Operators”.

    mysql> SET sql_mode = '';
    mysql> SELECT CAST(0 AS UNSIGNED) - 1;
    +-------------------------+
    | CAST(0 AS UNSIGNED) - 1 |
    +-------------------------+
    |    18446744073709551615 |
    +-------------------------+
    
    mysql> SET sql_mode = 'NO_UNSIGNED_SUBTRACTION';
    mysql> SELECT CAST(0 AS UNSIGNED) - 1;
    +-------------------------+
    | CAST(0 AS UNSIGNED) - 1 |
    +-------------------------+
    |                      -1 |
    +-------------------------+
    
  • NO_ZERO_DATE

    In strict mode, do not permit '0000-00-00' as a valid date. You can still insert zero dates with the IGNORE option. When not in strict mode, the date is accepted but a warning is generated.

  • NO_ZERO_IN_DATE

    In strict mode, do not accept dates where the year part is nonzero but the month or day part is 0 (for example, '0000-00-00' is legal but '2010-00-01' and '2010-01-00' are not). If used with the IGNORE option, MySQL inserts a '0000-00-00' date for any such date. When not in strict mode, the date is accepted but a warning is generated.

  • ONLY_FULL_GROUP_BY

    Do not permit queries for which the select list or HAVING list or ORDER BY list refers to nonaggregated columns that are not named in the GROUP BY clause.

    The following queries are invalid with ONLY_FULL_GROUP_BY enabled. The first is invalid because address in the select list is not named in the GROUP BY clause, and the second because max_age in the HAVING clause is not named in the GROUP BY clause:

    mysql> SELECT name, address, MAX(age) FROM t GROUP BY name;
    ERROR 1055 (42000): 't.address' isn't in GROUP BY
    
    mysql> SELECT name, MAX(age) AS max_age FROM t GROUP BY name
        -> HAVING max_age < 30;
    Empty set (0.00 sec)
    ERROR 1463 (42000): Non-grouping field 'max_age' is used in HAVING clause
    

    In the second example, the query could be rewritten to use HAVING MAX(age) instead, so that the condition refers directly to an aggregate expression. (max_age fails because it is an alias for an aggregate value; in the HAVING clause it is treated as a nongrouping column rather than as the aggregate it refers to.)

    In addition, if a query has aggregate functions and no GROUP BY clause, it cannot have nonaggregated columns in the select list or ORDER BY list:

    mysql> SELECT name, MAX(age) FROM t;
    ERROR 1140 (42000): Mixing of GROUP columns (MIN(),MAX(),COUNT(),...)
    with no GROUP columns is illegal if there is no GROUP BY clause
    

    For more information, see Section 12.17.3, “MySQL Extensions to GROUP BY”.

  • PAD_CHAR_TO_FULL_LENGTH

    By default, trailing spaces are trimmed from CHAR column values on retrieval. If PAD_CHAR_TO_FULL_LENGTH is enabled, trimming does not occur and retrieved CHAR values are padded to their full length. This mode does not apply to VARCHAR columns, for which trailing spaces are retained on retrieval.

    mysql> CREATE TABLE t1 (c1 CHAR(10));
    Query OK, 0 rows affected (0.37 sec)
    
    mysql> INSERT INTO t1 (c1) VALUES('xy');
    Query OK, 1 row affected (0.01 sec)
    
    mysql> SET sql_mode = '';
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> SELECT c1, CHAR_LENGTH(c1) FROM t1;
    +------+-----------------+
    | c1   | CHAR_LENGTH(c1) |
    +------+-----------------+
    | xy   |               2 |
    +------+-----------------+
    1 row in set (0.00 sec)
    
    mysql> SET sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> SELECT c1, CHAR_LENGTH(c1) FROM t1;
    +------------+-----------------+
    | c1         | CHAR_LENGTH(c1) |
    +------------+-----------------+
    | xy         |              10 |
    +------------+-----------------+
    1 row in set (0.00 sec)
    
  • PIPES_AS_CONCAT

    Treat || as a string concatenation operator (same as CONCAT()) rather than as a synonym for OR.
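
    For example (in the first case, || is logical OR, so the strings are converted to numbers):

    mysql> SET sql_mode = '';
    mysql> SELECT 'abc' || 'def';
            -> 0
    mysql> SET sql_mode = 'PIPES_AS_CONCAT';
    mysql> SELECT 'abc' || 'def';
            -> abcdef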

  • REAL_AS_FLOAT

    Treat REAL as a synonym for FLOAT. By default, MySQL treats REAL as a synonym for DOUBLE.

  • STRICT_ALL_TABLES

    Enable strict mode for all storage engines. Invalid data values are rejected. Additional detail follows.

  • STRICT_TRANS_TABLES

    Enable strict mode for transactional storage engines, and when possible for nontransactional storage engines. Additional details follow.

Strict mode controls how MySQL handles invalid or missing values in data-change statements such as INSERT or UPDATE. A value can be invalid for several reasons. For example, it might have the wrong data type for the column, or it might be out of range. A value is missing when a new row to be inserted does not contain a value for a non-NULL column that has no explicit DEFAULT clause in its definition. (For a NULL column, NULL is inserted if the value is missing.)

For statements that do not change data, such as SELECT, invalid values generate a warning in strict mode, not an error.

For transactional tables, an error occurs for invalid or missing values in a data-change statement when either STRICT_ALL_TABLES or STRICT_TRANS_TABLES is enabled. The statement is aborted and rolled back.

For nontransactional tables, the behavior is the same for either mode, if the bad value occurs in the first row to be inserted or updated. The statement is aborted and the table remains unchanged. If the statement inserts or modifies multiple rows and the bad value occurs in the second or later row, the result depends on which strict option is enabled:

  • For STRICT_ALL_TABLES, MySQL returns an error and ignores the rest of the rows. However, in this case, the earlier rows still have been inserted or updated. This means that you might get a partial update, which might not be what you want. To avoid this, it is best to use single-row statements because these can be aborted without changing the table.

  • For STRICT_TRANS_TABLES, MySQL converts an invalid value to the closest valid value for the column and inserts the adjusted value. If a value is missing, MySQL inserts the implicit default value for the column data type. In either case, MySQL generates a warning rather than an error and continues processing the statement. Implicit defaults are described in Section 11.5, “Data Type Default Values”.

Strict mode disallows invalid date values such as '2004-04-31'. It does not disallow dates with zero month or day parts such as '2004-04-00' or zero dates. To disallow these as well, enable the NO_ZERO_IN_DATE and NO_ZERO_DATE SQL modes in addition to strict mode.

If you are not using strict mode (that is, neither STRICT_TRANS_TABLES nor STRICT_ALL_TABLES is enabled), MySQL inserts adjusted values for invalid or missing values and produces warnings. In strict mode, you can produce this behavior by using INSERT IGNORE or UPDATE IGNORE. See Section 13.7.5.39, “SHOW WARNINGS Syntax”.
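
A sketch contrasting these behaviors, using a hypothetical table with a TINYINT column (maximum value 127):

mysql> CREATE TABLE t_range (i TINYINT);
mysql> SET sql_mode = '';
mysql> INSERT INTO t_range (i) VALUES (300);          -- adjusted to 127, with a warning
mysql> SET sql_mode = 'STRICT_ALL_TABLES';
mysql> INSERT INTO t_range (i) VALUES (300);          -- error; the statement is aborted
mysql> INSERT IGNORE INTO t_range (i) VALUES (300);   -- adjusted to 127, with a warning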

Strict mode does not affect whether foreign key constraints are checked. foreign_key_checks can be used for that. (See Section 5.1.4, “Server System Variables”.)

Combination Modes

The following special modes are provided as shorthand for combinations of mode values from the preceding list.

The descriptions include all mode values that are available in the most recent version of MySQL. In older versions of MySQL, a combination mode omits any individual mode values that are unavailable in that version.

5.1.8. Server Plugins

MySQL supports a plugin API that enables creation of server components. Plugins can be loaded at server startup, or loaded and unloaded at runtime without restarting the server. The components supported by this interface include, but are not limited to, storage engines, full-text parser plugins, partitioning support, and server extensions.

5.1.8.1. Installing and Uninstalling Plugins

Server plugins must be loaded into the server before they can be used. MySQL enables you to load a plugin at server startup or at runtime. It is also possible to control the activation of loaded plugins at startup, and to unload them at runtime.

Installing Plugins

Server plugins must be known to the server before they can be used. A plugin can be made known several ways, as described here. In the following descriptions, plugin_name stands for a plugin name such as innodb or csv.

Built-in plugins:

A plugin that is built in to the server is known by the server automatically. Normally, the server enables the plugin at startup, although this can be changed with the --plugin_name option.

Plugins registered in the mysql.plugin table:

The mysql.plugin table serves as a registry of plugins. The server normally enables each plugin listed in the table at startup, although whether a given plugin is enabled can be changed with the --plugin_name option. If the server is started with the --skip-grant-tables option, it does not consult this table and does not load the plugins listed there.

Plugins named with command-line options:

A plugin that is located in a plugin library file can be loaded at server startup with the --plugin-load option. Normally, the server enables the plugin at startup, although this can be changed with the --plugin_name option.

The option value is a semicolon-separated list of name=plugin_library pairs. Each name is the name of the plugin, and plugin_library is the name of the shared library that contains the plugin code. If a plugin library is named without any preceding plugin name, the server loads all plugins in the library. Each library file must be located in the directory named by the plugin_dir system variable.

This option does not register any plugin in the mysql.plugin table. For subsequent restarts, the server loads the plugin again only if --plugin-load is given again. That is, this option effects a one-time installation that persists only for one server invocation.

--plugin-load enables plugins to be loaded even when --skip-grant-tables is given (which causes the server to ignore the mysql.plugin table). --plugin-load also enables plugins to be loaded at startup under configurations when plugins cannot be loaded at runtime.

The --plugin-load-add option complements the --plugin-load option. --plugin-load-add adds a plugin or plugins to the set of plugins to be loaded at startup. The argument format is the same as for --plugin-load. --plugin-load-add can be used to avoid specifying a large set of plugins as a single long unwieldy --plugin-load argument. --plugin-load-add can be given in the absence of --plugin-load, but any instance of --plugin-load-add that appears before --plugin-load has no effect because --plugin-load resets the set of plugins to load. In other words, these options:

--plugin-load=x --plugin-load-add=y

are equivalent to this option:

--plugin-load="x;y"

But these options:

--plugin-load-add=y --plugin-load=x

are equivalent to this option:

--plugin-load=x

Plugins installed with the INSTALL PLUGIN statement:

A plugin that is located in a plugin library file can be loaded at runtime with the INSTALL PLUGIN statement. The statement also registers the plugin in the mysql.plugin table to cause the server to load it on subsequent restarts. For this reason, INSTALL PLUGIN requires the INSERT privilege for the mysql.plugin table.

If a plugin is named both using a --plugin-load option and in the mysql.plugin table, the server starts but writes these messages to the error log:

100310 19:15:44 [ERROR] Function 'plugin_name' already exists
100310 19:15:44 [Warning] Couldn't load plugin named 'plugin_name'
with soname 'plugin_object_file'.

Example: The --plugin-load option installs a plugin at server startup. To install a plugin named myplugin in a plugin library file named somepluglib.so, use these lines in a my.cnf file:

[mysqld]
plugin-load=myplugin=somepluglib.so

In this case, the plugin is not registered in mysql.plugin. Restarting the server without the --plugin-load option causes the plugin not to be loaded at startup.

Alternatively, the INSTALL PLUGIN statement causes the server to load the plugin code from the library file at runtime:

mysql> INSTALL PLUGIN myplugin SONAME 'somepluglib.so';

INSTALL PLUGIN also causes permanent plugin registration: The server lists the plugin in the mysql.plugin table to ensure that it is loaded on subsequent server restarts.

Many plugins can be loaded either at server startup or at runtime. However, if a plugin is designed such that it must be loaded and initialized during server startup, use --plugin-load rather than INSTALL PLUGIN.

While a plugin is loaded, information about it is available at runtime from several sources, such as the INFORMATION_SCHEMA.PLUGINS table and the SHOW PLUGINS statement. For more information, see Section 5.1.8.2, “Obtaining Server Plugin Information”.

Controlling Plugin Activation

If the server knows about a plugin when it starts (for example, because the plugin is named using a --plugin-load option or registered in the mysql.plugin table), the server loads and enables the plugin by default. It is possible to control activation for such a plugin using a --plugin_name[=value] startup option named after the plugin. In the following descriptions, plugin_name stands for a plugin name such as innodb or csv. As with other options, dashes and underscores are interchangeable in option names. For example, --my_plugin=ON and --my-plugin=ON are equivalent.

  • --plugin_name=OFF

    Tells the server to disable the plugin.

  • --plugin_name[=ON]

    Tells the server to enable the plugin. (Specifying the option as --plugin_name without a value has the same effect.) If the plugin fails to initialize, the server runs with the plugin disabled.

  • --plugin_name=FORCE

    Tells the server to enable the plugin, but if plugin initialization fails, the server does not start. In other words, this option forces the server to run with the plugin enabled or not at all.

  • --plugin_name=FORCE_PLUS_PERMANENT

    Like FORCE, but in addition prevents the plugin from being unloaded at runtime. If a user attempts to do so with UNINSTALL PLUGIN, an error occurs.

The values OFF, ON, FORCE, and FORCE_PLUS_PERMANENT are not case sensitive.

The activation state for plugins is visible in the LOAD_OPTION column of the INFORMATION_SCHEMA.PLUGINS table.

Suppose that CSV, BLACKHOLE, and ARCHIVE are built-in pluggable storage engines and that you want the server to load them at startup, subject to these conditions: The server is permitted to run if CSV initialization fails, but must require that BLACKHOLE initialization succeeds, and ARCHIVE should be disabled. To accomplish that, use these lines in an option file:

[mysqld]
csv=ON
blackhole=FORCE
archive=OFF

The --enable-plugin_name option format is supported as a synonym for --plugin_name=ON. The --disable-plugin_name and --skip-plugin_name option formats are supported as synonyms for --plugin_name=OFF.

If a plugin is disabled, either explicitly with OFF or implicitly because it was enabled with ON but failed to initialize, aspects of server operation that require the plugin will change. For example, if the plugin implements a storage engine, existing tables for the storage engine become inaccessible, and attempts to create new tables for the storage engine result in tables that use the default storage engine unless the NO_ENGINE_SUBSTITUTION SQL mode has been enabled to cause an error to occur instead.

Disabling a plugin may require adjustment to other options. For example, if you start the server using --skip-innodb to disable InnoDB, other innodb_xxx options likely will need to be omitted from the startup command. In addition, because InnoDB is the default storage engine, the server will not start unless you specify another available storage engine with --default_storage_engine. You must also set --default_tmp_storage_engine, which applies to TEMPORARY tables.

Uninstalling Plugins

A plugin known to the server can be uninstalled to disable it at runtime with the UNINSTALL PLUGIN statement. The statement unloads the plugin and removes it from the mysql.plugin table if it is registered there. For this reason, the UNINSTALL PLUGIN statement requires the DELETE privilege for the mysql.plugin table. With the plugin no longer registered in the table, the server does not load the plugin automatically on subsequent restarts.

UNINSTALL PLUGIN can unload plugins regardless of whether they were loaded with INSTALL PLUGIN or --plugin-load.

UNINSTALL PLUGIN is subject to these exceptions:

  • It cannot unload plugins that are built in to the server. These can be identified as those that have a library name of NULL in the output from INFORMATION_SCHEMA.PLUGINS or SHOW PLUGINS.

  • It cannot unload plugins for which the server was started with --plugin_name=FORCE_PLUS_PERMANENT, which prevents plugin unloading at runtime. These can be identified from the LOAD_OPTION column of the INFORMATION_SCHEMA.PLUGINS table.

5.1.8.2. Obtaining Server Plugin Information

There are several ways to determine which plugins are installed in the server:

  • The INFORMATION_SCHEMA.PLUGINS table contains a row for each loaded plugin. Any that have a PLUGIN_LIBRARY value of NULL are built in and cannot be unloaded.

    mysql> SELECT * FROM information_schema.PLUGINS\G
    *************************** 1. row ***************************
               PLUGIN_NAME: binlog
            PLUGIN_VERSION: 1.0
             PLUGIN_STATUS: ACTIVE
               PLUGIN_TYPE: STORAGE ENGINE
       PLUGIN_TYPE_VERSION: 50158.0
            PLUGIN_LIBRARY: NULL
    PLUGIN_LIBRARY_VERSION: NULL
             PLUGIN_AUTHOR: MySQL AB
        PLUGIN_DESCRIPTION: This is a pseudo storage engine to represent the binlog in a transaction
            PLUGIN_LICENSE: GPL
               LOAD_OPTION: FORCE
    ...
    *************************** 10. row ***************************
               PLUGIN_NAME: InnoDB
            PLUGIN_VERSION: 1.0
             PLUGIN_STATUS: ACTIVE
               PLUGIN_TYPE: STORAGE ENGINE
       PLUGIN_TYPE_VERSION: 50158.0
            PLUGIN_LIBRARY: ha_innodb_plugin.so
    PLUGIN_LIBRARY_VERSION: 1.0
             PLUGIN_AUTHOR: Innobase Oy
        PLUGIN_DESCRIPTION: Supports transactions, row-level locking,
                            and foreign keys
            PLUGIN_LICENSE: GPL
               LOAD_OPTION: ON
    ...
    
  • The SHOW PLUGINS statement displays a row for each loaded plugin. Any that have a Library value of NULL are built in and cannot be unloaded.

    mysql> SHOW PLUGINS\G
    *************************** 1. row ***************************
       Name: binlog
     Status: ACTIVE
       Type: STORAGE ENGINE
    Library: NULL
    License: GPL
    ...
    *************************** 10. row ***************************
       Name: InnoDB
     Status: ACTIVE
       Type: STORAGE ENGINE
    Library: ha_innodb_plugin.so
    License: GPL
    ...
    
  • The mysql.plugin table shows which plugins have been registered with INSTALL PLUGIN. The table contains only plugin names and library file names, so it does not provide as much information as the PLUGINS table or the SHOW PLUGINS statement.
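
    For example, the registered plugins can be listed with a simple query:

    mysql> SELECT * FROM mysql.plugin;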

5.1.9. IPv6 Support

Support for IPv6 in MySQL includes these capabilities:

  • MySQL Server can accept TCP/IP connections from clients connecting over IPv6. For example, this command connects over IPv6 to the MySQL server on the local host:

    shell> mysql -h ::1
    

    To use this capability, two things must be true: your operating system must be configured to support IPv6 (see Section 5.1.9.1, “Verifying System Support for IPv6”), and the MySQL server must be configured to permit IPv6 connections (see Section 5.1.9.2, “Configuring the MySQL Server to Permit IPv6 Connections”).

  • MySQL account names permit IPv6 addresses to enable DBAs to specify privileges for clients that connect to the server over IPv6. See Section 6.2.3, “Specifying Account Names”. IPv6 addresses can be specified in account names in statements such as CREATE USER, GRANT, and REVOKE. For example:

    mysql> CREATE USER 'bill'@'::1' IDENTIFIED BY 'secret';
    mysql> GRANT SELECT ON mydb.* TO 'bill'@'::1';
    
  • IPv6 functions enable conversion between string and internal IPv6 address formats, and checking whether values represent valid IPv6 addresses. For example, INET6_ATON() and INET6_NTOA() are similar to INET_ATON() and INET_NTOA(), but handle IPv6 addresses in addition to IPv4 addresses. See Section 12.16, “Miscellaneous Functions”.
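
    For example, a round trip through the two functions:

    mysql> SELECT INET6_NTOA(INET6_ATON('::1'));
            -> ::1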

The following sections describe how to set up MySQL so that clients can connect to the server over IPv6.

5.1.9.1. Verifying System Support for IPv6

Before MySQL Server can accept IPv6 connections, the operating system on your server host must support IPv6. As a simple test to determine whether that is true, try this command:

shell> ping6 ::1
16 bytes from ::1, icmp_seq=0 hlim=64 time=0.171 ms
16 bytes from ::1, icmp_seq=1 hlim=64 time=0.077 ms
...

To produce a description of your system's network interfaces, invoke ifconfig -a and look for IPv6 addresses in the output.

If your host does not support IPv6, consult your system documentation for instructions on enabling it. It might be that you need only reconfigure an existing network interface to add an IPv6 address. Or a more extensive change might be needed, such as rebuilding the kernel with IPv6 options enabled.

5.1.9.2. Configuring the MySQL Server to Permit IPv6 Connections

The MySQL server listens on a single network socket for TCP/IP connections. This socket is bound to a single address, but it is possible for an address to map onto multiple network interfaces. To specify an address, use the --bind-address=addr option at server startup, where addr is an IPv4 or IPv6 address or a host name. (IPv6 addresses are not supported before MySQL 5.5.3.) If addr is a host name, the server resolves the name to an IP address and binds to that address.

The server treats different types of addresses as follows:

  • If the address is *, the server accepts TCP/IP connections on all server host IPv6 and IPv4 interfaces if the server host supports IPv6, or accepts TCP/IP connections on all IPv4 addresses otherwise. Use this address to permit both IPv4 and IPv6 connections on all server interfaces. This value is the default.

  • If the address is 0.0.0.0, the server accepts TCP/IP connections on all server host IPv4 interfaces.

  • If the address is ::, the server accepts TCP/IP connections on all server host IPv4 and IPv6 interfaces. Use this address to permit both IPv4 and IPv6 connections on all server interfaces.

  • If the address is an IPv4-mapped address, the server accepts TCP/IP connections for that address, in either IPv4 or IPv6 format. For example, if the server is bound to ::ffff:127.0.0.1, clients can connect using --host=127.0.0.1 or --host=::ffff:127.0.0.1.

  • If the address is a regular IPv4 or IPv6 address (such as 127.0.0.1 or ::1), the server accepts TCP/IP connections only for that IPv4 or IPv6 address.

If you intend to bind the server to a specific address, be sure that the mysql.user grant table contains an account with administrative privileges that you can use to connect to that address. Otherwise, you will not be able to shut down the server. For example, if you bind the server to *, you can connect to it using all existing accounts. But if you bind the server to ::1, it accepts connections only on that address. In that case, first make sure that the 'root'@'::1' account is present in the mysql.user table so you can still connect to the server to shut it down.

5.1.9.3. Connecting Using the IPv6 Local Host Address

The following procedure shows how to configure MySQL to permit IPv6 connections by clients that connect to the local server using the ::1 local host address. The instructions given here assume that your system supports IPv6.

  1. Start the MySQL server with an appropriate --bind-address option to permit it to accept IPv6 connections. For example, put the following lines in your server option file and restart the server:

    [mysqld]
    bind-address = *

    Alternatively, you can bind the server to ::1, but that makes the server more restrictive for TCP/IP connections. It accepts only IPv6 connections for that single address and rejects IPv4 connections. For more information, see Section 5.1.9.2, “Configuring the MySQL Server to Permit IPv6 Connections”.

  2. As an administrator, connect to the server and create an account for a local user who will connect from the ::1 local IPv6 host address:

    mysql> CREATE USER 'ipv6user'@'::1' IDENTIFIED BY 'ipv6pass';
    

    For the permitted syntax of IPv6 addresses in account names, see Section 6.2.3, “Specifying Account Names”. In addition to the CREATE USER statement, you can issue GRANT statements that give specific privileges to the account, although that is not necessary for the remaining steps in this procedure.

  3. Invoke the mysql client to connect to the server using the new account:

    shell> mysql -h ::1 -u ipv6user -pipv6pass
    
  4. Try some simple statements that show connection information:

    mysql> STATUS
    ...
    Connection:   ::1 via TCP/IP
    ...
    
    mysql> SELECT CURRENT_USER(), @@bind_address;
    +----------------+----------------+
    | CURRENT_USER() | @@bind_address |
    +----------------+----------------+
    | ipv6user@::1   | ::             |
    +----------------+----------------+
    

5.1.9.4. Connecting Using IPv6 Nonlocal Host Addresses

The following procedure shows how to configure MySQL to permit IPv6 connections by remote clients. It is similar to the preceding procedure for local clients, but the server and client hosts are distinct and each has its own nonlocal IPv6 address. The example uses these addresses:

Server host: 2001:db8:0:f101::1
Client host: 2001:db8:0:f101::2

These addresses are chosen from the nonroutable address range recommended by IANA for documentation purposes and suffice for testing on your local network. To accept IPv6 connections from clients outside the local network, the server host must have a public address. If your network provider assigns you an IPv6 address, you can use that. Otherwise, another way to obtain an address is to use an IPv6 broker; see Section 5.1.9.5, “Obtaining an IPv6 Address from a Broker”.

  1. Start the MySQL server with an appropriate --bind-address option to permit it to accept IPv6 connections. For example, put the following lines in your server option file and restart the server:

    [mysqld]
    bind-address = *

    Alternatively, you can bind the server to 2001:db8:0:f101::1, but that makes the server more restrictive for TCP/IP connections. It accepts only IPv6 connections for that single address and rejects IPv4 connections. For more information, see Section 5.1.9.2, “Configuring the MySQL Server to Permit IPv6 Connections”.

  2. On the server host (2001:db8:0:f101::1), create an account for a user who will connect from the client host (2001:db8:0:f101::2):

    mysql> CREATE USER 'remoteipv6user'@'2001:db8:0:f101::2' IDENTIFIED BY 'remoteipv6pass';
    
  3. On the client host (2001:db8:0:f101::2), invoke the mysql client to connect to the server using the new account:

    shell> mysql -h 2001:db8:0:f101::1 -u remoteipv6user -premoteipv6pass
    
  4. Try some simple statements that show connection information:

    mysql> STATUS
    ...
    Connection:   2001:db8:0:f101::1 via TCP/IP
    ...
    
    mysql> SELECT CURRENT_USER(), @@bind_address;
    +-----------------------------------+----------------+
    | CURRENT_USER()                    | @@bind_address |
    +-----------------------------------+----------------+
    | remoteipv6user@2001:db8:0:f101::2 | ::             |
    +-----------------------------------+----------------+
    

5.1.9.5. Obtaining an IPv6 Address from a Broker

If you do not have a public IPv6 address that enables your system to communicate over IPv6 outside your local network, you can obtain one from an IPv6 broker. The Wikipedia IPv6 Tunnel Broker page lists several brokers and their features, such as whether they provide static addresses and the supported routing protocols.

After configuring your server host to use a broker-supplied IPv6 address, start the MySQL server with an appropriate --bind-address option to permit the server to accept IPv6 connections. For example, put the following lines in the server option file and restart the server:

[mysqld]
bind-address = *

Alternatively, you can bind the server to the specific IPv6 address provided by the broker, but that makes the server more restrictive for TCP/IP connections. It accepts only IPv6 connections for that single address and rejects IPv4 connections. For more information, see Section 5.1.9.2, “Configuring the MySQL Server to Permit IPv6 Connections”. In addition, if the broker allocates dynamic addresses, the address provided for your system might change the next time you connect to the broker. If so, any accounts you create that name the original address become invalid. To bind to a specific address but avoid this change-of-address problem, you may be able to arrange with the broker for a static IPv6 address.

The following example shows how to use Freenet6 as the broker and the gogoc IPv6 client package on Gentoo Linux.

  1. Create an account at Freenet6 by visiting this URL and signing up:

    http://gogonet.gogo6.com
    
  2. After creating the account, go to this URL, sign in, and create a user ID and password for the IPv6 broker:

    http://gogonet.gogo6.com/page/freenet6-registration
    
  3. As root, install gogoc:

    shell> emerge gogoc
    
  4. Edit /etc/gogoc/gogoc.conf to set the userid and password values. For example:

    userid=gogouser
    passwd=gogopass
  5. Start gogoc:

    shell> /etc/init.d/gogoc start
    

    To start gogoc each time your system boots, execute this command:

    shell> rc-update add gogoc default
    
  6. Use ping6 to try to ping a host:

    shell> ping6 ipv6.google.com
    
  7. To see your IPv6 address:

    shell> ifconfig tun
    

5.1.10. Server-Side Help

MySQL Server supports a HELP statement that returns online information from the MySQL Reference Manual (see Section 13.8.3, “HELP Syntax”). The proper operation of this statement requires that the help tables in the mysql database be initialized with help topic information, which is done by processing the contents of the fill_help_tables.sql script.
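
For example, once the help tables are loaded, a client can request server-side help for a topic:

mysql> HELP 'CREATE TABLE';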

If you install MySQL using a binary or source distribution on Unix, help table setup occurs when you run mysql_install_db. For an RPM distribution on Linux or binary distribution on Windows, help table setup occurs as part of the MySQL installation process.

If you upgrade MySQL using a binary distribution, the help tables are not upgraded automatically, but you can upgrade them manually. Locate the fill_help_tables.sql file in the share or share/mysql directory. Change location into that directory and process the file with the mysql client as follows:

shell> mysql -u root mysql < fill_help_tables.sql

You can also obtain the latest fill_help_tables.sql at any time to upgrade your help tables. Download the proper file for your version of MySQL from http://dev.mysql.com/doc/index-other.html. After downloading and uncompressing the file, process it with mysql as described previously.

If you are working with Bazaar and a MySQL development source tree, you will need to download the fill_help_tables.sql file because the tree contains only a stub version.

5.1.11. Server Response to Signals

On Unix, signals can be sent to processes. mysqld responds to signals sent to it as follows:

  • SIGTERM causes the server to shut down.

  • SIGHUP causes the server to reload the grant tables and to flush tables, logs, the thread cache, and the host cache. These actions are like various forms of the FLUSH statement. The server also writes a status report to the error log that has this format:

    Status information:
    
    Current dir: /var/mysql/data/
    Running threads: 0  Stack size: 196608
    Current locks:
    
    Key caches:
    default
    Buffer_size:       8388600
    Block_size:           1024
    Division_limit:        100
    Age_limit:             300
    blocks used:             0
    not flushed:             0
    w_requests:              0
    writes:                  0
    r_requests:              0
    reads:                   0
    
    handler status:
    read_key:            0
    read_next:           0
    read_rnd             0
    read_first:          1
    write:               0
    delete               0
    update:              0
    
    Table status:
    Opened tables:          5
    Open tables:            0
    Open files:             7
    Open streams:           0
    
    Alarm status:
    Active alarms:   1
    Max used alarms: 2
    Next alarm time: 67

On some Mac OS X 10.3 versions, mysqld ignores SIGHUP and SIGQUIT.
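
For example, to make a running server reload its grant tables and flush its logs without shutting it down, you might send it SIGHUP. This sketch assumes the server's process ID is stored in the default host_name.pid file in the data directory (the path shown is illustrative):

shell> kill -HUP `cat /var/mysql/data/host_name.pid`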

5.1.12. The Shutdown Process

The server shutdown process takes place as follows:

  1. The shutdown process is initiated.

    This can be initiated in several ways. For example, a user with the SHUTDOWN privilege can execute a mysqladmin shutdown command. mysqladmin can be used on any platform supported by MySQL. Other operating system-specific shutdown initiation methods are possible as well: The server shuts down on Unix when it receives a SIGTERM signal. A server running as a service on Windows shuts down when the services manager tells it to.
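
    For example (connection options depend on your installation):

    shell> mysqladmin -u root -p shutdown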

  2. The server creates a shutdown thread if necessary.

    Depending on how shutdown was initiated, the server might create a thread to handle the shutdown process. If shutdown was requested by a client, a shutdown thread is created. If shutdown is the result of receiving a SIGTERM signal, the signal thread might handle shutdown itself, or it might create a separate thread to do so. If the server tries to create a shutdown thread and cannot (for example, if memory is exhausted), it issues a diagnostic message that appears in the error log:

    Error: Can't create thread to kill server
  3. The server stops accepting new connections.

    To prevent new activity from being initiated during shutdown, the server stops accepting new client connections by closing the handlers for the network interfaces to which it normally listens for connections: the TCP/IP port, the Unix socket file, the Windows named pipe, and shared memory on Windows.

  4. The server terminates current activity.

    For each thread associated with a client connection, the server breaks the connection to the client and marks the thread as killed. Threads die when they notice that they are so marked. Threads for idle connections die quickly. Threads that currently are processing statements check their state periodically and take longer to die. For additional information about thread termination, see Section 13.7.6.4, “KILL Syntax”, in particular for the instructions about killed REPAIR TABLE or OPTIMIZE TABLE operations on MyISAM tables.

    For threads that have an open transaction, the transaction is rolled back. Note that if a thread is updating a nontransactional table, an operation such as a multiple-row UPDATE or INSERT may leave the table partially updated because the operation can terminate before completion.

    If the server is a master replication server, it treats threads associated with currently connected slaves like other client threads. That is, each one is marked as killed and exits when it next checks its state.

    If the server is a slave replication server, it stops the I/O and SQL threads, if they are active, before marking client threads as killed. The SQL thread is permitted to finish its current statement (to avoid causing replication problems), and then stops. If the SQL thread is in the middle of a transaction at this point, the server waits until the current replication event group (if any) has finished executing, or until the user issues a KILL QUERY or KILL CONNECTION statement. See also Section 13.4.2.6, “STOP SLAVE Syntax”. Since nontransactional statements cannot be rolled back, in order to guarantee crash-safe replication, only transactional tables should be used.

    Note

    In order to guarantee crash safety on the slave, you must also run the slave with --relay-log-recovery enabled.

    See also Section 16.2.2, “Replication Relay and Status Logs”.

  5. The server shuts down or closes storage engines.

    At this stage, the server flushes the table cache and closes all open tables.

    Each storage engine performs any actions necessary for tables that it manages. InnoDB flushes its buffer pool to disk (unless innodb_fast_shutdown is 2), writes the current LSN to the tablespace, and terminates its own internal threads. MyISAM flushes any pending index writes for a table.

  6. The server exits.

5.2. MySQL Server Logs

MySQL Server has several logs that can help you find out what activity is taking place.

Log Type              Information Written to Log
Error log             Problems encountered starting, running, or stopping mysqld
General query log     Established client connections and statements received from clients
Binary log            Statements that change data (also used for replication)
Relay log             Data changes received from a replication master server
Slow query log        Queries that took more than long_query_time seconds to execute

By default, no logs are enabled (except the error log on Windows). The following log-specific sections provide information about the server options that enable logging.

By default, the server writes files for all enabled logs in the data directory. You can force the server to close and reopen the log files (or in some cases switch to a new log file) by flushing the logs. Log flushing occurs when you issue a FLUSH LOGS statement; execute mysqladmin with a flush-logs or refresh argument; or execute mysqldump with a --flush-logs or --master-data option. See Section 13.7.6.3, “FLUSH Syntax”, Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”, and Section 4.5.4, “mysqldump — A Database Backup Program”. In addition, the binary log is flushed when its size reaches the value of the max_binlog_size system variable.
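
For example, either of the following flushes the logs (the mysqladmin invocation assumes suitable connection options):

mysql> FLUSH LOGS;

shell> mysqladmin -u root -p flush-logs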

You can control the general query and slow query logs during runtime. You can enable or disable logging, or change the log file name. You can tell the server to write general query and slow query entries to log tables, log files, or both. For details, see Section 5.2.1, “Selecting General Query and Slow Query Log Output Destinations”, Section 5.2.3, “The General Query Log”, and Section 5.2.5, “The Slow Query Log”.

The relay log is used only on slave replication servers, to hold data changes from the master server that must also be made on the slave. For discussion of relay log contents and configuration, see Section 16.2.2.1, “The Slave Relay Log”.

For information about log maintenance operations such as expiration of old log files, see Section 5.2.6, “Server Log Maintenance”.

For information about keeping logs secure, see Section 6.1.2.3, “Passwords and Logging”.

5.2.1. Selecting General Query and Slow Query Log Output Destinations

MySQL Server provides flexible control over the destination of output to the general query log and the slow query log, if those logs are enabled. Possible destinations for log entries are log files or the general_log and slow_log tables in the mysql database. Either or both destinations can be selected.

Log control at server startup. The --log-output option specifies the destination for log output. This option does not in itself enable the logs. Its syntax is --log-output[=value,...]:

  • If --log-output is given with a value, the value should be a comma-separated list of one or more of the words TABLE (log to tables), FILE (log to files), or NONE (do not log to tables or files). NONE, if present, takes precedence over any other specifiers.

  • If --log-output is omitted, the default logging destination is FILE.

The general_log system variable controls logging to the general query log for the selected log destinations. If specified at server startup, general_log takes an optional argument of 1 or 0 to enable or disable the log. To specify a file name other than the default for file logging, set the general_log_file variable. Similarly, the slow_query_log variable controls logging to the slow query log for the selected destinations and setting slow_query_log_file specifies a file name for file logging. If either log is enabled, the server opens the corresponding log file and writes startup messages to it. However, further logging of queries to the file does not occur unless the FILE log destination is selected.

Examples:

  • To write general query log entries to the log table and the log file, use --log-output=TABLE,FILE to select both log destinations and --general_log to enable the general query log.

  • To write general and slow query log entries only to the log tables, use --log-output=TABLE to select tables as the log destination and --general_log and --slow_query_log to enable both logs.

  • To write slow query log entries only to the log file, use --log-output=FILE to select files as the log destination and --slow_query_log to enable the slow query log. (In this case, because the default log destination is FILE, you could omit the --log-output option.)
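
As a combined sketch, an option file along these lines selects both destinations and enables both logs:

[mysqld]
general_log=1
slow_query_log=1
log-output=TABLE,FILE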

Log control at runtime. The system variables associated with log tables and files enable runtime control over logging:

  • The global log_output system variable indicates the current logging destination. It can be modified at runtime to change the destination.

  • The global general_log and slow_query_log variables indicate whether the general query log and slow query log are enabled (ON) or disabled (OFF). You can set these variables at runtime to control whether the logs are enabled.

  • The global general_log_file and slow_query_log_file variables indicate the names of the general query log and slow query log files. You can set these variables at server startup or at runtime to change the names of the log files.

  • To disable or enable general query logging for the current connection, set the session sql_log_off variable to ON or OFF.
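
For example, to enable both logs and direct them to the log tables at runtime, a sketch:

SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
SET GLOBAL slow_query_log = 'ON';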

The use of tables for log output offers the following benefits:

  • Log entries have a standard format. To display the current structure of the log tables, use these statements:

    SHOW CREATE TABLE mysql.general_log;
    SHOW CREATE TABLE mysql.slow_log;
  • Log contents are accessible through SQL statements. This enables the use of queries that select only those log entries that satisfy specific criteria. For example, to select log contents associated with a particular client (which can be useful for identifying problematic queries from that client), it is easier to do this using a log table than a log file.
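
    A sketch of such a query (the host pattern is illustrative; the column names are those of the mysql.general_log table):

    SELECT event_time, argument
    FROM mysql.general_log
    WHERE user_host LIKE '%some_client_host%'
    ORDER BY event_time;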

  • Logs are accessible remotely through any client that can connect to the server and issue queries (if the client has the appropriate log table privileges). It is not necessary to log in to the server host and directly access the file system.

The log table implementation has the following characteristics:

  • In general, the primary purpose of log tables is to provide an interface for users to observe the runtime execution of the server, not to interfere with its runtime execution.

  • CREATE TABLE, ALTER TABLE, and DROP TABLE are valid operations on a log table. For ALTER TABLE and DROP TABLE, the log table cannot be in use and must be disabled, as described later.

  • By default, the log tables use the CSV storage engine that writes data in comma-separated values format. For users who have access to the .CSV files that contain log table data, the files are easy to import into other programs such as spreadsheets that can process CSV input.

    The log tables can be altered to use the MyISAM storage engine. You cannot use ALTER TABLE to alter a log table that is in use. The log must be disabled first. No engines other than CSV or MyISAM are legal for the log tables.

  • To disable logging so that you can alter (or drop) a log table, you can use the following strategy. The example uses the general query log; the procedure for the slow query log is similar but uses the slow_log table and slow_query_log system variable.

    SET @old_log_state = @@global.general_log;
    SET GLOBAL general_log = 'OFF';
    ALTER TABLE mysql.general_log ENGINE = MyISAM;
    SET GLOBAL general_log = @old_log_state;
  • TRUNCATE TABLE is a valid operation on a log table. It can be used to expire log entries.

  • RENAME TABLE is a valid operation on a log table. You can atomically rename a log table (to perform log rotation, for example) using the following strategy:

    USE mysql;
    DROP TABLE IF EXISTS general_log2;
    CREATE TABLE general_log2 LIKE general_log;
    RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
  • CHECK TABLE is a valid operation on a log table.

  • LOCK TABLES cannot be used on a log table.

  • INSERT, DELETE, and UPDATE cannot be used on a log table. These operations are permitted only internally to the server itself.

  • FLUSH TABLES WITH READ LOCK and the state of the global read_only system variable have no effect on log tables. The server can always write to the log tables.

  • Entries written to the log tables are not written to the binary log and thus are not replicated to slave servers.

  • To flush the log tables or log files, use FLUSH TABLES or FLUSH LOGS, respectively.

  • Partitioning of log tables is not permitted.

  • A mysqldump dump includes statements to recreate those tables so that they are not missing after reloading the dump file. Log table contents are not dumped.

5.2.2. The Error Log

The error log contains information indicating when mysqld was started and stopped and also any critical errors that occur while the server is running. If mysqld notices a table that needs to be automatically checked or repaired, it writes a message to the error log.

On some operating systems, the error log contains a stack trace if mysqld dies. The trace can be used to determine where mysqld died. See Section 22.4, “Debugging and Porting MySQL”.

In the following discussion, console means stderr, the standard error output; this is your terminal or console window unless the standard error output has been redirected. (For example, if invoked with the --syslog option, mysqld_safe arranges for the server's stderr to be sent to the syslog facility, as described later.)

On Windows, the --log-error and --console options both affect error logging:

  • Without --log-error, mysqld writes error messages to host_name.err in the data directory.

  • With --log-error[=file_name], mysqld writes error messages to an error log file. The server uses the named file if present, creating it in the data directory unless an absolute path name is given to specify a different directory. If no file is named, the default name is host_name.err in the data directory.

  • If --console is given, mysqld writes error messages to the console. --log-error, if given, is ignored and has no effect. If both options are present, their order does not matter: --console takes precedence and error messages go to the console. (In MySQL 5.5 and 5.6, the precedence is reversed: --log-error causes --console to be ignored.)

In addition, on Windows, events and error messages are written to the Windows Event Log within the Application log. Entries marked as Warning and Note are written to the Event Log, but not informational messages such as information statements from individual storage engines. These log entries have a source of MySQL. You cannot disable writing information to the Windows Event Log.

On Unix and Unix-like systems, mysqld writes error log messages as follows:

  • Without --log-error, mysqld writes error messages to the console.

  • With --log-error[=file_name], mysqld writes error messages to an error log file. The server uses the named file if present, creating it in the data directory unless an absolute path name is given to specify a different directory. If no file is named, the default name is host_name.err in the data directory.

At runtime, if the server writes error messages to the console, it sets the log_error system variable to stderr. Otherwise, log_error indicates the error log file name. In particular, on Windows, --console overrides use of an error log file and sends error messages to the console, so log_error is set to stderr. This occurs even if --log-error is also given.

If you flush the logs using FLUSH LOGS or mysqladmin flush-logs and mysqld is writing the error log to a file (for example, if it was started with the --log-error option), the server closes and reopens the log file. To rename the file, do so manually before flushing. Then flushing the logs reopens a new file with the original file name. For example, you can rename the file and create a new one using the following commands:

shell> mv host_name.err host_name.err-old
shell> mysqladmin flush-logs
shell> mv host_name.err-old backup-directory

On Windows, use rename rather than mv.

No error log renaming occurs when the logs are flushed if the server is not writing to a named file.

If you use mysqld_safe to start mysqld, mysqld_safe arranges for mysqld to write error messages to a log file or to syslog. mysqld_safe has three error-logging options, --syslog, --skip-syslog, and --log-error. The default with no logging options or with --skip-syslog is to use the default log file. To explicitly specify use of an error log file, specify --log-error=file_name to mysqld_safe, and mysqld_safe will arrange for mysqld to write messages to a log file. To use syslog instead, specify the --syslog option.

If you specify --log-error in an option file in a [mysqld], [server], or [mysqld_safe] section, mysqld_safe will find and use the option.

If mysqld_safe is used to start mysqld and mysqld dies unexpectedly, mysqld_safe notices that it needs to restart mysqld and writes a restarted mysqld message to the error log.

The --log-warnings option or log_warnings system variable can be used to control warning logging to the error log. The default value is enabled (1). Warning logging can be disabled using a value of 0. If the value is greater than 1, aborted connections are written to the error log, and access-denied errors for new connection attempts are written. See Section C.5.2.11, “Communication Errors and Aborted Connections”.
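
For example, to also log aborted connections and access-denied errors for new connection attempts, you might set the variable at runtime:

SET GLOBAL log_warnings = 2;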

5.2.3. The General Query Log

The general query log is a general record of what mysqld is doing. The server writes information to this log when clients connect or disconnect, and it logs each SQL statement received from clients. The general query log can be very useful when you suspect an error in a client and want to know exactly what the client sent to mysqld.

mysqld writes statements to the query log in the order that it receives them, which might differ from the order in which they are executed. This logging order is in contrast with that of the binary log, for which statements are written after they are executed but before any locks are released. In addition, the query log may contain statements that only select data, whereas such statements are never written to the binary log.

When using statement-based logging, all statements are written to the query log, but when using row-based logging, updates are sent as row changes rather than SQL statements, and thus these statements are never written to the query log when binlog_format is ROW. A given update also might not be written to the query log when this variable is set to MIXED, depending on the statement used. See Section 16.1.2.1, “Advantages and Disadvantages of Statement-Based and Row-Based Replication”, for more information.

By default, the general query log is disabled. To specify the initial general query log state explicitly, use --general_log[={0|1}]. With no argument or an argument of 1, --general_log enables the log. With an argument of 0, this option disables the log. To specify a log file name, use --general_log_file=file_name. To specify the log destination, use --log-output (as described in Section 5.2.1, “Selecting General Query and Slow Query Log Output Destinations”).

If you specify no name for the general query log file, the default name is host_name.log. The server creates the file in the data directory unless an absolute path name is given to specify a different directory.

To disable or enable the general query log or change the log file name at runtime, use the global general_log and general_log_file system variables. Set general_log to 0 (or OFF) to disable the log or to 1 (or ON) to enable it. Set general_log_file to specify the name of the log file. If a log file already is open, it is closed and the new file is opened.
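
As a minimal sketch, assuming a hypothetical log file path, you could switch the general query log to a new file at runtime like this:

SET GLOBAL general_log_file = '/var/log/mysql/query.log';
SET GLOBAL general_log = 'ON';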

When the general query log is enabled, the server writes output to any destinations specified by the --log-output option or log_output system variable. If you enable the log, the server opens the log file and writes startup messages to it. However, further logging of queries to the file does not occur unless the FILE log destination is selected. If the destination is NONE, the server writes no queries even if the general log is enabled. Setting the log file name has no effect on logging if the log destination value does not contain FILE.

Server restarts and log flushing do not cause a new general query log file to be generated (although flushing closes and reopens it). To rename the file and create a new one, use the following commands:

shell> mv host_name.log host_name-old.log
shell> mysqladmin flush-logs
shell> mv host_name-old.log backup-directory

On Windows, use rename rather than mv.

You can also rename the general query log file at runtime by disabling the log:

SET GLOBAL general_log = 'OFF';

With the log disabled, rename the log file externally; for example, from the command line. Then enable the log again:

SET GLOBAL general_log = 'ON';

This method works on any platform and does not require a server restart.

Within a session, setting the sql_log_off system variable to ON disables general query logging for the current connection; setting it to OFF enables it again.
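
A minimal illustration for a single session:

SET sql_log_off = ON;   -- statements from this connection are no longer logged
SET sql_log_off = OFF;  -- resume general query logging for this connection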

Passwords in statements written to the general query log are rewritten by the server so that they do not occur literally in plain text. Password rewriting can be suppressed for the general query log by starting the server with the --log-raw option. This option may be useful for diagnostic purposes, to see the exact text of statements as received by the server, but for security reasons it is not recommended for production use. See also Section 6.1.2.3, “Passwords and Logging”.

5.2.4. The Binary Log

The binary log contains events that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes (for example, a DELETE which matched no rows), unless row-based logging is used. The binary log also contains information about how long each statement took that updated data. The binary log has two important purposes:

  • For replication, the binary log on a master replication server provides a record of the data changes to be sent to slave servers. The master server sends the events contained in its binary log to its slaves, which execute those events to make the same data changes that were made on the master. See Section 16.2, “Replication Implementation”.

  • Certain data recovery operations require use of the binary log. After a backup has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup. See Section 7.5, “Point-in-Time (Incremental) Recovery Using the Binary Log”.

The binary log is not used for statements such as SELECT or SHOW that do not modify data. To log all statements (for example, to identify a problem query), use the general query log. See Section 5.2.3, “The General Query Log”.

Running a server with binary logging enabled causes a slight reduction in performance. However, the benefits of the binary log in enabling you to set up replication and for restore operations generally outweigh this minor performance decrement.

The binary log is crash-safe. Only complete events or transactions are logged or read back.

Passwords in statements written to the binary log are rewritten by the server so that they do not occur literally in plain text. See also Section 6.1.2.3, “Passwords and Logging”.

The following discussion describes some of the server options and variables that affect the operation of binary logging. For a complete list, see Section 16.1.4.4, “Binary Log Options and Variables”.

To enable the binary log, start the server with the --log-bin[=base_name] option. If no base_name value is given, the default name is the value of the pid-file option (which by default is the name of the host machine) followed by -bin. If a basename is given, the server writes the file in the data directory unless the basename includes a leading absolute path that specifies a different directory. It is recommended that you specify a basename explicitly rather than using the default of the host name; see Section C.5.8, “Known Issues in MySQL”, for the reason.
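
For example, a my.cnf entry such as the following (the basename mysql-bin is simply an illustrative choice) enables binary logging with an explicit basename:

[mysqld]
log-bin=mysql-bin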

If you supply an extension in the log name (for example, --log-bin=base_name.extension), the extension is silently removed and ignored.

mysqld appends a numeric extension to the binary log basename to generate binary log file names. The number increases each time the server creates a new log file, thus creating an ordered series of files. The server creates a new file in the series each time it starts or flushes the logs. The server also creates a new binary log file automatically after the current log's size reaches max_binlog_size. A binary log file may become larger than max_binlog_size if you are using large transactions because a transaction is written to the file in one piece, never split between files.

To keep track of which binary log files have been used, mysqld also creates a binary log index file that contains the names of all used binary log files. By default, this has the same basename as the binary log file, with the extension '.index'. You can change the name of the binary log index file with the --log-bin-index[=file_name] option. You should not manually edit this file while mysqld is running; doing so would confuse mysqld.

The term binary log file generally denotes an individual numbered file containing database events. The term binary log collectively denotes the set of numbered binary log files plus the index file.

A client that has the SUPER privilege can disable binary logging of its own statements by using a SET sql_log_bin=0 statement. See Section 5.1.4, “Server System Variables”.

By default, the server logs the length of the event as well as the event itself and uses this to verify that the event was written correctly. You can also cause the server to write checksums for the events by setting the binlog_checksum system variable. When reading back from the binary log, the master uses the event length by default, but can be made to use checksums if available by enabling the master_verify_checksum system variable. The slave I/O thread also verifies events received from the master. You can cause the slave SQL thread to use checksums if available when reading from the relay log by enabling the slave_sql_verify_checksum system variable.
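
As a sketch, these checksum-related settings can be enabled at runtime; CRC32 is the non-NONE checksum value, and the master/slave placement shown is only illustrative:

SET GLOBAL binlog_checksum = 'CRC32';      -- on the master: write checksums to the binary log
SET GLOBAL master_verify_checksum = 1;     -- on the master: verify checksums when reading the binary log
SET GLOBAL slave_sql_verify_checksum = 1;  -- on the slave: verify checksums when reading the relay log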

The format of the events recorded in the binary log depends on the binary logging format. Three logging formats are supported: statement-based logging, row-based logging, and mixed logging. The binary logging format used depends on the MySQL version. For general descriptions of the logging formats, see Section 5.2.4.1, “Binary Logging Formats”. For detailed information about the format of the binary log, see MySQL Internals: The Binary Log.

The server evaluates the --binlog-do-db and --binlog-ignore-db options in the same way as it does the --replicate-do-db and --replicate-ignore-db options. For information about how this is done, see Section 16.2.3.1, “Evaluation of Database-Level Replication and Binary Logging Options”.

A replication slave server by default does not write to its own binary log any data modifications that are received from the replication master. To log these modifications, start the slave with the --log-slave-updates option in addition to the --log-bin option (see Section 16.1.4.3, “Replication Slave Options and Variables”). This is done when a slave is also to act as a master to other slaves in chained replication.

You can delete all binary log files with the RESET MASTER statement, or a subset of them with PURGE BINARY LOGS. See Section 13.7.6.6, “RESET Syntax”, and Section 13.4.1.1, “PURGE BINARY LOGS Syntax”.

If you are using replication, you should not delete old binary log files on the master until you are sure that no slave still needs to use them. For example, if your slaves never run more than three days behind, once a day you can execute mysqladmin flush-logs on the master and then remove any logs that are more than three days old. You can remove the files manually, but it is preferable to use PURGE BINARY LOGS, which also safely updates the binary log index file for you (and which can take a date argument). See Section 13.4.1.1, “PURGE BINARY LOGS Syntax”.
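
For example, either of the following statements removes old binary log files (the file name and date shown are illustrative):

PURGE BINARY LOGS TO 'mysql-bin.000010';
PURGE BINARY LOGS BEFORE '2013-04-02 22:46:26';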

You can display the contents of binary log files with the mysqlbinlog utility. This can be useful when you want to reprocess statements in the log for a recovery operation. For example, you can update a MySQL server from the binary log as follows:

shell> mysqlbinlog log_file | mysql -h server_name

mysqlbinlog also can be used to display replication slave relay log file contents because they are written using the same format as binary log files. For more information on the mysqlbinlog utility and how to use it, see Section 4.6.8, “mysqlbinlog — Utility for Processing Binary Log Files”. For more information about the binary log and recovery operations, see Section 7.5, “Point-in-Time (Incremental) Recovery Using the Binary Log”.

Binary logging is done immediately after a statement or transaction completes but before any locks are released or any commit is done. This ensures that events are logged in commit order.

Updates to nontransactional tables are stored in the binary log immediately after execution.

Within an uncommitted transaction, all updates (UPDATE, DELETE, or INSERT) that change transactional tables such as InnoDB tables are cached until a COMMIT statement is received by the server. At that point, mysqld writes the entire transaction to the binary log before the COMMIT is executed.

Modifications to nontransactional tables cannot be rolled back. If a transaction that is rolled back includes modifications to nontransactional tables, the entire transaction is logged with a ROLLBACK statement at the end to ensure that the modifications to those tables are replicated.

When a thread that handles the transaction starts, it allocates a buffer of binlog_cache_size bytes to buffer statements. If a statement is bigger than this, the thread opens a temporary file to store the transaction. The temporary file is deleted when the thread ends.

The Binlog_cache_use status variable shows the number of transactions that used this buffer (and possibly a temporary file) for storing statements. The Binlog_cache_disk_use status variable shows how many of those transactions actually had to use a temporary file. These two variables can be used for tuning binlog_cache_size to a large enough value that avoids the use of temporary files.
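
For example, you can compare the two counters at runtime to judge whether binlog_cache_size is large enough:

SHOW GLOBAL STATUS LIKE 'Binlog_cache%';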

The max_binlog_cache_size system variable (default 4GB, which is also the maximum) can be used to restrict the total size used to cache a multiple-statement transaction. If a transaction is larger than this many bytes, it fails and rolls back. The minimum value is 4096.

If you are using the binary log and row-based logging, concurrent inserts are converted to normal inserts for CREATE ... SELECT or INSERT ... SELECT statements. This is done to ensure that you can re-create an exact copy of your tables by applying the log during a backup operation. If you are using statement-based logging, the original statement is written to the log.

The binary log format has some known limitations that can affect recovery from backups. See Section 16.4.1, “Replication Features and Issues”.

Binary logging for stored programs is done as described in Section 18.7, “Binary Logging of Stored Programs”.

Note that the binary log format differs in MySQL 5.7 from previous versions of MySQL, due to enhancements in replication. See Section 16.4.2, “Replication Compatibility Between MySQL Versions”.

Writes to the binary log file and binary log index file are handled in the same way as writes to MyISAM tables. See Section C.5.4.3, “How MySQL Handles a Full Disk”.

By default, the binary log is not synchronized to disk at each write. So if the operating system or machine (not only the MySQL server) crashes, there is a chance that the last statements of the binary log are lost. To prevent this, you can make the binary log be synchronized to disk after every N writes to the binary log, with the sync_binlog system variable. See Section 5.1.4, “Server System Variables”. 1 is the safest value for sync_binlog, but also the slowest. Even with sync_binlog set to 1, there is still the chance of an inconsistency between the table content and binary log content in case of a crash. For example, if you are using InnoDB tables and the MySQL server processes a COMMIT statement, it writes the whole transaction to the binary log and then commits this transaction into InnoDB. If the server crashes between those two operations, the transaction is rolled back by InnoDB at restart but still exists in the binary log. To resolve this, you should set --innodb_support_xa to 1. Although this option is related to the support of XA transactions in InnoDB, it also ensures that the binary log and InnoDB data files are synchronized.

For this option to provide a greater degree of safety, the MySQL server should also be configured to synchronize the binary log and the InnoDB logs to disk before committing the transaction. The InnoDB logs are synchronized by default, and sync_binlog=1 can be used to synchronize the binary log. The effect of this option is that at restart after a crash, after doing a rollback of transactions, the MySQL server cuts rolled back InnoDB transactions from the binary log. This ensures that the binary log reflects the exact data of InnoDB tables, and so, that the slave remains in synchrony with the master (not receiving a statement which has been rolled back).
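
A my.cnf sketch of the durability settings described above (the combination is shown only as an illustration, not as required values):

[mysqld]
sync_binlog=1
innodb_support_xa=1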

If the MySQL server discovers at crash recovery that the binary log is shorter than it should have been, it lacks at least one successfully committed InnoDB transaction and prints the error message The binary log file_name is shorter than its expected size. This should not happen if sync_binlog=1 and the disk and file system perform an actual sync when requested to do so (some do not). In this case, the binary log is not correct and replication should be restarted from a fresh snapshot of the master's data.

The session values of the following system variables are written to the binary log and honored by the replication slave when parsing the binary log:

5.2.4.1. Binary Logging Formats

The server uses several logging formats to record information in the binary log. The exact format employed depends on the version of MySQL being used. There are three logging formats:

  • Replication capabilities in MySQL originally were based on propagation of SQL statements from master to slave. This is called statement-based logging. You can cause this format to be used by starting the server with --binlog-format=STATEMENT.

  • In row-based logging, the master writes events to the binary log that indicate how individual table rows are affected. You can cause the server to use row-based logging by starting it with --binlog-format=ROW.

  • A third option is also available: mixed logging. With mixed logging, statement-based logging is used by default, but the logging mode switches automatically to row-based in certain cases as described below. You can cause MySQL to use mixed logging explicitly by starting mysqld with the option --binlog-format=MIXED.

In MySQL 5.7, the default binary logging format is STATEMENT.

The logging format can also be set or limited by the storage engine being used. This helps to eliminate issues when replicating certain statements between a master and slave which are using different storage engines.

With statement-based replication, there may be issues with replicating nondeterministic statements. In deciding whether or not a given statement is safe for statement-based replication, MySQL determines whether it can guarantee that the statement can be replicated using statement-based logging. If MySQL cannot make this guarantee, it marks the statement as potentially unreliable and issues the warning, Statement may not be safe to log in statement format.

You can avoid these issues by using MySQL's row-based replication instead.

5.2.4.2. Setting The Binary Log Format

You can select the binary logging format explicitly by starting the MySQL server with --binlog-format=type. The supported values for type are:

  • STATEMENT causes logging to be statement based.

  • ROW causes logging to be row based.

  • MIXED causes logging to use mixed format.

In MySQL 5.7, the default binary logging format is STATEMENT.

The logging format also can be switched at runtime. To specify the format globally for all clients, set the global value of the binlog_format system variable:

mysql> SET GLOBAL binlog_format = 'STATEMENT';
mysql> SET GLOBAL binlog_format = 'ROW';
mysql> SET GLOBAL binlog_format = 'MIXED';

An individual client can control the logging format for its own statements by setting the session value of binlog_format:

mysql> SET SESSION binlog_format = 'STATEMENT';
mysql> SET SESSION binlog_format = 'ROW';
mysql> SET SESSION binlog_format = 'MIXED';
Note

Each MySQL Server can set its own and only its own binary logging format (true whether binlog_format is set with global or session scope). This means that changing the logging format on a replication master does not cause a slave to change its logging format to match. (When using STATEMENT mode, the binlog_format system variable is not replicated; when using MIXED or ROW logging mode, it is replicated but is ignored by the slave.) Changing the binary logging format on the master while replication is ongoing, or without also changing it on the slave, can thus cause unexpected results, or even cause replication to fail altogether.

To change the global or session binlog_format value, you must have the SUPER privilege.

In addition to switching the logging format manually, a slave server may switch the format automatically. This happens when the server is running in either STATEMENT or MIXED format and encounters an event in the binary log that is written in ROW logging format. In that case, the slave switches to row-based replication temporarily for that event, and switches back to the previous format afterward.

There are several reasons why a client might want to set binary logging on a per-session basis:

  • A session that makes many small changes to the database might want to use row-based logging.

  • A session that performs updates that match many rows in the WHERE clause might want to use statement-based logging because it will be more efficient to log a few statements than many rows.

  • Some statements require a lot of execution time on the master, but result in just a few rows being modified. It might therefore be beneficial to replicate them using row-based logging.

There are exceptions when you cannot switch the replication format at runtime:

  • From within a stored function or a trigger

  • If the NDBCLUSTER storage engine is enabled

  • If the session is currently in row-based replication mode and has open temporary tables

Trying to switch the format in any of these cases results in an error.

If you are using InnoDB tables and the transaction isolation level is READ COMMITTED or READ UNCOMMITTED, only row-based logging can be used. It is possible to change the logging format to STATEMENT, but doing so at runtime leads very rapidly to errors because InnoDB can no longer perform inserts.

Switching the replication format at runtime is not recommended when any temporary tables exist, because temporary tables are logged only when using statement-based replication, whereas with row-based replication they are not logged. With mixed replication, temporary tables are usually logged; exceptions happen with user-defined functions (UDFs) and with the UUID() function.

With the binary log format set to ROW, many changes are written to the binary log using the row-based format. Some changes, however, still use the statement-based format. Examples include all DDL (data definition language) statements such as CREATE TABLE, ALTER TABLE, or DROP TABLE.

The --binlog-row-event-max-size option is available for servers that are capable of row-based replication. Rows are stored into the binary log in chunks having a size in bytes not exceeding the value of this option. The value must be a multiple of 256. The default value is 1024.
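
For example, to use larger chunks, you might set the option in my.cnf; the value must remain a multiple of 256, and 2048 here is purely illustrative:

[mysqld]
binlog-row-event-max-size=2048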

Warning

When using statement-based logging for replication, it is possible for the data on the master and slave to become different if a statement is designed in such a way that the data modification is nondeterministic; that is, it is left to the will of the query optimizer. In general, this is not a good practice even outside of replication. For a detailed explanation of this issue, see Section C.5.8, “Known Issues in MySQL”.

For information about logs kept by replication slaves, see Section 16.2.2, “Replication Relay and Status Logs”.

5.2.4.3. Mixed Binary Logging Format

When running in MIXED logging format, the server automatically switches from statement-based to row-based logging under the following conditions:

Note

A warning is generated if you try to execute a statement using statement-based logging that should be written using row-based logging. The warning is shown both in the client (in the output of SHOW WARNINGS) and through the mysqld error log. A warning is added to the SHOW WARNINGS table each time such a statement is executed. However, only the first statement that generated the warning for each client session is written to the error log to prevent flooding the log.

In addition to the decisions above, individual engines can also determine the logging format used when information in a table is updated. The logging capabilities of an individual engine can be defined as follows:

  • If an engine supports row-based logging, the engine is said to be row-logging capable.

  • If an engine supports statement-based logging, the engine is said to be statement-logging capable.

A given storage engine can support either or both logging formats. The following table lists the formats supported by each engine.

Storage Engine | Row Logging Supported | Statement Logging Supported
ARCHIVE        | Yes                   | Yes
BLACKHOLE      | Yes                   | Yes
CSV            | Yes                   | Yes
EXAMPLE        | Yes                   | No
FEDERATED      | Yes                   | Yes
HEAP           | Yes                   | Yes
InnoDB         | Yes                   | Yes, when the transaction isolation level is REPEATABLE READ or SERIALIZABLE; No otherwise
MyISAM         | Yes                   | Yes
MERGE          | Yes                   | Yes
NDBCLUSTER     | Yes                   | No

In MySQL 5.7, whether a statement is to be logged and the logging mode to be used is determined according to the type of statement (safe, unsafe, or binary injected), the binary logging format (STATEMENT, ROW, or MIXED), and the logging capabilities of the storage engine (statement capable, row capable, both, or neither). Statements may be logged with or without a warning; failed statements are not logged, but generate errors in the log. This is shown in the following decision table, where SLC stands for statement-logging capable and RLC stands for row-logging capable.

Type | binlog_format | SLC | RLC | Error / Warning | Logged as
* | * | No | No | Error: Cannot execute statement: Binary logging is impossible since at least one engine is involved that is both row-incapable and statement-incapable. | -
Safe | STATEMENT | Yes | No | - | STATEMENT
Safe | MIXED | Yes | No | - | STATEMENT
Safe | ROW | Yes | No | Error: Cannot execute statement: Binary logging is impossible since BINLOG_FORMAT = ROW and at least one table uses a storage engine that is not capable of row-based logging. | -
Unsafe | STATEMENT | Yes | No | Warning: Unsafe statement binlogged in statement format, since BINLOG_FORMAT = STATEMENT | STATEMENT
Unsafe | MIXED | Yes | No | Error: Cannot execute statement: Binary logging of an unsafe statement is impossible when the storage engine is limited to statement-based logging, even if BINLOG_FORMAT = MIXED. | -
Unsafe | ROW | Yes | No | Error: Cannot execute statement: Binary logging is impossible since BINLOG_FORMAT = ROW and at least one table uses a storage engine that is not capable of row-based logging. | -
Row Injection | STATEMENT | Yes | No | Error: Cannot execute row injection: Binary logging is not possible since at least one table uses a storage engine that is not capable of row-based logging. | -
Row Injection | MIXED | Yes | No | Error: Cannot execute row injection: Binary logging is not possible since at least one table uses a storage engine that is not capable of row-based logging. | -
Row Injection | ROW | Yes | No | Error: Cannot execute row injection: Binary logging is not possible since at least one table uses a storage engine that is not capable of row-based logging. | -
Safe | STATEMENT | No | Yes | Error: Cannot execute statement: Binary logging is impossible since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine that is not capable of statement-based logging. | -
Safe | MIXED | No | Yes | - | ROW
Safe | ROW | No | Yes | - | ROW
Unsafe | STATEMENT | No | Yes | Error: Cannot execute statement: Binary logging is impossible since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine that is not capable of statement-based logging. | -
Unsafe | MIXED | No | Yes | - | ROW
Unsafe | ROW | No | Yes | - | ROW
Row Injection | STATEMENT | No | Yes | Error: Cannot execute row injection: Binary logging is not possible since BINLOG_FORMAT = STATEMENT. | -
Row Injection | MIXED | No | Yes | - | ROW
Row Injection | ROW | No | Yes | - | ROW
Safe | STATEMENT | Yes | Yes | - | STATEMENT
Safe | MIXED | Yes | Yes | - | ROW
Safe | ROW | Yes | Yes | - | ROW
Unsafe | STATEMENT | Yes | Yes | Warning: Unsafe statement binlogged in statement format since BINLOG_FORMAT = STATEMENT. | STATEMENT
Unsafe | MIXED | Yes | Yes | - | ROW
Unsafe | ROW | Yes | Yes | - | ROW
Row Injection | STATEMENT | Yes | Yes | Error: Cannot execute row injection: Binary logging is not possible because BINLOG_FORMAT = STATEMENT. | -
Row Injection | MIXED | Yes | Yes | - | ROW
Row Injection | ROW | Yes | Yes | - | ROW

When a warning is produced by the determination, a standard MySQL warning is produced (and is available using SHOW WARNINGS). The information is also written to the mysqld error log. Only one error for each error instance per client connection is logged to prevent flooding the log. The log message includes the SQL statement that was attempted.

If a slave server was started with --log-warnings enabled, the slave prints messages to the error log to provide information about its status, such as the binary log and relay log coordinates where it starts its job, when it is switching to another relay log, when it reconnects after a disconnect, and so forth.

5.2.4.4. Logging Format for Changes to mysql Database Tables

The contents of the grant tables in the mysql database can be modified directly (for example, with INSERT or DELETE) or indirectly (for example, with GRANT or CREATE USER). Statements that affect mysql database tables are written to the binary log using the following rules:

CREATE TABLE ... SELECT is a combination of data definition and data manipulation. The CREATE TABLE part is logged using statement format and the SELECT part is logged according to the value of binlog_format.

5.2.5. The Slow Query Log

The slow query log consists of SQL statements that took more than long_query_time seconds to execute and required at least min_examined_row_limit rows to be examined. The minimum and default values of long_query_time are 0 and 10, respectively. The value can be specified to a resolution of microseconds. For logging to a file, times are written including the microseconds part. For logging to tables, only integer times are written; the microseconds part is ignored.

By default, administrative statements are not logged, nor are queries that do not use indexes for lookups. This behavior can be changed using --log-slow-admin-statements and log_queries_not_using_indexes, as described later.

The time to acquire the initial locks is not counted as execution time. mysqld writes a statement to the slow query log after it has been executed and after all locks have been released, so log order might differ from execution order.

By default, the slow query log is disabled. To specify the initial slow query log state explicitly, use --slow_query_log[={0|1}]. With no argument or an argument of 1, --slow_query_log enables the log. With an argument of 0, this option disables the log. To specify a log file name, use --slow_query_log_file=file_name. To specify the log destination, use --log-output (as described in Section 5.2.1, “Selecting General Query and Slow Query Log Output Destinations”).

If you specify no name for the slow query log file, the default name is host_name-slow.log. The server creates the file in the data directory unless an absolute path name is given to specify a different directory.

To disable or enable the slow query log or change the log file name at runtime, use the global slow_query_log and slow_query_log_file system variables. Set slow_query_log to 0 (or OFF) to disable the log or to 1 (or ON) to enable it. Set slow_query_log_file to specify the name of the log file. If a log file already is open, it is closed and the new file is opened.

When the slow query log is enabled, the server writes output to any destinations specified by the --log-output option or log_output system variable. If you enable the log, the server opens the log file and writes startup messages to it. However, further logging of queries to the file does not occur unless the FILE log destination is selected. If the destination is NONE, the server writes no queries even if the slow query log is enabled. Setting the log file name has no effect on logging if the log destination value does not contain FILE.

The server writes less information to the slow query log (and binary log) if you use the --log-short-format option.

To include slow administrative statements in the statements written to the slow query log, use the --log-slow-admin-statements server option. Administrative statements include ALTER TABLE, ANALYZE TABLE, CHECK TABLE, CREATE INDEX, DROP INDEX, OPTIMIZE TABLE, and REPAIR TABLE.

To include queries that do not use indexes for row lookups in the statements written to the slow query log, enable the log_queries_not_using_indexes system variable. When such queries are logged, the slow query log may grow quickly. It is possible to put a rate limit on these queries by setting the log_throttle_queries_not_using_indexes system variable. By default, this variable is 0, which means there is no limit. Positive values impose a per-minute limit on logging of queries that do not use indexes. The first such query opens a 60-second window within which the server logs queries up to the given limit, then suppresses additional queries. If there are suppressed queries when the window ends, the server logs a summary that indicates how many there were and the aggregate time spent in them. The next 60-second window begins when the server logs the next query that does not use indexes.
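
A my.cnf sketch combining the slow query log options described in this section (the file path and values are illustrative only):

[mysqld]
slow_query_log=1
slow_query_log_file=/var/log/mysql/mysql-slow.log
long_query_time=2
log_queries_not_using_indexes=1
log_throttle_queries_not_using_indexes=10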

The server uses the controlling parameters in the following order to determine whether to write a query to the slow query log:

  1. The query must either not be an administrative statement, or --log-slow-admin-statements must have been specified.

  2. The query must have taken at least long_query_time seconds, or log_queries_not_using_indexes must be enabled and the query used no indexes for row lookups.

  3. The query must have examined at least min_examined_row_limit rows.

  4. The query must not be suppressed according to the log_throttle_queries_not_using_indexes setting.

The server does not write queries handled by the query cache to the slow query log, nor queries that would not benefit from the presence of an index because the table has zero rows or one row.

By default, a replication slave does not write replicated queries to the slow query log. To change this, use the --log-slow-slave-statements server option.

Passwords in statements written to the slow query log are rewritten by the server so that they do not occur literally in plain text. See also Section 6.1.2.3, “Passwords and Logging”.

The slow query log can be used to find queries that take a long time to execute and are therefore candidates for optimization. However, examining a long slow query log can become a difficult task. To make this easier, you can process a slow query log file using the mysqldumpslow command to summarize the queries that appear in the log. See Section 4.6.9, “mysqldumpslow — Summarize Slow Query Log Files”.
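
For example, the following command summarizes the log and, with the -t option, limits the report to the top ten entries (the file name shown is the default slow query log name):

shell> mysqldumpslow -t 10 host_name-slow.log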

5.2.6. Server Log Maintenance

As described in Section 5.2, “MySQL Server Logs”, MySQL Server can create several different log files to help you see what activity is taking place. However, you must clean up these files regularly to ensure that the logs do not take up too much disk space.

When using MySQL with logging enabled, you may want to back up and remove old log files from time to time and tell MySQL to start logging to new files. See Section 7.2, “Database Backup Methods”.

On a Linux (Red Hat) installation, you can use the mysql-log-rotate script for this. If you installed MySQL from an RPM distribution, this script should have been installed automatically. Be careful with this script if you are using the binary log for replication. You should not remove binary logs until you are certain that their contents have been processed by all slaves.

On other systems, you must install a short script yourself that you start from cron (or its equivalent) for handling log files.

For the binary log, you can set the expire_logs_days system variable to expire binary log files automatically after a given number of days (see Section 5.1.4, “Server System Variables”). If you are using replication, you should set the variable no lower than the maximum number of days your slaves might lag behind the master. To remove binary logs on demand, use the PURGE BINARY LOGS statement (see Section 13.4.1.1, “PURGE BINARY LOGS Syntax”).
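
For example, to expire binary log files automatically after seven days (the value is illustrative; choose one no lower than the maximum lag of your slaves), you might add this to my.cnf:

[mysqld]
expire_logs_days=7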

You can force MySQL to start using new log files by flushing the logs. Log flushing occurs when you issue a FLUSH LOGS statement or execute a mysqladmin flush-logs, mysqladmin refresh, mysqldump --flush-logs, or mysqldump --master-data command. See Section 13.7.6.3, “FLUSH Syntax”, Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”, and Section 4.5.4, “mysqldump — A Database Backup Program”. In addition, the binary log is flushed when its size reaches the value of the max_binlog_size system variable.

FLUSH LOGS supports optional modifiers to enable selective flushing of individual logs (for example, FLUSH BINARY LOGS).

A log-flushing operation does the following:

  • If general query logging or slow query logging to a log file is enabled, the server closes and reopens the general query log file or slow query log file.

  • If binary logging is enabled, the server closes the current binary log file and opens a new log file with the next sequence number.

  • If the server was started with the --log-error option to cause the error log to be written to a file, the server closes and reopens the log file.

The server creates a new binary log file when you flush the logs. However, it just closes and reopens the general and slow query log files. To cause new files to be created on Unix, rename the current log files before flushing them. At flush time, the server opens new log files with the original names. For example, if the general and slow query log files are named mysql.log and mysql-slow.log, you can use a series of commands like this:

shell> cd mysql-data-directory
shell> mv mysql.log mysql.old
shell> mv mysql-slow.log mysql-slow.old
shell> mysqladmin flush-logs

On Windows, use rename rather than mv.

At this point, you can make a backup of mysql.old and mysql-slow.old and then remove them from disk.

A similar strategy can be used to back up the error log file, if there is one.

You can rename the general query log or slow query log at runtime by disabling the log:

SET GLOBAL general_log = 'OFF';
SET GLOBAL slow_query_log = 'OFF';

With the logs disabled, rename the log files externally; for example, from the command line. Then enable the logs again:

SET GLOBAL general_log = 'ON';
SET GLOBAL slow_query_log = 'ON';

This method works on any platform and does not require a server restart.

5.3. Managing Disk I/O and File Space for InnoDB Tables

As a DBA, you must manage disk I/O to keep the I/O subsystem from becoming saturated, and manage disk space to avoid filling up storage devices. The ACID design model requires a certain amount of I/O that might seem redundant, but helps to ensure data reliability. Within these constraints, InnoDB tries to optimize the database work and the organization of disk files to minimize the amount of disk I/O. Sometimes, I/O is postponed until the database is not busy, or until everything needs to be brought to a consistent state, such as during a database restart after a fast shutdown.

This section discusses the main considerations for I/O and disk space with the default kind of MySQL tables (also known as InnoDB tables):

  • Controlling the amount of background I/O used to improve query performance.

  • Enabling or disabling features that provide extra durability at the expense of additional I/O.

  • Organizing tables into many small files, a few larger files, or a combination of both.

  • Balancing the size of redo log files against the I/O activity that occurs when the log files become full.

  • How to reorganize a table for optimal query performance.

5.3.1. InnoDB Disk I/O

InnoDB uses asynchronous disk I/O where possible, by creating a number of threads to handle I/O operations, permitting other database operations to proceed while the I/O is still in progress. On Linux and Windows platforms, InnoDB uses the available OS and library functions to perform native asynchronous I/O. On other platforms, InnoDB still uses I/O threads, but the threads may actually wait for I/O requests to complete; this technique is known as simulated asynchronous I/O.

Read-Ahead

If InnoDB can determine there is a high probability that data might be needed soon, it performs read-ahead operations to bring that data into the buffer pool so that it is available in memory. Making a few large read requests for contiguous data can be more efficient than making several small, spread-out requests. There are two read-ahead heuristics in InnoDB:

  • In sequential read-ahead, if InnoDB notices that the access pattern to a segment in the tablespace is sequential, it posts in advance a batch of reads of database pages to the I/O system.

  • In random read-ahead, if InnoDB notices that some area in a tablespace seems to be in the process of being fully read into the buffer pool, it posts the remaining reads to the I/O system.

Doublewrite Buffer

InnoDB uses a novel file flush technique involving a structure called the doublewrite buffer. It adds safety to recovery following an operating system crash or a power outage, and improves performance on most varieties of Unix by reducing the need for fsync() operations.

Before writing pages to a data file, InnoDB first writes them to a contiguous tablespace area called the doublewrite buffer. Only after the write and the flush to the doublewrite buffer has completed does InnoDB write the pages to their proper positions in the data file. If the operating system crashes in the middle of a page write (causing a torn page condition), InnoDB can later find a good copy of the page from the doublewrite buffer during recovery.

5.3.2. File Space Management

The data files that you define in the configuration file form the InnoDB system tablespace. The files are logically concatenated to form the tablespace. There is no striping in use. Currently, you cannot define where within the tablespace your tables are allocated. In a newly created tablespace, InnoDB allocates space starting from the first data file.

To avoid the issues that come with storing all tables and indexes inside the system tablespace, you can turn on the innodb_file_per_table configuration option, which stores each newly created table in a separate tablespace file (with extension .ibd). For tables stored this way, there is less fragmentation within the disk file, and when the table is truncated, the space is returned to the operating system rather than still being reserved by InnoDB within the system tablespace.

Pages, Extents, Segments, and Tablespaces

Each tablespace consists of database pages. Every tablespace in a MySQL instance has the same page size. By default, all tablespaces have a page size of 16KB; you can reduce the page size to 8KB or 4KB by specifying the innodb_page_size option when you create the MySQL instance.
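
For example, to create an instance that uses 8KB pages, you might add the following to my.cnf before initializing the data directory (a sketch only; the page size cannot be changed for an existing instance):

[mysqld]
innodb_page_size=8192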

The pages are grouped into extents of size 1MB (64 consecutive 16KB pages, or 128 8KB pages, or 256 4KB pages). The files inside a tablespace are called segments in InnoDB. (These segments are different from the rollback segment, which actually contains many tablespace segments.)

When a segment grows inside the tablespace, InnoDB allocates the first 32 pages to it one at a time. After that, InnoDB starts to allocate whole extents to the segment. InnoDB can add up to 4 extents at a time to a large segment to ensure good sequentiality of data.

Two segments are allocated for each index in InnoDB. One is for nonleaf nodes of the B-tree, the other is for the leaf nodes. Keeping the leaf nodes contiguous on disk enables better sequential I/O operations, because these leaf nodes contain the actual table data.

Some pages in the tablespace contain bitmaps of other pages, and therefore a few extents in an InnoDB tablespace cannot be allocated to segments as a whole, but only as individual pages.

When you ask for available free space in the tablespace by issuing a SHOW TABLE STATUS statement, InnoDB reports the extents that are definitely free in the tablespace. InnoDB always reserves some extents for cleanup and other internal purposes; these reserved extents are not included in the free space.

When you delete data from a table, InnoDB contracts the corresponding B-tree indexes. Whether the freed space becomes available for other users depends on whether the pattern of deletes frees individual pages or extents to the tablespace. Dropping a table or deleting all rows from it is guaranteed to release the space to other users, but remember that deleted rows are physically removed only by the purge operation, which happens automatically some time after they are no longer needed for transaction rollbacks or consistent reads. (See Section 14.2.3.11, “InnoDB Multi-Versioning”.)

To see information about the tablespace, use the Tablespace Monitor. See Section 14.2.4.4, “SHOW ENGINE INNODB STATUS and the InnoDB Monitors”.

How Pages Relate to Table Rows

The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB and TEXT), is slightly less than half of a database page. That is, the maximum row length is about 8000 bytes. LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.

If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page. For a column chosen for off-page storage, InnoDB stores the first 768 bytes locally in the row, and the rest externally into overflow pages. Each such column has its own list of overflow pages. The 768-byte prefix is accompanied by a 20-byte value that stores the true length of the column and points into the overflow list where the rest of the value is stored.

5.3.3. InnoDB Checkpoints

Making your log files very large may reduce disk I/O during checkpointing. It often makes sense to set the total size of the log files as large as the buffer pool or even larger. Although in the past large log files could make crash recovery take excessive time, starting with MySQL 5.5, performance enhancements to crash recovery make it possible to use large log files with fast startup after a crash. (Strictly speaking, this performance improvement is available for MySQL 5.1 with the InnoDB Plugin 1.0.7 and higher. It is with MySQL 5.5 that this improvement is available in the default InnoDB storage engine.)

How Checkpoint Processing Works

InnoDB implements a checkpoint mechanism known as fuzzy checkpointing. InnoDB flushes modified database pages from the buffer pool in small batches. There is no need to flush the buffer pool in one single batch, which would disrupt processing of user SQL statements during the checkpointing process.

During crash recovery, InnoDB looks for a checkpoint label written to the log files. It knows that all modifications to the database before the label are present in the disk image of the database. Then InnoDB scans the log files forward from the checkpoint, applying the logged modifications to the database.

5.3.4. Defragmenting a Table

Random insertions into or deletions from a secondary index can cause the index to become fragmented. Fragmentation means that the physical ordering of the index pages on the disk is not close to the index ordering of the records on the pages, or that there are many unused pages in the 64-page blocks that were allocated to the index.

One symptom of fragmentation is that a table takes more space than it should. Exactly how much more is difficult to determine. All InnoDB data and indexes are stored in B-trees, and their fill factor may vary from 50% to 100%. Another symptom of fragmentation is that a table scan such as the following takes more time than it should:

SELECT COUNT(*) FROM t WHERE non_indexed_column <> 12345;

The preceding query requires MySQL to perform a full table scan, the slowest type of query for a large table.

To speed up index scans, you can periodically perform a null ALTER TABLE operation, which causes MySQL to rebuild the table:

ALTER TABLE tbl_name ENGINE=INNODB;

Another way to perform a defragmentation operation is to use mysqldump to dump the table to a text file, drop the table, and reload it from the dump file.
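
A minimal sketch of the dump-and-reload approach, using placeholder database and table names (mysqldump includes DROP TABLE statements by default, so reloading the dump recreates the table):

shell> mysqldump db_name tbl_name > tbl_name_dump.sql
shell> mysql db_name < tbl_name_dump.sql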

If the insertions into an index are always ascending and records are deleted only from the end, the InnoDB filespace management algorithm guarantees that fragmentation in the index does not occur.

5.4. Creating and Using InnoDB Tables and Indexes

To create an InnoDB table, use the CREATE TABLE statement without any special clauses. Formerly, you needed the ENGINE=InnoDB clause, but not anymore now that InnoDB is the default storage engine. (You might still use that clause if you plan to use mysqldump or replication to replay the CREATE TABLE statement on a server running MySQL 5.1 or earlier, where the default storage engine is MyISAM.)

-- Default storage engine = InnoDB.
CREATE TABLE t1 (a INT, b CHAR (20), PRIMARY KEY (a));
-- Backwards-compatible with older MySQL.
CREATE TABLE t2 (a INT, b CHAR (20), PRIMARY KEY (a)) ENGINE=InnoDB;

Depending on the file-per-table setting, InnoDB creates each table and associated primary key index either in the system tablespace, or in a separate tablespace (represented by a .ibd file) for each table. MySQL creates t1.frm and t2.frm files in the test directory under the MySQL database directory. Internally, InnoDB adds an entry for the table to its own data dictionary. The entry includes the database name. For example, if test is the database in which the t1 table is created, the entry is for 'test/t1'. This means you can create a table of the same name t1 in some other database, and the table names do not collide inside InnoDB.

To see the properties of these tables, issue a SHOW TABLE STATUS statement:

SHOW TABLE STATUS FROM test LIKE 't%';

In the status output, you see the row format property of these first tables is Compact. Although that setting is fine for basic experimentation, to take advantage of the most powerful InnoDB performance features, you will quickly graduate to using other row formats such as Dynamic and Compressed. Using those values requires a little bit of setup first:

SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_file_format=barracuda;
CREATE TABLE t3 (a INT, b CHAR (20), PRIMARY KEY (a)) row_format=dynamic;
CREATE TABLE t4 (a INT, b CHAR (20), PRIMARY KEY (a)) row_format=compressed;

Always set up a primary key for each InnoDB table, specifying the column or columns that:

  • Are referenced by the most important queries.

  • Are never left blank.

  • Never have duplicate values.

  • Rarely if ever change value once inserted.

For example, in a table containing information about people, you would not create a primary key on (firstname, lastname) because more than one person can have the same name, some people have blank last names, and sometimes people change their names. With so many constraints, often there is not an obvious set of columns to use as a primary key, so you create a new column with a numeric ID to serve as all or part of the primary key. You can declare an auto-increment column so that ascending values are filled in automatically as rows are inserted:

-- The value of ID can act like a pointer between related items in different tables.
CREATE TABLE t5 (id INT AUTO_INCREMENT, b CHAR (20), PRIMARY KEY (id));
-- The primary key can consist of more than one column. Any autoinc column must come first.
CREATE TABLE t6 (id INT AUTO_INCREMENT, a INT, b CHAR (20), PRIMARY KEY (id,a));

Although the table works correctly without you defining a primary key, the primary key is involved with many aspects of performance and is a crucial design aspect for any large or frequently used table. Make a habit of always specifying one in the CREATE TABLE statement. (If you create the table, load data, and then do ALTER TABLE to add a primary key later, that operation is much slower than defining the primary key when creating the table.)

5.4.1. Managing InnoDB Tablespaces

Historically, all InnoDB tables and indexes were stored in the system tablespace. This monolithic approach was targeted at machines dedicated entirely to database processing, with carefully planned data growth, where any disk storage allocated to MySQL would never be needed for other purposes. InnoDB's file-per-table mode is a more flexible alternative, where you store each InnoDB table and its indexes in a separate file. Each such .ibd file represents a separate tablespace. This mode is controlled by the innodb_file_per_table configuration option, and is the default in MySQL 5.6.6 and higher.

Advantages of File-Per-Table Mode

  • You can reclaim disk space when truncating or dropping a table. For tables created when file-per-table mode is turned off, truncating or dropping them creates free space internally in the ibdata files. That free space can only be used for new InnoDB data.

  • The TRUNCATE TABLE operation is faster when run on individual .ibd files.

  • You can store specific tables on separate storage devices, for I/O optimization, space management, or backup purposes. In previous releases, you had to move entire database directories to other drives and create symbolic links in the MySQL data directory, as described in Section 8.11.3.1, “Using Symbolic Links”. In MySQL 5.6 and higher, you can specify the location of each table using the syntax CREATE TABLE ... DATA DIRECTORY = absolute_path_to_directory, as explained in Section 5.4.1.2, “Specifying the Location of a Tablespace”.

  • You can run OPTIMIZE TABLE to compact or recreate a tablespace. When you run an OPTIMIZE TABLE, InnoDB will create a new .ibd file with a temporary name, using only the space required to store actual data. When the optimization is complete, InnoDB removes the old .ibd file and replaces it with the new .ibd file. If the previous .ibd file had grown significantly but actual data only accounted for a portion of its size, running OPTIMIZE TABLE allows you to reclaim the unused space.

  • You can move individual InnoDB tables rather than entire databases.

  • You can copy individual InnoDB tables from one MySQL instance to another (known as the transportable tablespace feature).

  • You can enable compression for table and index data, using the compressed row format.

  • You can enable more efficient storage for tables with large BLOB or text columns using the dynamic row format.

  • Using innodb_file_per_table may improve chances for a successful recovery and save time if a corruption occurs, a server cannot be restarted, or backup and binary logs are unavailable.

  • You can back up or restore a single table quickly, without interrupting the use of other InnoDB tables, using the MySQL Enterprise Backup product. See Backing Up and Restoring a Single .ibd File for the procedure and restrictions.

  • File-per-table mode allows you to exclude tables from a backup. This is beneficial if you have tables that require backup less frequently or on a different schedule.

  • File-per-table mode is convenient for per-table status reporting when copying or backing up tables.

  • File-per-table mode allows you to monitor table size at a file system level, without accessing MySQL.

  • Common Linux file systems do not permit concurrent writes to a single file when innodb_flush_method is set to O_DIRECT. As a result, there are possible performance improvements when using innodb_file_per_table in conjunction with innodb_flush_method.

  • If innodb_file_per_table is disabled, there is one shared tablespace (the system tablespace) for tables, the data dictionary, and undo logs. This single tablespace has a 64TB size limit. If innodb_file_per_table is enabled, each table has its own tablespace, each with a 64TB size limit. See Section D.10.3, “Limits on Table Size” for related information.

Potential Disadvantages of File-Per-Table Mode

  • With innodb_file_per_table, each table may have unused table space, which can only be utilized by rows of the same table. This could lead to more rather than less wasted table space if not properly managed.

  • fsync operations must run on each open table rather than on a single file. Because there is a separate fsync operation for each file, write operations on multiple tables cannot be combined into a single I/O operation. This may require InnoDB to perform a higher total number of fsync operations.

  • mysqld must keep one open file handle per table, which may impact performance if you have numerous tables.

  • More file descriptors are used.

  • innodb_file_per_table is on by default in MySQL 5.6.6 and higher. You may want to consider disabling it if backward compatibility with MySQL 5.5 or 5.1 is a concern. Disabling innodb_file_per_table prevents ALTER TABLE from moving InnoDB tables from the system tablespace to individual .ibd files.

  • If many tables are growing, there is potential for more fragmentation, which can impede DROP TABLE and table scan performance. However, when fragmentation is managed, having files in their own tablespace can improve performance.

  • The buffer pool is scanned when dropping a per-table tablespace, which can take several seconds for buffer pools that are tens of gigabytes in size. The scan is performed with a broad internal lock, which may delay other operations. Tables in the shared tablespace are not affected.

  • The innodb_autoextend_increment variable, which defines increment size (in MB) for extending the size of an auto-extending shared tablespace file when it becomes full, does not apply to per-table tablespace files. Per-table tablespace files are auto-extending regardless of the value of innodb_autoextend_increment. The initial extensions are by small amounts, after which extensions occur in increments of 4MB.

5.4.1.1. Enabling and Disabling File-Per-Table Mode

To make file-per-table mode the default for a MySQL server, start the server with the --innodb_file_per_table command-line option, or add this line to the [mysqld] section of my.cnf:

[mysqld]
innodb_file_per_table

You can also issue the command while the server is running:

SET GLOBAL innodb_file_per_table=1;

With file-per-table mode enabled, InnoDB stores each newly created table in its own tbl_name.ibd file in the appropriate database directory. Unlike the MyISAM storage engine, with its separate tbl_name.MYD and tbl_name.MYI files for data and indexes, InnoDB stores the data and the indexes together in a single .ibd file. The tbl_name.frm file is still created as usual.

If you remove innodb_file_per_table from your startup options and restart the server, or turn it off with the SET GLOBAL command, InnoDB creates any new tables inside the system tablespace.

You can always read and write any InnoDB tables, regardless of the file-per-table setting.
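To check where a given table currently resides, you can query the INFORMATION_SCHEMA.INNODB_SYS_TABLES table (available in MySQL 5.6 and later). A SPACE value of 0 indicates the system tablespace; any other value indicates a file-per-table tablespace. The table name test/customer below is only a placeholder:

SELECT NAME, SPACE
  FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
 WHERE NAME = 'test/customer';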

To move a table from the system tablespace to its own tablespace, or vice versa, change the innodb_file_per_table setting and rebuild the table:

-- Move table from system tablespace to its own tablespace.
SET GLOBAL innodb_file_per_table=1;
ALTER TABLE table_name ENGINE=InnoDB;
-- Move table from its own tablespace to system tablespace.
SET GLOBAL innodb_file_per_table=0;
ALTER TABLE table_name ENGINE=InnoDB;
Note

InnoDB always needs the system tablespace because it puts its internal data dictionary and undo logs there. The .ibd files are not sufficient for InnoDB to operate.

When a table is moved out of the system tablespace into its own .ibd file, the data files that make up the system tablespace remain the same size. The space formerly occupied by the table can be reused for new InnoDB data, but is not reclaimed for use by the operating system. When moving large InnoDB tables out of the system tablespace, where disk space is limited, you might prefer to turn on innodb_file_per_table and then recreate the entire instance using the mysqldump command.

5.4.1.2. Specifying the Location of a Tablespace

To create a new InnoDB table in a specific location outside the MySQL data directory, use the DATA DIRECTORY = absolute_path_to_directory clause of the CREATE TABLE statement. (Plan the location in advance, because you cannot use this clause with the ALTER TABLE statement.) The directory you specify could be on another storage device with particular performance or capacity characteristics, such as a fast SSD or a high-capacity HDD.

Within the destination directory, MySQL creates a subdirectory corresponding to the database name, and within that a .ibd file for the new table. In the database directory underneath the MySQL DATADIR directory, MySQL creates a table_name.isl file containing the path name for the table. The .isl file is treated by MySQL like a symbolic link. (Using actual symbolic links has never been supported for InnoDB tables.)

The following example shows how you might run a small development or test instance of MySQL on a laptop with a primary hard drive that is 95% full, and place a new table EXTERNAL on a different storage device with more free space. The shell commands show the different paths to the LOCAL table in its default location under the DATADIR directory, and the EXTERNAL table in the location you specified:

mysql> \! df -k .
Filesystem   1024-blocks      Used Available Capacity  iused   ifree %iused  Mounted on
/dev/disk0s2   244277768 231603532  12418236    95% 57964881 3104559   95%   /

mysql> use test;
Database changed
mysql> show variables like 'innodb_file_per_table';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    |
+-----------------------+-------+
1 row in set (0.00 sec)

mysql> \! pwd
/usr/local/mysql
mysql> create table local (x int unsigned not null primary key);
Query OK, 0 rows affected (0.03 sec)

mysql> \! ls -l data/test/local.ibd
-rw-rw----  1 cirrus  staff  98304 Nov 13 15:24 data/test/local.ibd

mysql> create table external (x int unsigned not null primary key) data directory = '/volumes/external1/data';
Query OK, 0 rows affected (0.03 sec)

mysql> \! ls -l /volumes/external1/data/test/external.ibd
-rwxrwxrwx  1 cirrus  staff  98304 Nov 13 15:34 /volumes/external1/data/test/external.ibd

mysql> select count(*) from local;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from external;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.01 sec)
Notes:
  • MySQL initially holds the .ibd file open, preventing you from dismounting the device, but might eventually close the table if the server is busy. Be careful not to accidentally dismount the external device while MySQL is running, or to start MySQL while the device is disconnected. Attempting to access a table when the associated .ibd file is missing causes a serious error that requires a server restart.

    The server restart might fail if the .ibd file is still not at the expected path. In this case, manually remove the table_name.isl file in the database directory, and after restarting do a DROP TABLE to delete the .frm file and remove the information about the table from the data dictionary.

  • Do not put MySQL tables on an NFS-mounted volume. NFS uses a message-passing protocol to write to files, which could cause data inconsistency if network messages are lost or received out of order.

  • If you use an LVM snapshot, file copy, or other file-based mechanism to back up the .ibd file, always use the FLUSH TABLES ... FOR EXPORT statement first to make sure all changes that were buffered in memory are flushed to disk before the backup occurs.

  • The DATA DIRECTORY clause is a supported alternative to using symbolic links, which has always been problematic and was never supported for individual InnoDB tables.

5.4.1.3. Copying Tablespaces to Another Server (Transportable Tablespaces)

There are many reasons why you might copy an InnoDB table to a different database server:

  • To run reports without putting extra load on a production server.

  • To set up identical data for a table on a new slave server.

  • To restore a backed-up version of a table after a problem or mistake.

  • As a faster way of moving data around than importing the results of a mysqldump command. The data is available immediately, rather than having to be re-inserted and the indexes rebuilt.

To copy an InnoDB table to another server instance or to perform a full table restore on the same instance, you can use the FLUSH TABLES statement with the FOR EXPORT clause. FLUSH TABLES ... FOR EXPORT places .ibd files into a consistent state so that they can be copied. It also creates a .cfg binary metadata file that is used by ALTER TABLE ... IMPORT TABLESPACE for schema verification during the import process. See Section 13.7.6.3, “FLUSH Syntax” for additional information about FLUSH TABLES ... FOR EXPORT.

As of MySQL 5.6.8, ALTER TABLE ... IMPORT TABLESPACE does not require a .cfg metadata file to import a tablespace. However, metadata checks are not performed when importing without a .cfg file, and the following warning will be issued:

Message: InnoDB: IO Read error: (2, No such file or directory) Error opening '.\
test\t.cfg', will attempt to import without schema verification
1 row in set (0.00 sec) 
      

The ability to import without a .cfg file may be more convenient when no schema mismatches are expected. Additionally, the ability to import without a .cfg file could be useful in crash and recovery scenarios in which metadata cannot be collected from an .ibd file.

Tablespace Copying Limitations (Transportable Tablespaces)
  • The tablespace copy procedure is only possible when innodb_file_per_table is set to ON. Tables residing in the shared system tablespace cannot be quiesced.

  • When a table is quiesced, only read-only transactions are allowed on the affected table.

  • When importing a tablespace, the page size of the tablespace must match the page size of the importing instance.

  • DISCARD TABLESPACE is not supported for partitioned tables, which means that the transportable tablespace procedure is also unsupported for them. If you run ALTER TABLE ... DISCARD TABLESPACE on a partitioned table, the following error is returned: ERROR 1031 (HY000): Table storage engine for 'part' doesn't have this option.

  • DISCARD TABLESPACE is not supported for tablespaces with a parent-child (primary key-foreign key) relationship when foreign_key_checks is set to 1. Before discarding a tablespace for parent-child tables, set foreign_key_checks=0, as shown in the sketch after this list.

  • ALTER TABLE ... IMPORT TABLESPACE does not enforce foreign key constraints on imported data. If there are foreign key constraints between tables, all tables should be exported at the same (logical) point in time.

  • In replication scenarios, innodb_file_per_table must be set to ON on both the master and slave.
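For the parent-child case noted above, a minimal sketch of the discard sequence (the table names parent and child are hypothetical):

SET foreign_key_checks = 0;
ALTER TABLE child DISCARD TABLESPACE;
ALTER TABLE parent DISCARD TABLESPACE;
SET foreign_key_checks = 1;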

Example Procedure: Copying a Tablespace From One Server To Another (Transportable Tablespaces)

This procedure describes how to copy a table from a running MySQL server instance to another running instance. The same procedure, with minor adjustments, can also be used to perform a full table restore on the same instance.

  1. On the source server, create a table if one does not already exist:

    mysql> use test;
    mysql> CREATE TABLE t(c1 INT) engine=InnoDB;
  2. On the destination server, create a table if one does not already exist:

    mysql> use test;
    mysql> CREATE TABLE t(c1 INT) engine=InnoDB;
  3. On the destination server, discard the existing tablespace. Before a tablespace can be imported, InnoDB must discard the tablespace that is attached to the receiving table.

    mysql> ALTER TABLE t DISCARD TABLESPACE;
    Note

    The tablespace file need not have been created on the server into which it is being imported. In MySQL 5.6 or later, importing a tablespace file from another server works if both servers have GA (General Availability) status and their versions are within the same series. Otherwise, the file must have been created on the server into which it is imported.

  4. On the source server, quiesce the table and create the .cfg metadata file by running FLUSH TABLES ... FOR EXPORT:

    mysql> use test;
    mysql> FLUSH TABLES t FOR EXPORT;

    The metadata file (t.cfg) is created in the InnoDB data directory.

    Note

    FLUSH TABLES ... FOR EXPORT applies to InnoDB tables. It is available as of MySQL 5.6.6. The statement ensures that changes to the named tables have been flushed to disk so that binary table copies can be made while the server is running. When FLUSH TABLES ... FOR EXPORT is run, InnoDB produces a .cfg file in the same database directory as the table. The .cfg file contains metadata used for schema verification when importing the tablespace file.

  5. Copy the .cfg metadata file and .ibd file from the source server to the destination server. For example:

    shell> scp /innodb_data_dir/test/t.{ibd,cfg} destination-server:/innodb_data_dir/test
    Note

    The .ibd file and .cfg file must be copied before releasing the shared locks, as described in the next step.

  6. On the source server, use UNLOCK TABLES to release the locks acquired by FLUSH TABLES ... FOR EXPORT:

    mysql> use test;
    mysql> UNLOCK TABLES;
  7. On the destination server, import the tablespace:

    mysql> use test;
    mysql> ALTER TABLE t IMPORT TABLESPACE;
    Note

    The ALTER TABLE ... IMPORT TABLESPACE feature does not enforce foreign key constraints on imported data. If there are foreign key constraints between tables, all tables should be exported at the same (logical) point in time. In this case you would stop updating the tables, commit all transactions, acquire shared locks on the tables, and then perform the export operation.

Tablespace Copying Internals (Transportable Tablespaces)

The following information describes internals and error log messaging for the transportable tablespaces copy procedure.

When ALTER TABLE ... DISCARD TABLESPACE is run on the destination instance:

  • The table is locked in X mode.

  • The tablespace is detached from the table.

When FLUSH TABLES ... FOR EXPORT is run on the source instance:

  • The table being flushed for export is locked in shared mode.

  • The purge coordinator thread is stopped.

  • Dirty pages are synchronized to disk.

  • Table metadata is written to the binary .cfg file.

Expected error log messages for this operation:

2013-07-18 14:47:31 34471 [Note] InnoDB: Sync to disk of '"test"."t"' started.
2013-07-18 14:47:31 34471 [Note] InnoDB: Stopping purge
2013-07-18 14:47:31 34471 [Note] InnoDB: Writing table metadata to './test/t.cfg'
2013-07-18 14:47:31 34471 [Note] InnoDB: Table '"test"."t"' flushed to disk
 

When UNLOCK TABLES is run on the source instance:

  • The binary .cfg file is deleted.

  • The shared lock on the table or tables that were flushed for export is released, and the purge coordinator thread is restarted.

Expected error log messages for this operation:

2013-07-18 15:01:40 34471 [Note] InnoDB: Deleting the meta-data file './test/t.cfg'
2013-07-18 15:01:40 34471 [Note] InnoDB: Resuming purge

When ALTER TABLE ... IMPORT TABLESPACE is run on the destination instance, the import algorithm performs the following operations for each tablespace being imported:

  • Each tablespace page is checked for corruption.

  • The space ID and log sequence numbers (LSNs) on each page are updated.

  • Flags are validated and the LSN is updated for the header page.

  • B-tree pages are updated.

  • The page state is set to dirty so that it will be written to disk.

Expected error log messages for this operation:

2013-07-18 15:15:01 34960 [Note] InnoDB: Importing tablespace for table 'test/t' that was exported from host 'ubuntu'
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase I - Update all pages
2013-07-18 15:15:01 34960 [Note] InnoDB: Sync to disk
2013-07-18 15:15:01 34960 [Note] InnoDB: Sync to disk - done!
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase III - Flush changes to disk
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase IV - Flush complete
Note

You may also receive a warning that a tablespace is discarded (if you discarded the tablespace for the destination table) and a message stating that statistics could not be calculated due to a missing .ibd file:

2013-07-18 15:14:38 34960 [Warning] InnoDB: Table "test"."t" tablespace is set as discarded.
2013-07-18 15:14:38 7f34d9a37700 InnoDB: cannot calculate statistics for table "test"."t" because the .ibd file is missing. For help, please refer to 
http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html

5.4.1.4. Moving the Undo Log out of the System Tablespace

Although tablespace management typically involves files holding tables and indexes, you can also divide the undo log into separate undo tablespace files. This layout is different from the default configuration where the undo log is part of the system tablespace. See Section 14.2.4.2.4, “Separate Tablespaces for InnoDB Undo Logs” for details.
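For illustration only, the undo tablespace options described in that section are configured when the instance is initialized; the directory path below is hypothetical and the number of undo tablespaces is just an example:

[mysqld]
innodb_undo_directory = /path/to/undo_dir
innodb_undo_tablespaces = 4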

5.4.2. Grouping DML Operations with Transactions

By default, each connection to the MySQL server begins with autocommit mode enabled, which automatically commits every SQL statement as you execute it. This mode of operation might be unfamiliar if you have experience with other database systems, where it is standard practice to issue a sequence of DML statements and commit them or roll them back all together.

To use multiple-statement transactions, switch autocommit off with the SQL statement SET autocommit = 0 and end each transaction with COMMIT or ROLLBACK as appropriate. To leave autocommit on, begin each transaction with START TRANSACTION and end it with COMMIT or ROLLBACK. The following example shows two transactions. The first is committed; the second is rolled back.

shell> mysql test

mysql> CREATE TABLE customer (a INT, b CHAR (20), INDEX (a));
Query OK, 0 rows affected (0.00 sec)
mysql> -- Do a transaction with autocommit turned on.
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO customer VALUES (10, 'Heikki');
Query OK, 1 row affected (0.00 sec)
mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)
mysql> -- Do another transaction with autocommit turned off.
mysql> SET autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO customer VALUES (15, 'John');
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO customer VALUES (20, 'Paul');
Query OK, 1 row affected (0.00 sec)
mysql> DELETE FROM customer WHERE b = 'Heikki';
Query OK, 1 row affected (0.00 sec)
mysql> -- Now we undo those last 2 inserts and the delete.
mysql> ROLLBACK;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM customer;
+------+--------+
| a    | b      |
+------+--------+
|   10 | Heikki |
+------+--------+
1 row in set (0.00 sec)
mysql>

Transactions in Client-Side Languages

In APIs such as PHP, Perl DBI, JDBC, ODBC, or the standard C call interface of MySQL, you can send transaction control statements such as COMMIT to the MySQL server as strings just like any other SQL statements such as SELECT or INSERT. Some APIs also offer separate special transaction commit and rollback functions or methods.

5.4.3. Converting Tables from MyISAM to InnoDB

If you have existing tables, and applications that use them, that you want to convert to InnoDB for better reliability and scalability, use the following guidelines and tips. This section assumes most such tables were originally MyISAM, which was formerly the default.

Reduce Memory Usage for MyISAM, Increase Memory Usage for InnoDB

As you transition away from MyISAM tables, lower the value of the key_buffer_size configuration option to free memory no longer needed for caching MyISAM index blocks. Increase the value of the innodb_buffer_pool_size configuration option, which performs a similar role of allocating cache memory for InnoDB tables. The InnoDB buffer pool caches both table data and index data, so it does double duty in speeding up lookups for queries and keeping frequently accessed data in memory for reuse. (A configuration sketch follows the list below.)

  • Allocate as much memory to this option as you can afford, often up to 80% of physical memory on the server.

  • If the operating system runs short of memory for other processes and begins to swap, reduce the innodb_buffer_pool_size value. Swapping is such an expensive operation that it drastically reduces the benefit of the cache memory.

  • If the innodb_buffer_pool_size value is several gigabytes or higher, consider increasing the value of innodb_buffer_pool_instances. Doing so helps on busy servers where many connections are reading data into the cache at the same time.

  • On a busy server, run benchmarks with the Query Cache turned off. The InnoDB buffer pool provides similar benefits, so the Query Cache might be tying up memory unnecessarily.
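The following my.cnf fragment is a sketch only; the option names are the ones discussed above, but the values are arbitrary examples that must be sized for your own server and workload:

[mysqld]
# Shrink the MyISAM key cache as MyISAM tables are phased out.
key_buffer_size = 32M
# Give the reclaimed memory (and more) to the InnoDB buffer pool.
innodb_buffer_pool_size = 8G
# Split a large buffer pool into multiple instances on busy servers.
innodb_buffer_pool_instances = 8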

Watch Out for Too-Long Or Too-Short Transactions

Because MyISAM tables do not support transactions, you might not have paid much attention to the autocommit configuration option and the COMMIT and ROLLBACK statements. These keywords are important to allow multiple sessions to read and write InnoDB tables concurrently, providing substantial scalability benefits in write-heavy workloads.

While a transaction is open, the system keeps a snapshot of the data as seen at the beginning of the transaction, which can cause substantial overhead if the system inserts, updates, and deletes millions of rows while a stray transaction keeps running. Thus, take care to avoid transactions that run for too long:

  • If you are using a mysql session for interactive experiments, always COMMIT (to finalize the changes) or ROLLBACK (to undo the changes) when finished. Close down interactive sessions rather than leaving them open for long periods, to avoid keeping transactions open for long periods by accident.

  • Make sure that any error handlers in your application also ROLLBACK incomplete changes or COMMIT completed changes.

  • ROLLBACK is a relatively expensive operation, because INSERT, UPDATE, and DELETE operations are written to InnoDB tables prior to the COMMIT, with the expectation that most changes will be committed successfully and rollbacks will be rare. When experimenting with large volumes of data, avoid making changes to large numbers of rows and then rolling back those changes.

  • When loading large volumes of data with a sequence of INSERT statements, periodically COMMIT the results to avoid having transactions that last for hours. In typical load operations for data warehousing, if something goes wrong, you TRUNCATE TABLE and start over from the beginning rather than doing a ROLLBACK.

The preceding tips save memory and disk space that can be wasted during too-long transactions. When transactions are shorter than they should be, the problem is excessive I/O. With each COMMIT, MySQL makes sure each change is safely recorded to disk, which involves some I/O.

  • For most operations on InnoDB tables, you should use the setting autocommit=0. From an efficiency perspective, this avoids unnecessary I/O when you issue large numbers of consecutive INSERT, UPDATE, or DELETE statements. From a safety perspective, this allows you to issue a ROLLBACK statement to recover lost or garbled data if you make a mistake on the mysql command line, or in an exception handler in your application.

  • autocommit=1 is suitable for InnoDB tables when you are running a sequence of queries for generating reports or analyzing statistics. In this situation, there is no I/O penalty related to COMMIT or ROLLBACK, and InnoDB can automatically optimize the read-only workload.

  • If you make a series of related changes, finalize all those changes at once with a single COMMIT at the end. For example, if you insert related pieces of information into several tables, do a single COMMIT after making all the changes. Or if you run many consecutive INSERT statements, do a single COMMIT after all the data is loaded; if you are doing millions of INSERT statements, perhaps split up the huge transaction by issuing a COMMIT every ten thousand or hundred thousand records, so the transaction does not grow too large.

  • Remember that even a SELECT statement opens a transaction, so after running some report or debugging queries in an interactive mysql session, either issue a COMMIT or close the mysql session.

Don't Worry Too Much About Deadlocks

You might see warning messages referring to deadlocks in the MySQL error log, or the output of SHOW ENGINE INNODB STATUS. Despite the scary-sounding name, a deadlock is not a serious issue for InnoDB tables, and often does not require any corrective action. When two transactions start modifying multiple tables, accessing the tables in a different order, they can reach a state where each transaction is waiting for the other and neither can proceed. MySQL immediately detects this condition and cancels (rolls back) the smaller transaction, allowing the other to proceed.

Your applications do need error-handling logic to restart a transaction that is forcibly cancelled like this. When you re-issue the same SQL statements as before, the original timing issue no longer applies: either the other transaction has already finished and yours can proceed, or the other transaction is still in progress and your transaction waits until it finishes.

If deadlock warnings occur constantly, you might review the application code to reorder the SQL operations in a consistent way, or to shorten the transactions. You can test with the innodb_print_all_deadlocks option enabled to see all deadlock warnings in the MySQL error log, rather than only the last warning in the SHOW ENGINE INNODB STATUS output.
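Because innodb_print_all_deadlocks is a global, dynamic variable, you can enable it temporarily while investigating, without restarting the server:

SET GLOBAL innodb_print_all_deadlocks = ON;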

Plan the Storage Layout

To get the best performance from InnoDB tables, you can adjust a number of parameters related to storage layout.

When you convert MyISAM tables that are large, frequently accessed, and hold vital data, investigate and consider the innodb_file_per_table, innodb_file_format, and innodb_page_size configuration options, and the ROW_FORMAT and KEY_BLOCK_SIZE clauses of the CREATE TABLE statement.

During your initial experiments, the most important setting is innodb_file_per_table. Enabling this option before creating new InnoDB tables ensures that the InnoDB system tablespace files do not allocate disk space permanently for all the InnoDB data. With innodb_file_per_table enabled, DROP TABLE and TRUNCATE TABLE free disk space as you would expect.

Converting an Existing Table

To convert a non-InnoDB table to use InnoDB, use ALTER TABLE:

ALTER TABLE table_name ENGINE=InnoDB;
Important

Do not convert MySQL system tables in the mysql database (such as user or host) to the InnoDB type. This is an unsupported operation. The system tables must always be of the MyISAM type.

Cloning the Structure of a Table

You might make an InnoDB table that is a clone of a MyISAM table, rather than doing the ALTER TABLE conversion, to test the old and new table side-by-side before switching.

Create an empty InnoDB table with identical column and index definitions. Use SHOW CREATE TABLE table_name\G to see the full CREATE TABLE statement to use. Change the ENGINE clause to ENGINE=InnoDB.
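As a sketch of this cloning step, assuming a hypothetical MyISAM table named customer_myisam with a few columns and an index:

SHOW CREATE TABLE customer_myisam\G
-- Paste the statement that is returned, then change the
-- table name and the ENGINE clause:
CREATE TABLE customer_innodb (a INT, b CHAR(20), INDEX (a)) ENGINE=InnoDB;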

Transferring Existing Data

To transfer a large volume of data into an empty InnoDB table created as shown in the previous section, insert the rows with INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns.

You can also create the indexes for the InnoDB table after inserting the data. Historically, creating new secondary indexes was a slow operation for InnoDB, but now you can create the indexes after the data is loaded with relatively little overhead from the index creation step.

If you have UNIQUE constraints on secondary keys, you can speed up a table import by turning off the uniqueness checks temporarily during the import operation:

SET unique_checks=0;
... import operation ...
SET unique_checks=1;

For big tables, this saves disk I/O because InnoDB can use its insert buffer to write secondary index records as a batch. Be certain that the data contains no duplicate keys. unique_checks permits but does not require storage engines to ignore duplicate keys.

To get better control over the insertion process, you might insert big tables in pieces:

INSERT INTO newtable SELECT * FROM oldtable
   WHERE yourkey > something AND yourkey <= somethingelse;

After all records have been inserted, you can rename the tables.
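One way to perform that rename in a single step is with RENAME TABLE, reusing the hypothetical oldtable and newtable names from the example above:

RENAME TABLE oldtable TO oldtable_backup, newtable TO oldtable;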

During the conversion of big tables, increase the size of the InnoDB buffer pool to reduce disk I/O, to a maximum of 80% of physical memory. You can also increase the sizes of the InnoDB log files.

Storage Requirements

By this point, as mentioned earlier, you should have the innodb_file_per_table option enabled, so that if you temporarily make several copies of your data in InnoDB tables, you can recover all that disk space by dropping unneeded tables afterward.

Whether you convert the MyISAM table directly or create a cloned InnoDB table, make sure that you have sufficient disk space to hold both the old and new tables during the process. InnoDB tables require more disk space than MyISAM tables. If an ALTER TABLE operation runs out of space, it starts a rollback, and that can take hours if it is disk-bound. For inserts, InnoDB uses the insert buffer to merge secondary index records to indexes in batches. That saves a lot of disk I/O. For rollback, no such mechanism is used, and the rollback can take 30 times longer than the insertion.

In the case of a runaway rollback, if you do not have valuable data in your database, it may be advisable to kill the database process rather than wait for millions of disk I/O operations to complete. For the complete procedure, see Section 14.2.4.6, “Starting InnoDB on a Corrupted Database”.

Carefully Choose a PRIMARY KEY for Each Table

The PRIMARY KEY clause is a critical factor affecting the performance of MySQL queries and the space usage for tables and indexes. Perhaps you have phoned a financial institution where you are asked for an account number. If you do not have the number, you are asked for a dozen different pieces of information to uniquely identify yourself. The primary key is like that unique account number that lets you get straight down to business when querying or modifying the information in a table. Every row in the table must have a primary key value, and no two rows can have the same primary key value.

Here are some guidelines for the primary key, followed by more detailed explanations.

  • Declare a PRIMARY KEY for each table. Typically, it is the most important column that you refer to in WHERE clauses when looking up a single row.

  • Declare the PRIMARY KEY clause in the original CREATE TABLE statement, rather than adding it later through an ALTER TABLE statement.

  • Choose the column and its data type carefully. Prefer numeric columns over character or string ones.

  • Consider using an auto-increment column if there is not another stable, unique, non-null, numeric column to use.

  • An auto-increment column is also a good choice if there is any doubt whether the value of the primary key column could ever change. Changing the value of a primary key column is an expensive operation, possibly involving rearranging data within the table and within each secondary index.

Consider adding a primary key to any table that does not already have one. Use the smallest practical numeric type based on the maximum projected size of the table. This can make each row slightly more compact, which can yield substantial space savings for large tables. The space savings are multiplied if the table has any secondary indexes, because the primary key value is repeated in each secondary index entry. In addition to reducing data size on disk, a small primary key also lets more data fit into the buffer pool, speeding up all kinds of operations and improving concurrency.

If the table already has a primary key on some longer column, such as a VARCHAR, consider adding a new unsigned AUTO_INCREMENT column and switching the primary key to that, even if that column is not referenced in queries. This design change can produce substantial space savings in the secondary indexes. You can designate the former primary key columns as UNIQUE NOT NULL to enforce the same constraints as the PRIMARY KEY clause, that is, to prevent duplicate or null values across all those columns.
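As an illustration only (the table and column names are hypothetical), such a change might be made with a single ALTER TABLE that drops the old VARCHAR primary key, adds an AUTO_INCREMENT column as the new primary key, and keeps the old key unique:

ALTER TABLE customer_info
  DROP PRIMARY KEY,
  ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT FIRST,
  ADD PRIMARY KEY (id),
  ADD UNIQUE KEY (customer_code);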

If you spread related information across multiple tables, typically each table uses the same column for its primary key. For example, a personnel database might have several tables, each with a primary key of employee number. A sales database might have some tables with a primary key of customer number, and other tables with a primary key of order number. Because lookups using the primary key are very fast, you can construct efficient join queries for such tables.

If you leave the PRIMARY KEY clause out entirely, MySQL creates an invisible one for you. It is a 6-byte value that might be longer than you need, thus wasting space. Because it is hidden, you cannot refer to it in queries.

Application Performance Considerations

The extra reliability and scalability features of InnoDB do require more disk storage than equivalent MyISAM tables. You might change the column and index definitions slightly, for better space utilization, reduced I/O and memory consumption when processing result sets, and better query optimization plans making efficient use of index lookups.

If you do set up a numeric ID column for the primary key, use that value to cross-reference with related values in any other tables, particularly for join queries. For example, rather than accepting a country name as input and doing queries searching for the same name, do one lookup to determine the country ID, then do other queries (or a single join query) to look up relevant information across several tables. Rather than storing a customer or catalog item number as a string of digits, potentially using up several bytes, convert it to a numeric ID for storing and querying. A 4-byte unsigned INT column can index over 4 billion items (with the US meaning of billion: 1000 million). For the ranges of the different integer types, see Section 11.2.1, “Integer Types (Exact Value) - INTEGER, INT, SMALLINT, TINYINT, MEDIUMINT, BIGINT”.

Understand Files Associated with InnoDB Tables

InnoDB files require more care and planning than MyISAM files do:

  • You must not delete the ibdata files that represent the InnoDB system tablespace.

  • Copying InnoDB tables from one server to another requires issuing the FLUSH TABLES ... FOR EXPORT statement first, and copying the table_name.cfg file along with the table_name.ibd file.

5.4.4. AUTO_INCREMENT Handling in InnoDB

InnoDB provides an optimization that significantly improves scalability and performance of SQL statements that insert rows into tables with AUTO_INCREMENT columns. To use the AUTO_INCREMENT mechanism with an InnoDB table, an AUTO_INCREMENT column ai_col must be defined as part of an index such that it is possible to perform the equivalent of an indexed SELECT MAX(ai_col) lookup on the table to obtain the maximum column value. Typically, this is achieved by making the column the first column of some table index.

This section provides background information on the original (traditional) implementation of auto-increment locking in InnoDB, explains the configurable locking mechanism, documents the parameter for configuring the mechanism, and describes its behavior and interaction with replication.

5.4.4.1. Traditional InnoDB Auto-Increment Locking

The original implementation of auto-increment handling in InnoDB uses the following strategy to prevent problems when using the binary log for statement-based replication or for certain recovery scenarios.

If you specify an AUTO_INCREMENT column for an InnoDB table, the table handle in the InnoDB data dictionary contains a special counter called the auto-increment counter that is used in assigning new values for the column. This counter is stored only in main memory, not on disk.

InnoDB uses the following algorithm to initialize the auto-increment counter for a table t that contains an AUTO_INCREMENT column named ai_col: After a server startup, for the first insert into a table t, InnoDB executes the equivalent of this statement:

SELECT MAX(ai_col) FROM t FOR UPDATE;

InnoDB increments the value retrieved by the statement and assigns it to the column and to the auto-increment counter for the table. By default, the value is incremented by one. This default can be overridden by the auto_increment_increment configuration setting.

If the table is empty, InnoDB uses the value 1. This default can be overridden by the auto_increment_offset configuration setting.

If a SHOW TABLE STATUS statement examines the table t before the auto-increment counter is initialized, InnoDB initializes but does not increment the value and stores it for use by later inserts. This initialization uses a normal exclusive-locking read on the table and the lock lasts to the end of the transaction.

InnoDB follows the same procedure for initializing the auto-increment counter for a freshly created table.

After the auto-increment counter has been initialized, if you do not explicitly specify a value for an AUTO_INCREMENT column, InnoDB increments the counter and assigns the new value to the column. If you insert a row that explicitly specifies the column value, and the value is bigger than the current counter value, the counter is set to the specified column value.

If a user specifies NULL or 0 for the AUTO_INCREMENT column in an INSERT, InnoDB treats the row as if the value was not specified and generates a new value for it.

The behavior of the auto-increment mechanism is not defined if you assign a negative value to the column, or if the value becomes bigger than the maximum integer that can be stored in the specified integer type.

When accessing the auto-increment counter, InnoDB uses a special table-level AUTO-INC lock that it keeps to the end of the current SQL statement, not to the end of the transaction. The special lock release strategy was introduced to improve concurrency for inserts into a table containing an AUTO_INCREMENT column. Nevertheless, two transactions cannot have the AUTO-INC lock on the same table simultaneously, which can have a performance impact if the AUTO-INC lock is held for a long time. That might be the case for a statement such as INSERT INTO t1 ... SELECT ... FROM t2 that inserts all rows from one table into another.

InnoDB uses the in-memory auto-increment counter as long as the server runs. When the server is stopped and restarted, InnoDB reinitializes the counter for each table for the first INSERT to the table, as described earlier.

A server restart also cancels the effect of the AUTO_INCREMENT = N table option in CREATE TABLE and ALTER TABLE statements, which you can use with InnoDB tables to set the initial counter value or alter the current counter value.
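For reference, the table option mentioned here is written as follows (t1 is a hypothetical table and the value 1000 is only an example); remember that its effect does not survive a server restart:

ALTER TABLE t1 AUTO_INCREMENT = 1000;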

You may see gaps in the sequence of values assigned to the AUTO_INCREMENT column if you roll back transactions that have generated numbers using the counter.

5.4.4.2. Configurable InnoDB Auto-Increment Locking

As described in the previous section, InnoDB uses a special lock called the table-level AUTO-INC lock for inserts into tables with AUTO_INCREMENT columns. This lock is normally held to the end of the statement (not to the end of the transaction), to ensure that auto-increment numbers are assigned in a predictable and repeatable order for a given sequence of INSERT statements.

In the case of statement-based replication, this means that when an SQL statement is replicated on a slave server, the same values are used for the auto-increment column as on the master server. The result of execution of multiple INSERT statements is deterministic, and the slave reproduces the same data as on the master. If auto-increment values generated by multiple INSERT statements were interleaved, the result of two concurrent INSERT statements would be nondeterministic, and could not reliably be propagated to a slave server using statement-based replication.

To make this clear, consider an example that uses this table:

CREATE TABLE t1 (
  c1 INT(11) NOT NULL AUTO_INCREMENT,
  c2 VARCHAR(10) DEFAULT NULL,
  PRIMARY KEY (c1)
) ENGINE=InnoDB;

Suppose that there are two transactions running, each inserting rows into a table with an AUTO_INCREMENT column. One transaction is using an INSERT ... SELECT statement that inserts 1000 rows, and another is using a simple INSERT statement that inserts one row:

Tx1: INSERT INTO t1 (c2) SELECT 1000 rows from another table ...
Tx2: INSERT INTO t1 (c2) VALUES ('xxx');

InnoDB cannot tell in advance how many rows will be retrieved from the SELECT in the INSERT statement in Tx1, and it assigns the auto-increment values one at a time as the statement proceeds. With a table-level lock, held to the end of the statement, only one INSERT statement referring to table t1 can execute at a time, and the generation of auto-increment numbers by different statements is not interleaved. The auto-increment values generated by the Tx1 INSERT ... SELECT statement will be consecutive, and the (single) auto-increment value used by the INSERT statement in Tx2 will either be smaller or larger than all those used for Tx1, depending on which statement executes first.

As long as the SQL statements execute in the same order when replayed from the binary log (when using statement-based replication, or in recovery scenarios), the results will be the same as they were when Tx1 and Tx2 first ran. Thus, table-level locks held until the end of a statement make INSERT statements using auto-increment safe for use with statement-based replication. However, those locks limit concurrency and scalability when multiple transactions are executing insert statements at the same time.

In the preceding example, if there were no table-level lock, the value of the auto-increment column used for the INSERT in Tx2 depends on precisely when the statement executes. If the INSERT of Tx2 executes while the INSERT of Tx1 is running (rather than before it starts or after it completes), the specific auto-increment values assigned by the two INSERT statements are nondeterministic, and may vary from run to run.

InnoDB can avoid using the table-level AUTO-INC lock for a class of INSERT statements where the number of rows is known in advance, and still preserve deterministic execution and safety for statement-based replication. Further, if you are not using the binary log to replay SQL statements as part of recovery or replication, you can entirely eliminate use of the table-level AUTO-INC lock for even greater concurrency and performance, at the cost of permitting gaps in auto-increment numbers assigned by a statement and potentially having the numbers assigned by concurrently executing statements interleaved.

For INSERT statements where the number of rows to be inserted is known at the beginning of processing the statement, InnoDB quickly allocates the required number of auto-increment values without taking any lock, but only if there is no concurrent session already holding the table-level AUTO-INC lock (because that other statement will be allocating auto-increment values one-by-one as it proceeds). More precisely, such an INSERT statement obtains auto-increment values under the control of a mutex (a light-weight lock) that is not held until the statement completes, but only for the duration of the allocation process.

This new locking scheme enables much greater scalability, but it does introduce some subtle differences in how auto-increment values are assigned compared to the original mechanism. To describe the way auto-increment works in InnoDB, the following discussion defines some terms, and explains how InnoDB behaves using different settings of the innodb_autoinc_lock_mode configuration parameter, which you can set at server startup. Additional considerations are described following the explanation of auto-increment locking behavior.

First, some definitions:

  • INSERT-like statements

    All statements that generate new rows in a table, including INSERT, INSERT ... SELECT, REPLACE, REPLACE ... SELECT, and LOAD DATA.

  • Simple inserts

    Statements for which the number of rows to be inserted can be determined in advance (when the statement is initially processed). This includes single-row and multiple-row INSERT and REPLACE statements that do not have a nested subquery, but not INSERT ... ON DUPLICATE KEY UPDATE.

  • Bulk inserts

    Statements for which the number of rows to be inserted (and the number of required auto-increment values) is not known in advance. This includes INSERT ... SELECT, REPLACE ... SELECT, and LOAD DATA statements, but not plain INSERT. InnoDB will assign new values for the AUTO_INCREMENT column one at a time as each row is processed.

  • Mixed-mode inserts

    These are simple insert statements that specify the auto-increment value for some (but not all) of the new rows. An example follows, where c1 is an AUTO_INCREMENT column of table t1:

    INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');

    Another type of mixed-mode insert is INSERT ... ON DUPLICATE KEY UPDATE, which in the worst case is in effect an INSERT followed by an UPDATE, where the allocated value for the AUTO_INCREMENT column may or may not be used during the update phase.

There are three possible settings for the innodb_autoinc_lock_mode parameter:

  • innodb_autoinc_lock_mode = 0 (traditional lock mode)

    This lock mode provides the same behavior as before innodb_autoinc_lock_mode existed. For all INSERT-like statements, a special table-level AUTO-INC lock is obtained and held to the end of the statement. This assures that the auto-increment values assigned by any given statement are consecutive.

    This lock mode is provided for:

    • Backward compatibility.

    • Performance testing.

    • Working around issues with mixed-mode inserts, due to the possible differences in semantics described later.

  • innodb_autoinc_lock_mode = 1 (consecutive lock mode)

    This is the default lock mode. In this mode, bulk inserts use the special AUTO-INC table-level lock and hold it until the end of the statement. This applies to all INSERT ... SELECT, REPLACE ... SELECT, and LOAD DATA statements. Only one statement holding the AUTO-INC lock can execute at a time.

    With this lock mode, simple inserts (only) use a new locking model where a light-weight mutex is used during the allocation of auto-increment values, and no table-level AUTO-INC lock is used, unless an AUTO-INC lock is held by another transaction. If another transaction does hold an AUTO-INC lock, a simple insert waits for the AUTO-INC lock, as if it too were a bulk insert.

    This lock mode ensures that, in the presence of INSERT statements where the number of rows is not known in advance (and where auto-increment numbers are assigned as the statement progresses), all auto-increment values assigned by any INSERT-like statement are consecutive, and operations are safe for statement-based replication.

    Simply put, the important impact of this lock mode is significantly better scalability. This mode is safe for use with statement-based replication. Further, as with traditional lock mode, auto-increment numbers assigned by any given statement are consecutive. In this mode, there is no change in semantics compared to traditional mode for any statement that uses auto-increment, with one important exception.

    The exception is for mixed-mode inserts, where the user provides explicit values for an AUTO_INCREMENT column for some, but not all, rows in a multiple-row simple insert. For such inserts, InnoDB will allocate more auto-increment values than the number of rows to be inserted. However, all values automatically assigned are consecutively generated (and thus higher than) the auto-increment value generated by the most recently executed previous statement. Excess numbers are lost.

  • innodb_autoinc_lock_mode = 2 (interleaved lock mode)

    In this lock mode, no INSERT-like statements use the table-level AUTO-INC lock, and multiple statements can execute at the same time. This is the fastest and most scalable lock mode, but it is not safe when using statement-based replication or recovery scenarios when SQL statements are replayed from the binary log.

    In this lock mode, auto-increment values are guaranteed to be unique and monotonically increasing across all concurrently executing INSERT-like statements. However, because multiple statements can be generating numbers at the same time (that is, allocation of numbers is interleaved across statements), the values generated for the rows inserted by any given statement may not be consecutive.

    If the only statements executing are simple inserts where the number of rows to be inserted is known ahead of time, there will be no gaps in the numbers generated for a single statement, except for mixed-mode inserts. However, when bulk inserts are executed, there may be gaps in the auto-increment values assigned by any given statement.
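To select one of these lock modes, set innodb_autoinc_lock_mode at server startup; it cannot be changed while the server is running. A minimal my.cnf sketch, using interleaved mode as an example:

[mysqld]
innodb_autoinc_lock_mode = 2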

The auto-increment locking modes provided by innodb_autoinc_lock_mode have several usage implications:

  • Using auto-increment with replication

    If you are using statement-based replication, set innodb_autoinc_lock_mode to 0 or 1 and use the same value on the master and its slaves. Auto-increment values are not ensured to be the same on the slaves as on the master if you use innodb_autoinc_lock_mode = 2 (interleaved) or configurations where the master and slaves do not use the same lock mode.

    If you are using row-based replication, all of the auto-increment lock modes are safe. Row-based replication is not sensitive to the order of execution of the SQL statements.

  • Lost auto-increment values and sequence gaps

    In all lock modes (0, 1, and 2), if a transaction that generated auto-increment values rolls back, those auto-increment values are lost. Once a value is generated for an auto-increment column, it cannot be rolled back, whether or not the INSERT-like statement is completed, and whether or not the containing transaction is rolled back. Such lost values are not reused. Thus, there may be gaps in the values stored in an AUTO_INCREMENT column of a table.

  • Gaps in auto-increment values for bulk inserts

    With innodb_autoinc_lock_mode set to 0 (traditional) or 1 (consecutive), the auto-increment values generated by any given statement will be consecutive, without gaps, because the table-level AUTO-INC lock is held until the end of the statement, and only one such statement can execute at a time.

    With innodb_autoinc_lock_mode set to 2 (interleaved), there may be gaps in the auto-increment values generated by bulk inserts, but only if there are concurrently executing INSERT-like statements.

    For lock modes 1 or 2, gaps may occur between successive statements because for bulk inserts the exact number of auto-increment values required by each statement may not be known and overestimation is possible.

  • Auto-increment values assigned by mixed-mode inserts

    Consider a mixed-mode insert, where a simple insert specifies the auto-increment value for some (but not all) resulting rows. Such a statement will behave differently in lock modes 0, 1, and 2. For example, assume c1 is an AUTO_INCREMENT column of table t1, and that the most recent automatically generated sequence number is 100. Consider the following mixed-mode insert statement:

    INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');

    With innodb_autoinc_lock_mode set to 0 (traditional), the four new rows will be:

    +-----+------+
    | c1  | c2   |
    +-----+------+
    |   1 | a    |
    | 101 | b    |
    |   5 | c    |
    | 102 | d    |
    +-----+------+
    

    The next available auto-increment value will be 103 because the auto-increment values are allocated one at a time, not all at once at the beginning of statement execution. This result is true whether or not there are concurrently executing INSERT-like statements (of any type).

    With innodb_autoinc_lock_mode set to 1 (consecutive), the four new rows will also be:

    +-----+------+
    | c1  | c2   |
    +-----+------+
    |   1 | a    |
    | 101 | b    |
    |   5 | c    |
    | 102 | d    |
    +-----+------+

    However, in this case, the next available auto-increment value will be 105, not 103 because four auto-increment values are allocated at the time the statement is processed, but only two are used. This result is true whether or not there are concurrently executing INSERT-like statements (of any type).

    With innodb_autoinc_lock_mode set to mode 2 (interleaved), the four new rows will be:

    +-----+------+
    | c1  | c2   |
    +-----+------+
    |   1 | a    |
    |   x | b    |
    |   5 | c    |
    |   y | d    |
    +-----+------+
    

    The values of x and y will be unique and larger than any previously generated rows. However, the specific values of x and y will depend on the number of auto-increment values generated by concurrently executing statements.

    Finally, consider the following statement, issued when the most-recently generated sequence number was the value 4:

    INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');

    With any innodb_autoinc_lock_mode setting, this statement will generate a duplicate-key error 23000 (Can't write; duplicate key in table) because 5 will be allocated for the row (NULL, 'b') and insertion of the row (5, 'c') will fail.

5.4.5. InnoDB and FOREIGN KEY Constraints

This section describes differences in the InnoDB storage engine's handling of foreign keys as compared with that of the MySQL Server.

Foreign Key Definitions

Foreign key definitions for InnoDB tables are subject to the following conditions:

  • InnoDB permits a foreign key to reference any index column or group of columns. However, in the referenced table, there must be an index where the referenced columns are listed as the first columns in the same order.

  • InnoDB does not currently support foreign keys for tables with user-defined partitioning. This means that no user-partitioned InnoDB table may contain foreign key references or columns referenced by foreign keys.

  • InnoDB allows a foreign key constraint to reference a non-unique key. This is an InnoDB extension to standard SQL.

Referential Actions

Referential actions for foreign keys of InnoDB tables are subject to the following conditions:

  • While SET DEFAULT is allowed by the MySQL Server, it is rejected as invalid by InnoDB. CREATE TABLE and ALTER TABLE statements using this clause are not allowed for InnoDB tables.

  • If there are several rows in the parent table that have the same referenced key value, InnoDB acts in foreign key checks as if the other parent rows with the same key value do not exist. For example, if you have defined a RESTRICT type constraint, and there is a child row with several parent rows, InnoDB does not permit the deletion of any of those parent rows.

  • InnoDB performs cascading operations through a depth-first algorithm, based on records in the indexes corresponding to the foreign key constraints.

  • If ON UPDATE CASCADE or ON UPDATE SET NULL recurses to update the same table it has previously updated during the cascade, it acts like RESTRICT. This means that you cannot use self-referential ON UPDATE CASCADE or ON UPDATE SET NULL operations. This is to prevent infinite loops resulting from cascaded updates. A self-referential ON DELETE SET NULL, on the other hand, is possible, as is a self-referential ON DELETE CASCADE. Cascading operations may not be nested more than 15 levels deep.

  • Like MySQL in general, in an SQL statement that inserts, deletes, or updates many rows, InnoDB checks UNIQUE and FOREIGN KEY constraints row-by-row. When performing foreign key checks, InnoDB sets shared row-level locks on child or parent records it has to look at. InnoDB checks foreign key constraints immediately; the check is not deferred to transaction commit. According to the SQL standard, the default behavior should be deferred checking. That is, constraints are only checked after the entire SQL statement has been processed. Until InnoDB implements deferred constraint checking, some things will be impossible, such as deleting a record that refers to itself using a foreign key.

Foreign Key Usage and Error Information

You can obtain general information about foreign keys and their usage by querying the INFORMATION_SCHEMA.KEY_COLUMN_USAGE table. Information more specific to InnoDB tables can be found in the INNODB_SYS_FOREIGN and INNODB_SYS_FOREIGN_COLS tables, also in the INFORMATION_SCHEMA database. See also Section 13.1.14.2, “Using FOREIGN KEY Constraints”.
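For example, a query along the following lines lists the foreign key columns defined in a given schema (the schema name test is a placeholder):

SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME,
       REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
  FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
 WHERE TABLE_SCHEMA = 'test'
   AND REFERENCED_TABLE_NAME IS NOT NULL;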

In addition to SHOW ERRORS, in the event of a foreign key error involving InnoDB tables (usually Error 150 in the MySQL Server), you can obtain a detailed explanation of the most recent InnoDB foreign key error by checking the output of SHOW ENGINE INNODB STATUS.

5.4.6. Working with InnoDB Compressed Tables

By using the SQL syntax and MySQL configuration options for compression, you can create tables where the data is stored in compressed form. Compression can help to improve both raw performance and scalability. The compression means less data is transferred between disk and memory, and takes up less space on disk and in memory. The benefits are amplified for tables with secondary indexes, because index data is compressed also. Compression can be especially important for SSD storage devices, because they tend to have lower capacity than HDD devices.

5.4.6.1. Overview of Table Compression

Because processors and cache memories have increased in speed more than disk storage devices, many workloads are disk-bound. Data compression enables smaller database size, reduced I/O, and improved throughput, at the small cost of increased CPU utilization. Compression is especially valuable for read-intensive applications, on systems with enough RAM to keep frequently used data in memory.

An InnoDB table created with ROW_FORMAT=COMPRESSED can use a smaller page size on disk than the usual 16KB default. Smaller pages require less I/O to read from and write to disk, which is especially valuable for SSD devices.

The page size is specified through the KEY_BLOCK_SIZE parameter. The different page size means the table must be in its own .ibd file rather than in the system tablespace, which requires enabling the innodb_file_per_table option. The level of compression is the same regardless of the KEY_BLOCK_SIZE value. As you specify smaller values for KEY_BLOCK_SIZE, you get the I/O benefits of increasingly smaller pages. But if you specify a value that is too small, there is additional overhead to reorganize the pages when data values cannot be compressed enough to fit multiple rows in each page. There is a hard limit on how small KEY_BLOCK_SIZE can be for a table, based on the lengths of the key columns for each of its indexes. Specify a value that is too small, and the CREATE TABLE or ALTER TABLE statement fails.

In the buffer pool, the compressed data is held in small pages, with a page size based on the KEY_BLOCK_SIZE value. For extracting or updating the column values, MySQL also creates a 16KB page in the buffer pool with the uncompressed data. Within the buffer pool, any updates to the uncompressed page are also re-written back to the equivalent compressed page. You might need to size your buffer pool to accommodate the additional data of both compressed and uncompressed pages, although the uncompressed pages are evicted from the buffer pool when space is needed, and then uncompressed again on the next access.

5.4.6.2. Enabling Compression for a Table

Before creating a compressed table, make sure the innodb_file_per_table configuration option is enabled, and innodb_file_format is set to Barracuda. You can set these parameters in the MySQL configuration file my.cnf or my.ini, or with the SET statement without shutting down the MySQL server.

To enable compression for a table, you use the clauses ROW_FORMAT=COMPRESSED, KEY_BLOCK_SIZE, or both in a CREATE TABLE or ALTER TABLE statement.

To create a compressed table, you might use statements like these:

SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_file_format=Barracuda;
CREATE TABLE t1
 (c1 INT PRIMARY KEY)
 ROW_FORMAT=COMPRESSED
 KEY_BLOCK_SIZE=8;
  • If you specify ROW_FORMAT=COMPRESSED, you can omit KEY_BLOCK_SIZE; the default compressed page size of 8KB is used.

  • If you specify KEY_BLOCK_SIZE, you can omit ROW_FORMAT=COMPRESSED; compression is enabled automatically.

  • To determine the best value for KEY_BLOCK_SIZE, typically you create several copies of the same table with different values for this clause, then measure the size of the resulting .ibd files and see how well each performs with a realistic workload.

  • For additional performance-related configuration options, see Section 5.4.6.3, “Tuning Compression for InnoDB Tables”.

The default uncompressed size of InnoDB data pages is 16KB. Depending on the combination of option values, MySQL uses a page size of 1KB, 2KB, 4KB, 8KB, or 16KB for the .ibd file of the table. The actual compression algorithm is not affected by the KEY_BLOCK_SIZE value; the value determines how large each compressed chunk is, which in turn affects how many rows can be packed into each compressed page.

Setting KEY_BLOCK_SIZE=16 typically does not result in much compression, since the normal InnoDB page size is 16KB. This setting may still be useful for tables with many long BLOB, VARCHAR or TEXT columns, because such values often do compress well, and might therefore require fewer overflow pages as described in Section 5.4.6.5, “How Compression Works for InnoDB Tables”.
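For example, a table that consists mainly of long text values might be declared as follows; this is only a sketch, with hypothetical table and column names, and it assumes innodb_file_per_table and innodb_file_format=Barracuda are enabled as described earlier:

-- 16KB compressed pages: B-tree pages stay the normal size,
-- but long values stored on overflow pages are compressed.
CREATE TABLE long_text_docs
 (id INT PRIMARY KEY,
  title VARCHAR(200),
  body TEXT)
 ROW_FORMAT=COMPRESSED
 KEY_BLOCK_SIZE=16;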

All indexes of a table (including the clustered index) are compressed using the same page size, as specified in the CREATE TABLE or ALTER TABLE statement. Table attributes such as ROW_FORMAT and KEY_BLOCK_SIZE are not part of the CREATE INDEX syntax, and are ignored if they are specified (although you see them in the output of the SHOW CREATE TABLE statement).

Restrictions on Compressed Tables

Because versions of InnoDB older than the InnoDB Plugin in MySQL 5.1 cannot process compressed tables, using compression requires specifying the configuration parameter innodb_file_format=Barracuda, to avoid accidentally introducing compatibility issues.

Table compression is also not available for the InnoDB system tablespace. The system tablespace (space 0, the ibdata* files) can contain user data, but it also contains internal system information, and therefore is never compressed. Thus, compression applies only to tables (and indexes) stored in their own tablespaces, that is, created with the innodb_file_per_table option enabled.

Compression applies to an entire table and all its associated indexes, not to individual rows, despite the clause name ROW_FORMAT.

5.4.6.3. Tuning Compression for InnoDB Tables

Most often, the internal optimizations described in Section 5.4.6.5, “InnoDB Data Storage and Compression”, ensure that the system runs well with compressed data. However, because the efficiency of compression depends on the nature of your data, you can make decisions that affect the performance of compressed tables:

  • Which tables to compress.

  • What compressed page size to use.

  • Whether to adjust the size of the buffer pool based on run-time performance characteristics, such as the amount of time the system spends compressing and uncompressing data.

  • Whether the workload is more like a data warehouse (primarily queries) or an OLTP system (mix of queries and DML).

  • If the system performs DML operations on compressed tables, and the way the data is distributed leads to expensive compression failures at runtime, you might adjust additional advanced configuration options.

Use the guidelines in this section to help make those architectural and configuration choices. When you are ready to conduct long-term testing and put compressed tables into production, see Section 5.4.6.4, “Monitoring Compression at Runtime” for ways to verify the effectiveness of those choices under real-world conditions.

When to Use Compression

In general, compression works best on tables that include a reasonable number of character string columns and where the data is read far more often than it is written. Because there are no guaranteed ways to predict whether or not compression benefits a particular situation, always test with a specific workload and data set running on a representative configuration. Consider the following factors when deciding which tables to compress.

Data Characteristics and Compression

A key determinant of the efficiency of compression in reducing the size of data files is the nature of the data itself. Recall that compression works by identifying repeated strings of bytes in a block of data. Completely randomized data is the worst case. Typical data often has repeated values, and so compresses effectively. Character strings often compress well, whether defined in CHAR, VARCHAR, TEXT or BLOB columns. On the other hand, tables containing mostly binary data (integers or floating-point numbers) or data that is already compressed (for example, JPEG or PNG images) might not compress much, or at all.

You choose whether to turn on compression for each InnoDB table. A table and all of its indexes use the same (compressed) page size. It might be that the primary key (clustered) index, which contains the data for all columns of a table, compresses more effectively than the secondary indexes. For those cases where there are long rows, the use of compression might result in long column values being stored off-page, as discussed in Section 5.4.8.3, “DYNAMIC and COMPRESSED Row Formats”. Those overflow pages may compress well. Given these considerations, for many applications, some tables compress more effectively than others, and you might find that your workload performs best only with a subset of tables compressed.

To determine whether or not to compress a particular table, conduct experiments. You can get a rough estimate of how efficiently your data can be compressed by using a utility that implements LZ77 compression (such as gzip or WinZip) on a copy of the .ibd file for an uncompressed table. You can expect less compression from a MySQL compressed table than from file-based compression tools, because MySQL compresses data in chunks based on the page size, 16KB by default. In addition to user data, the page format includes some internal system data that is not compressed. File-based compression utilities can examine much larger chunks of data, and so might find more repeated strings in a huge file than MySQL can find in an individual page.
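For instance, working inside the mysql client as in the example below, you might compress a copy of the .ibd file with shell commands like these (the paths are examples; adjust them for your own data directory, and work only on a copy of the file):

-- Rough compressibility estimate using gzip on a copy of the tablespace file.
\! cp data/test/big_table.ibd /tmp/big_table_copy.ibd
\! gzip /tmp/big_table_copy.ibd
\! ls -l /tmp/big_table_copy.ibd.gz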

Another way to test compression on a specific table is to copy some data from your uncompressed table to a similar, compressed table (having all the same indexes) and look at the size of the resulting .ibd file. For example:

use test;
set global innodb_file_per_table=1;
set global innodb_file_format=Barracuda;
set global autocommit=0;

-- Create an uncompressed table with a million or two rows.
create table big_table as select * from information_schema.columns;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
commit;
alter table big_table add id int unsigned not null primary key auto_increment;

show create table big_table\G

select count(id) from big_table;

-- Check how much space is needed for the uncompressed table.
\! ls -l data/test/big_table.ibd

create table key_block_size_4 like big_table;
alter table key_block_size_4 key_block_size=4 row_format=compressed;

insert into key_block_size_4 select * from big_table;
commit;

-- Check how much space is needed for a compressed table
-- with particular compression settings.
\! ls -l data/test/key_block_size_4.ibd

This experiment produced the following numbers, which of course could vary considerably depending on your table structure and data:

-rw-rw----  1 cirrus  staff  310378496 Jan  9 13:44 data/test/big_table.ibd
-rw-rw----  1 cirrus  staff  83886080 Jan  9 15:10 data/test/key_block_size_4.ibd

To see whether compression is efficient for your particular workload, monitor compression performance at runtime, as described in Section 5.4.6.4, “Monitoring Compression at Runtime”.

Database Compression versus Application Compression

Decide whether to compress data in your application or in the table; do not use both types of compression for the same data. When you compress the data in the application and store the results in a compressed table, extra space savings are extremely unlikely, and the double compression just wastes CPU cycles.

Compressing in the Database

When enabled, MySQL table compression is automatic and applies to all columns and index values. The columns can still be tested with operators such as LIKE, and sort operations can still use indexes even when the index values are compressed. Because indexes are often a significant fraction of the total size of a database, compression could result in significant savings in storage, I/O or processor time. The compression and decompression operations happen on the database server, which likely is a powerful system that is sized to handle the expected load.

Compressing in the Application

If you compress data such as text in your application before it is inserted into the database, you might save overhead for data that does not compress well by compressing some columns and not others. This approach uses CPU cycles for compression and uncompression on the client machine rather than the database server, which might be appropriate for a distributed application with many clients, or where the client machine has spare CPU cycles.
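As a rough sketch of the application-side approach, you could compress individual values with the MySQL COMPRESS() and UNCOMPRESS() functions (or an equivalent library in the client program) and store the results in an ordinary uncompressed table. The table and column names here are hypothetical:

-- The BLOB column holds data already compressed by the application or by COMPRESS().
CREATE TABLE app_compressed_docs
 (id INT PRIMARY KEY,
  body_comp BLOB)
 ROW_FORMAT=COMPACT;   -- no InnoDB table compression, to avoid double compression

INSERT INTO app_compressed_docs VALUES (1, COMPRESS('some long text value ...'));

SELECT UNCOMPRESS(body_comp) FROM app_compressed_docs WHERE id = 1;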

Hybrid Approach

Of course, it is possible to combine these approaches. For some applications, it may be appropriate to use some compressed tables and some uncompressed tables. It may be best to externally compress some data (and store it in uncompressed tables) and allow MySQL to compress (some of) the other tables in the application. As always, up-front design and real-life testing are valuable in reaching the right decision.

Workload Characteristics and Compression

In addition to choosing which tables to compress (and the page size), the workload is another key determinant of performance. If the application is dominated by reads, rather than updates, fewer pages need to be reorganized and recompressed after the index page runs out of room for the per-page modification log that MySQL maintains for compressed data. If the updates predominantly change non-indexed columns or those containing BLOBs or large strings that happen to be stored off-page, the overhead of compression may be acceptable. If the only changes to a table are INSERTs that use a monotonically increasing primary key, and there are few secondary indexes, there is little need to reorganize and recompress index pages. Since MySQL can delete-mark and delete rows on compressed pages in place by modifying uncompressed data, DELETE operations on a table are relatively efficient.

For some environments, the time it takes to load data can be as important as run-time retrieval. Especially in data warehouse environments, many tables may be read-only or read-mostly. In those cases, it might or might not be acceptable to pay the price of compression in terms of increased load time, unless the resulting savings in fewer disk reads or in storage cost is significant.

Fundamentally, compression works best when the CPU time is available for compressing and uncompressing data. Thus, if your workload is I/O bound, rather than CPU-bound, you might find that compression can improve overall performance. When you test your application performance with different compression configurations, test on a platform similar to the planned configuration of the production system.

Configuration Characteristics and Compression

Reading and writing database pages from and to disk is the slowest aspect of system performance. Compression attempts to reduce I/O by using CPU time to compress and uncompress data, and is most effective when I/O is a relatively scarce resource compared to processor cycles.

This is often especially the case when running in a multi-user environment with fast, multi-core CPUs. When a page of a compressed table is in memory, MySQL often uses additional memory, typically 16KB, in the buffer pool for an uncompressed copy of the page. The adaptive LRU algorithm attempts to balance the use of memory between compressed and uncompressed pages to take into account whether the workload is running in an I/O-bound or CPU-bound manner. Still, a configuration with more memory dedicated to the buffer pool tends to run better when using compressed tables than a configuration where memory is highly constrained.

Choosing the Compressed Page Size

The optimal setting of the compressed page size depends on the type and distribution of data that the table and its indexes contain. The compressed page size should always be bigger than the maximum record size, or operations may fail as noted in Section 5.4.6.5, “Compression of B-Tree Pages”.

Setting the compressed page size too large wastes some space, but the pages do not have to be compressed as often. If the compressed page size is set too small, inserts or updates may require time-consuming recompression, and the B-tree nodes may have to be split more frequently, leading to bigger data files and less efficient indexing.

Typically, you set the compressed page size to 8K or 4K bytes. Given that the maximum row size for an InnoDB table is around 8K, KEY_BLOCK_SIZE=8 is usually a safe choice.

5.4.6.4. Monitoring Compression at Runtime

Overall application performance, CPU and I/O utilization and the size of disk files are good indicators of how effective compression is for your application. This section builds on the performance tuning advice from Section 5.4.6.3, “Tuning Compression for InnoDB Tables”, and shows how to find problems that might not turn up during initial testing.

To dig deeper into performance considerations for compressed tables, you can monitor compression performance at runtime using the Information Schema tables described in Example 14.2, “Using the Compression Information Schema Tables”. These tables reflect the internal use of memory and the rates of compression used overall.

The INNODB_CMP table reports information about compression activity for each compressed page size (KEY_BLOCK_SIZE) in use. The information in this table is system-wide: it summarizes the compression statistics across all compressed tables in your database. You can use this data to help decide whether or not to compress a table, by examining the table when no other compressed tables are being accessed. Querying this table involves relatively low overhead on the server, so you might query it periodically on a production server to check the overall efficiency of the compression feature.

The INNODB_CMP_PER_INDEX table reports information about compression activity for individual tables and indexes. This information is more targeted and more useful for evaluating compression efficiency and diagnosing performance issues one table or index at a time. (Because each InnoDB table is represented as a clustered index, MySQL does not make a big distinction between tables and indexes in this context.) The INNODB_CMP_PER_INDEX table does involve substantial overhead, so it is more suitable for development servers, where you can compare the effects of different workloads, data, and compression settings in isolation. To guard against imposing this monitoring overhead by accident, you must enable the innodb_cmp_per_index_enabled configuration option before you can query the INNODB_CMP_PER_INDEX table.
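On a test server, you might enable the per-index statistics and query them with statements like these (a sketch; the option and table are available only in MySQL versions that include the INNODB_CMP_PER_INDEX feature):

SET GLOBAL innodb_cmp_per_index_enabled=ON;

-- Run a representative workload, then examine compression activity per index.
SELECT database_name, table_name, index_name,
       compress_ops, compress_ops_ok, compress_time
  FROM INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX
 ORDER BY compress_ops DESC;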

The key statistics to consider are the number of, and amount of time spent performing, compression and uncompression operations. Since MySQL splits B-tree nodes when they are too full to contain the compressed data following a modification, compare the number of successful compression operations with the number of such operations overall. Based on the information in the INNODB_CMP and INNODB_CMP_PER_INDEX tables and overall application performance and hardware resource utilization, you might make changes in your hardware configuration, adjust the size of the buffer pool, choose a different page size, or select a different set of tables to compress.

If the amount of CPU time required for compressing and uncompressing is high, changing to faster or multi-core CPUs can help improve performance with the same data, application workload and set of compressed tables. Increasing the size of the buffer pool might also help performance, so that more uncompressed pages can stay in memory, reducing the need to uncompress pages that exist in memory only in compressed form.

A large number of compression operations overall (compared to the number of INSERT, UPDATE and DELETE operations in your application and the size of the database) could indicate that some of your compressed tables are being updated too heavily for effective compression. If so, choose a larger page size, or be more selective about which tables you compress.

If the number of successful compression operations (COMPRESS_OPS_OK) is a high percentage of the total number of compression operations (COMPRESS_OPS), then the system is likely performing well. If the ratio is low, then MySQL is reorganizing, recompressing, and splitting B-tree nodes more often than is desirable. In this case, avoid compressing some tables, or increase KEY_BLOCK_SIZE for some of the compressed tables. You might turn off compression for tables that cause the number of compression failures in your application to be more than 1% or 2% of the total. (Such a failure ratio might be acceptable during a temporary operation such as a data load).
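For example, a query along these lines computes the success ratio for each compressed page size in use (a sketch based on the INNODB_CMP columns described above):

SELECT page_size,
       compress_ops,
       compress_ops_ok,
       compress_ops_ok / compress_ops AS success_ratio
  FROM INFORMATION_SCHEMA.INNODB_CMP
 WHERE compress_ops > 0;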

5.4.6.5. How Compression Works for InnoDB Tables

This section describes some internal implementation details about MySQL compression for InnoDB tables. The information presented here may be helpful in tuning for performance, but is not necessary to know for basic use of compression.

Compression Algorithms

Some operating systems implement compression at the file system level. Files are typically divided into fixed-size blocks that are compressed into variable-size blocks, which easily leads to fragmentation. Every time something inside a block is modified, the whole block is recompressed before it is written to disk. These properties make this compression technique unsuitable for use in an update-intensive database system.

MySQL implements compression with the help of the well-known zlib library, which implements the LZ77 compression algorithm. This compression algorithm is mature, robust, and efficient in both CPU utilization and in reduction of data size. The algorithm is lossless, so that the original uncompressed data can always be reconstructed from the compressed form. LZ77 compression works by finding sequences of data that are repeated within the data to be compressed. The patterns of values in your data determine how well it compresses, but typical user data often compresses by 50% or more.

Unlike compression performed by an application, or compression features of some other database management systems, InnoDB compression applies both to user data and to indexes. In many cases, indexes can constitute 40-50% or more of the total database size, so this difference is significant. When compression is working well for a data set, the size of the InnoDB data files (the .ibd files) is 25% to 50% of the uncompressed size or possibly smaller. Depending on the workload, this smaller database can in turn lead to a reduction in I/O, and an increase in throughput, at a modest cost in terms of increased CPU utilization. You can adjust the balance between compression level and CPU overhead by modifying the innodb_compression_level configuration option.

InnoDB Data Storage and Compression

All user data in InnoDB tables is stored in pages comprising a B-tree index (the clustered index). In some other database systems, this type of index is called an index-organized table. Each row in the index node contains the values of the (user-specified or system-generated) primary key and all the other columns of the table.

Secondary indexes in InnoDB tables are also B-trees, containing pairs of values: the index key and a pointer to a row in the clustered index. The pointer is in fact the value of the primary key of the table, which is used to access the clustered index if columns other than the index key and primary key are required. Secondary index records must always fit on a single B-tree page.

The compression of B-tree nodes (of both clustered and secondary indexes) is handled differently from compression of overflow pages used to store long VARCHAR, BLOB, or TEXT columns, as explained in the following sections.

Compression of B-Tree Pages

Because they are frequently updated, B-tree pages require special treatment. It is important to minimize the number of times B-tree nodes are split, as well as to minimize the need to uncompress and recompress their content.

One technique MySQL uses is to maintain some system information in the B-tree node in uncompressed form, thus facilitating certain in-place updates. For example, this allows rows to be delete-marked and deleted without any compression operation.

In addition, MySQL attempts to avoid unnecessary uncompression and recompression of index pages when they are changed. Within each B-tree page, the system keeps an uncompressed modification log to record changes made to the page. Updates and inserts of small records may be written to this modification log without requiring the entire page to be completely reconstructed.

When the space for the modification log runs out, InnoDB uncompresses the page, applies the changes and recompresses the page. If recompression fails (a situation known as a compression failure), the B-tree nodes are split and the process is repeated until the update or insert succeeds.

To avoid frequent compression failures in write-intensive workloads, such as for OLTP applications, MySQL sometimes reserves some empty space (padding) in the page, so that the modification log fills up sooner and the page is recompressed while there is still enough room to avoid splitting it. The amount of padding space left in each page varies as the system keeps track of the frequency of page splits. On a busy server doing frequent writes to compressed tables, you can adjust the innodb_compression_failure_threshold_pct, and innodb_compression_pad_pct_max configuration options to fine-tune this mechanism.
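For example, on a write-heavy server you might adjust these options as follows; the options exist only in MySQL versions that include the compression padding mechanism, and the values shown are purely illustrative, not recommendations:

SET GLOBAL innodb_compression_failure_threshold_pct=5;  -- begin adding padding once 5% of compression attempts fail
SET GLOBAL innodb_compression_pad_pct_max=50;           -- never reserve more than 50% of a page as padding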

Generally, MySQL requires that each B-tree page in an InnoDB table can accommodate at least two records. For compressed tables, this requirement has been relaxed. Leaf pages of B-tree nodes (whether of the primary key or secondary indexes) only need to accommodate one record, but that record must fit, in uncompressed form, in the per-page modification log. If innodb_strict_mode is ON, MySQL checks the maximum row size during CREATE TABLE or CREATE INDEX. If the row does not fit, the following error message is issued: ERROR HY000: Too big row.

If you create a table when innodb_strict_mode is OFF, and a subsequent INSERT or UPDATE statement attempts to create an index entry that does not fit in the size of the compressed page, the operation fails with ERROR 42000: Row size too large. (This error message does not name the index for which the record is too large, or mention the length of the index record or the maximum record size on that particular index page.) To solve this problem, rebuild the table with ALTER TABLE and select a larger compressed page size (KEY_BLOCK_SIZE), shorten any column prefix indexes, or disable compression entirely with ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPACT.
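For example, if a hypothetical compressed table t1 hits this error, either of the following rebuilds might resolve it:

-- Use a larger compressed page size.
ALTER TABLE t1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Or give up compression for this table while keeping long columns stored off-page.
ALTER TABLE t1 ROW_FORMAT=DYNAMIC;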

Compressing BLOB, VARCHAR, and TEXT Columns

In an InnoDB table, BLOB, VARCHAR, and TEXT columns that are not part of the primary key may be stored on separately allocated overflow pages. We refer to these columns as off-page columns. Their values are stored on singly-linked lists of overflow pages.

For tables created in ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED, the values of BLOB, TEXT, or VARCHAR columns may be stored fully off-page, depending on their length and the length of the entire row. For columns that are stored off-page, the clustered index record only contains 20-byte pointers to the overflow pages, one per column. Whether any columns are stored off-page depends on the page size and the total size of the row. When the row is too long to fit entirely within the page of the clustered index, MySQL chooses the longest columns for off-page storage until the row fits on the clustered index page. As noted above, if a row does not fit by itself on a compressed page, an error occurs.

Tables created in older versions of MySQL use the Antelope file format, which supports only ROW_FORMAT=REDUNDANT and ROW_FORMAT=COMPACT. In these formats, MySQL stores the first 768 bytes of BLOB, VARCHAR, and TEXT columns in the clustered index record along with the primary key. The 768-byte prefix is followed by a 20-byte pointer to the overflow pages that contain the rest of the column value.

When a table is in COMPRESSED format, all data written to overflow pages is compressed as is; that is, MySQL applies the zlib compression algorithm to the entire data item. Other than the data, compressed overflow pages contain an uncompressed header and trailer comprising a page checksum and a link to the next overflow page, among other things. Therefore, very significant storage savings can be obtained for longer BLOB, TEXT, or VARCHAR columns if the data is highly compressible, as is often the case with text data. Image data, such as JPEG, is typically already compressed and so does not benefit much from being stored in a compressed table; the double compression can waste CPU cycles for little or no space savings.

The overflow pages are of the same size as other pages. A row containing ten columns stored off-page occupies ten overflow pages, even if the total length of the columns is only 8K bytes. In an uncompressed table, ten uncompressed overflow pages occupy 160K bytes. In a compressed table with an 8K page size, they occupy only 80K bytes. Thus, it is often more efficient to use compressed table format for tables with long column values.

Using a 16K compressed page size can reduce storage and I/O costs for BLOB, VARCHAR, or TEXT columns, because such data often compresses well, and might therefore require fewer overflow pages, even though the B-tree nodes themselves take as many pages as in the uncompressed form.

Compression and the InnoDB Buffer Pool

In a compressed InnoDB table, every compressed page (whether 1K, 2K, 4K or 8K) corresponds to an uncompressed page of 16K bytes (or a smaller size if innodb_page_size is set). To access the data in a page, MySQL reads the compressed page from disk if it is not already in the buffer pool, then uncompresses the page to its original form. This section describes how InnoDB manages the buffer pool with respect to pages of compressed tables.

To minimize I/O and to reduce the need to uncompress a page, at times the buffer pool contains both the compressed and uncompressed form of a database page. To make room for other required database pages, MySQL can evict from the buffer pool an uncompressed page, while leaving the compressed page in memory. Or, if a page has not been accessed in a while, the compressed form of the page might be written to disk, to free space for other data. Thus, at any given time, the buffer pool might contain both the compressed and uncompressed forms of the page, or only the compressed form of the page, or neither.

MySQL keeps track of which pages to keep in memory and which to evict using a least-recently-used (LRU) list, so that hot (frequently accessed) data tends to stay in memory. When compressed tables are accessed, MySQL uses an adaptive LRU algorithm to achieve an appropriate balance of compressed and uncompressed pages in memory. This adaptive algorithm is sensitive to whether the system is running in an I/O-bound or CPU-bound manner. The goal is to avoid spending too much processing time uncompressing pages when the CPU is busy, and to avoid doing excess I/O when the CPU has spare cycles that can be used for uncompressing compressed pages (that may already be in memory). When the system is I/O-bound, the algorithm prefers to evict the uncompressed copy of a page rather than both copies, to make more room for other disk pages to become memory resident. When the system is CPU-bound, MySQL prefers to evict both the compressed and uncompressed page, so that more memory can be used for hot pages, reducing the need to uncompress data that is held in memory only in compressed form.

Compression and the InnoDB Redo Log Files

Before a compressed page is written to a data file, MySQL writes a copy of the page to the redo log (if it has been recompressed since the last time it was written to the database). This is done to ensure that redo logs are usable for crash recovery, even in the unlikely case that the zlib library is upgraded and that change introduces a compatibility problem with the compressed data. Therefore, some increase in the size of log files, or a need for more frequent checkpoints, can be expected when using compression. The amount of increase in the log file size or checkpoint frequency depends on the number of times compressed pages are modified in a way that requires reorganization and recompression.

Note that compressed tables use a different file format for the redo log and the per-table tablespaces than in MySQL 5.1 and earlier. The MySQL Enterprise Backup product supports this latest Barracuda file format for compressed InnoDB tables. The older InnoDB Hot Backup product can only back up tables using the file format Antelope, and thus does not support compressed InnoDB tables.

5.4.6.6. SQL Compression Syntax Warnings and Errors

Specifying ROW_FORMAT=COMPRESSED or KEY_BLOCK_SIZE in CREATE TABLE or ALTER TABLE statements produces the following warnings if the Barracuda file format is not enabled. You can view them with the SHOW WARNINGS statement.

Level   | Code | Message
Warning | 1478 | InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.
Warning | 1478 | InnoDB: KEY_BLOCK_SIZE requires innodb_file_format=1
Warning | 1478 | InnoDB: ignoring KEY_BLOCK_SIZE=4.
Warning | 1478 | InnoDB: ROW_FORMAT=COMPRESSED requires innodb_file_per_table.
Warning | 1478 | InnoDB: assuming ROW_FORMAT=COMPACT.

Notes:

  • By default, these messages are only warnings, not errors, and the table is created without compression, as if the options were not specified.

  • When innodb_strict_mode is enabled, MySQL generates an error, not a warning, for these cases. The table is not created if the current configuration does not permit using compressed tables.

The non-strict behavior lets you import a mysqldump file into a database that does not support compressed tables, even if the source database contained compressed tables. In that case, MySQL creates the table in ROW_FORMAT=COMPACT instead of preventing the operation.

To import the dump file into a new database, and have the tables re-created as they exist in the original database, ensure the server has the proper settings for the configuration parameters innodb_file_format and innodb_file_per_table.
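For example, before replaying the dump file you might issue statements like these (the database and file names are hypothetical):

SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_file_per_table=1;
-- Then, from the shell, reload the dump: mysql newdb < dump_with_compressed_tables.sql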

The attribute KEY_BLOCK_SIZE is permitted only when ROW_FORMAT is specified as COMPRESSED or is omitted. Specifying a KEY_BLOCK_SIZE with any other ROW_FORMAT generates a warning that you can view with SHOW WARNINGS. However, the table is not compressed; the specified KEY_BLOCK_SIZE is ignored:

Level   | Code | Message
Warning | 1478 | InnoDB: ignoring KEY_BLOCK_SIZE=n unless ROW_FORMAT=COMPRESSED.

If you are running with innodb_strict_mode enabled, the combination of a KEY_BLOCK_SIZE with any ROW_FORMAT other than COMPRESSED generates an error, not a warning, and the table is not created.

Table 5.5, “Meaning of CREATE TABLE and ALTER TABLE options” summarizes how the various options on CREATE TABLE and ALTER TABLE are handled.

Table 5.5. Meaning of CREATE TABLE and ALTER TABLE options

Option | Usage | Description
ROW_FORMAT=REDUNDANT | Storage format used prior to MySQL 5.0.3 | Less efficient than ROW_FORMAT=COMPACT; for backward compatibility
ROW_FORMAT=COMPACT | Default storage format since MySQL 5.0.3 | Stores a prefix of 768 bytes of long column values in the clustered index page, with the remaining bytes stored in an overflow page
ROW_FORMAT=DYNAMIC | Available only with innodb_file_format=Barracuda | Stores values within the clustered index page if they fit; if not, stores only a 20-byte pointer to an overflow page (no prefix)
ROW_FORMAT=COMPRESSED | Available only with innodb_file_format=Barracuda | Compresses the table and indexes using zlib to a default compressed page size of 8K bytes; implies ROW_FORMAT=DYNAMIC
KEY_BLOCK_SIZE=n | Available only with innodb_file_format=Barracuda | Specifies a compressed page size of 1, 2, 4, 8 or 16 kilobytes; implies ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPRESSED

Table 5.6, “CREATE/ALTER TABLE Warnings and Errors when InnoDB Strict Mode is OFF” summarizes error conditions that occur with certain combinations of configuration parameters and options on the CREATE TABLE or ALTER TABLE statements, and how the options appear in the output of SHOW TABLE STATUS.

When innodb_strict_mode is OFF, MySQL creates or alters the table, but ignores certain settings as shown below. You can see the warning messages in the MySQL error log. When innodb_strict_mode is ON, these specified combinations of options generate errors, and the table is not created or altered. To see the full description of the error condition, issue the SHOW ERRORS statement, as in this example:

mysql> CREATE TABLE x (id INT PRIMARY KEY, c INT)
    -> ENGINE=INNODB KEY_BLOCK_SIZE=33333;
ERROR 1005 (HY000): Can't create table 'test.x' (errno: 1478)

mysql> SHOW ERRORS;
+-------+------+-------------------------------------------+
| Level | Code | Message                                   |
+-------+------+-------------------------------------------+
| Error | 1478 | InnoDB: invalid KEY_BLOCK_SIZE=33333.     |
| Error | 1005 | Can't create table 'test.x' (errno: 1478) |
+-------+------+-------------------------------------------+
2 rows in set (0.00 sec)

Table 5.6. CREATE/ALTER TABLE Warnings and Errors when InnoDB Strict Mode is OFF

Syntax | Warning or Error Condition | Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS
ROW_FORMAT=REDUNDANT | None | REDUNDANT
ROW_FORMAT=COMPACT | None | COMPACT
ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC or KEY_BLOCK_SIZE is specified | Ignored unless both innodb_file_format=Barracuda and innodb_file_per_table are enabled | COMPACT
Invalid KEY_BLOCK_SIZE is specified (not 1, 2, 4, 8 or 16) | KEY_BLOCK_SIZE is ignored | the requested row format, or COMPACT by default
ROW_FORMAT=COMPRESSED and valid KEY_BLOCK_SIZE are specified | None; the specified KEY_BLOCK_SIZE is used, not the 8K default | COMPRESSED
KEY_BLOCK_SIZE is specified with REDUNDANT, COMPACT or DYNAMIC row format | KEY_BLOCK_SIZE is ignored | REDUNDANT, COMPACT or DYNAMIC
ROW_FORMAT is not one of REDUNDANT, COMPACT, DYNAMIC or COMPRESSED | Ignored if recognized by the MySQL parser; otherwise, an error is issued | COMPACT or N/A

When innodb_strict_mode is ON, MySQL rejects invalid ROW_FORMAT or KEY_BLOCK_SIZE parameters. For compatibility with earlier versions of MySQL, strict mode is not enabled by default; instead, MySQL issues warnings (not errors) for ignored invalid parameters.

Note that it is not possible to see the chosen KEY_BLOCK_SIZE using SHOW TABLE STATUS. The statement SHOW CREATE TABLE displays the KEY_BLOCK_SIZE (even if it was ignored when creating the table). The real compressed page size of the table cannot be displayed by MySQL.

5.4.7. InnoDB File-Format Management

As InnoDB evolves, new on-disk data structures are sometimes required to support new features. Features such as compressed tables (see Section 5.4.6, “Working with InnoDB Compressed Tables”), and long variable-length columns stored off-page (see Section 5.4.8, “How InnoDB Stores Variable-Length Columns”) require data file formats that are not compatible with prior versions of InnoDB. These features both require use of the new Barracuda file format.

Note

All other new features are compatible with the original Antelope file format and do not require the Barracuda file format.

This section discusses enabling file formats for new InnoDB tables, verifying compatibility of different file formats between MySQL releases, identifying the file format in use, downgrading the file format, and file format names that may be used in the future.

Named File Formats.  InnoDB 1.1 introduces the idea of a named file format and a configuration parameter to enable the use of features that require that format. The new file format is the Barracuda format, and the original InnoDB file format is called Antelope. Compressed tables and the new row format that stores long columns off-page require the use of the Barracuda file format or newer. Future versions of InnoDB may introduce a series of file formats, identified with the names of animals, in ascending alphabetic order.

5.4.7.1. Enabling File Formats

The configuration parameter innodb_file_format controls whether such statements as CREATE TABLE and ALTER TABLE can be used to create tables that depend on support for the Barracuda file format.

Although Oracle recommends using the Barracuda format for new tables where practical, in MySQL 5.5 the default file format is still Antelope, for maximum compatibility with replication configurations containing different MySQL releases.

The file format is a dynamic, global parameter that can be specified in the MySQL option file (my.cnf or my.ini) or changed with the SET GLOBAL command.
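For example, either of the following enables the Barracuda format; the option file setting takes effect at the next server startup:

-- At runtime, from an account with the required privileges:
SET GLOBAL innodb_file_format=Barracuda;

-- Or in the [mysqld] section of my.cnf or my.ini:
--   innodb_file_format=Barracuda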

5.4.7.2. Verifying File Format Compatibility

InnoDB 1.1 incorporates several checks to guard against the possible crashes and data corruptions that might occur if you run an older release of the MySQL server on InnoDB data files using a newer file format. These checks take place when the server is started, and when you first access a table. This section describes these checks, how you can control them, and error and warning conditions that might arise.

Backward Compatibility

Considerations of backward compatibility only apply when using a recent version of InnoDB (the InnoDB Plugin, or MySQL 5.5 and higher with InnoDB 1.1) alongside an older one (MySQL 5.1 or earlier, with the built-in InnoDB rather than the InnoDB Plugin). To minimize the chance of compatibility issues, you can standardize on the InnoDB Plugin for all your MySQL 5.1 and earlier database servers.

In general, a newer version of InnoDB may create a table or index that cannot safely be read or written with a prior version of InnoDB without risk of crashes, hangs, wrong results or corruptions. InnoDB 1.1 includes a mechanism to guard against these conditions, and to help preserve compatibility among database files and versions of InnoDB. This mechanism lets you take advantage of some new features of an InnoDB release (such as performance improvements and bug fixes), and still preserve the option of using your database with a prior version of InnoDB, by preventing accidental use of new features that create downward-incompatible disk files.

If a version of InnoDB supports a particular file format (whether or not that format is the default), you can query and update any table that requires that format or an earlier format. Only the creation of new tables using new features is limited based on the particular file format enabled. Conversely, if a tablespace contains a table or index that uses a file format that is not supported by the currently running software, it cannot be accessed at all, even for read access.

The only way to downgrade an InnoDB tablespace to an earlier file format is to copy the data to a new table, in a tablespace that uses the earlier format. This can be done with the ALTER TABLE statement, as described in Section 5.4.7.4, “Downgrading the File Format”.

The easiest way to determine the file format of an existing InnoDB tablespace is to examine the properties of the table it contains, using the SHOW TABLE STATUS command or querying the table INFORMATION_SCHEMA.TABLES. If the Row_format of the table is reported as 'Compressed' or 'Dynamic', the tablespace containing the table uses the Barracuda format. Otherwise, it uses the prior InnoDB file format, Antelope.
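For example, the following sketch lists the InnoDB tables in a hypothetical database mydb together with their row formats; a Row_format of 'Compressed' or 'Dynamic' indicates the Barracuda file format:

SELECT TABLE_NAME, ROW_FORMAT
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = 'mydb' AND ENGINE = 'InnoDB';

-- Or, for a single table:
SHOW TABLE STATUS FROM mydb LIKE 't1'\G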

Internal Details

Every InnoDB per-table tablespace (represented by a *.ibd file) is labeled with a file format identifier. The system tablespace (represented by the ibdata files) is tagged with the highest file format in use in a group of InnoDB database files, and this tag is checked when the files are opened.

Creating a compressed table, or a table with ROW_FORMAT=DYNAMIC, updates the file header for the corresponding .ibd file and the table type in the InnoDB data dictionary with the identifier for the Barracuda file format. From that point forward, the table cannot be used with a version of InnoDB that does not support this new file format. To protect against anomalous behavior, InnoDB version 5.0.21 and later performs a compatibility check when the table is opened. (In many cases, the ALTER TABLE statement recreates a table and thus changes its properties. The special case of adding or dropping indexes without rebuilding the table is described in Fast Index Creation in the InnoDB Storage Engine.)

Definition of ib-file set

To avoid confusion, for the purposes of this discussion we define the term ib-file set to mean the set of operating system files that InnoDB manages as a unit. The ib-file set includes the following files:

  • The system tablespace (one or more ibdata files) that contain internal system information (including internal catalogs and undo information) and may include user data and indexes.

  • Zero or more single-table tablespaces (also called file per table files, named *.ibd files).

  • InnoDB log files; usually two, ib_logfile0 and ib_logfile1. Used for crash recovery and in backups.

An ib-file set does not include the corresponding .frm files that contain metadata about InnoDB tables. The .frm files are created and managed by MySQL, and can sometimes get out of sync with the internal metadata in InnoDB.

Multiple tables, even from more than one database, can be stored in a single ib-file set. (In MySQL, a database is a logical collection of tables, what other systems refer to as a schema or catalog.)

5.4.7.2.1. Compatibility Check When InnoDB Is Started

To prevent possible crashes or data corruptions when InnoDB opens an ib-file set, it checks that it can fully support the file formats in use within the ib-file set. If the system is restarted following a crash, or a fast shutdown (i.e., innodb_fast_shutdown is greater than zero), there may be on-disk data structures (such as redo or undo entries, or doublewrite pages) that are in a too-new format for the current software. During the recovery process, serious damage can be done to your data files if these data structures are accessed. The startup check of the file format occurs before any recovery process begins, thereby preventing consistency issues with the new tables or startup problems for the MySQL server.

Beginning with InnoDB version 1.0.1, the system tablespace records an identifier or tag for the highest file format used by any table in any of the tablespaces that is part of the ib-file set. Checks against this file format tag are controlled by the configuration parameter innodb_file_format_check, which is ON by default.

If the file format tag in the system tablespace is newer or higher than the highest version supported by the particular currently executing software and if innodb_file_format_check is ON, the following error is issued when the server is started:

InnoDB: Error: the system tablespace is in a
file format that this version doesn't support

You can also set innodb_file_format_check to a file format name. Doing so prevents InnoDB from starting if the current software does not support the file format specified. It also sets the high water mark to the value you specify. The ability to set innodb_file_format_check in this way will be useful (with future releases of InnoDB) if you manually downgrade all of the tables in an ib-file set (as described in Downgrading the InnoDB Storage Engine). You can then rely on the file format check at startup if you subsequently use an older version of InnoDB to access the ib-file set.

In some limited circumstances, you might want to start the server and use an ib-file set that is in a too new format (one that is not supported by the software you are using). If you set the configuration parameter innodb_file_format_check to OFF, InnoDB opens the database, but issues this warning message in the error log:

InnoDB: Warning: the system tablespace is in a
file format that this version doesn't support

Note

This is a very dangerous setting, as it permits the recovery process to run, possibly corrupting your database if the previous shutdown was a crash or fast shutdown. You should only set innodb_file_format_check to OFF if you are sure that the previous shutdown was done with innodb_fast_shutdown=0, so that essentially no recovery process occurs. In a future release, this parameter setting may be renamed from OFF to UNSAFE. (However, until there are newer releases of InnoDB that support additional file formats, even disabling the startup checking is in fact safe.)

The parameter innodb_file_format_check affects only what happens when a database is opened, not subsequently. Conversely, the parameter innodb_file_format (which enables a specific format) only determines whether or not a new table can be created in the enabled format and has no effect on whether or not a database can be opened.

The file format tag is a high water mark, and as such it is increased after the server is started, if a table in a higher format is created or an existing table is accessed for read or write (assuming its format is supported). If you access an existing table in a format higher than the format the running software supports, the system tablespace tag is not updated, but table-level compatibility checking applies (and an error is issued), as described in Section 5.4.7.2.2, “Compatibility Check When a Table Is Opened”. Any time the high water mark is updated, the value of innodb_file_format_check is updated as well, so the command SELECT @@innodb_file_format_check; displays the name of the newest file format known to be used by tables in the currently open ib-file set and supported by the currently executing software.

To best illustrate this behavior, consider the scenario described in Table 5.7, “InnoDB Data File Compatibility and Related InnoDB Parameters”. Imagine that some future version of InnoDB supports the Cheetah format and that an ib-file set has been used with that version.

Table 5.7. InnoDB Data File Compatibility and Related InnoDB Parameters

innodb_file_format_check | innodb_file_format | Highest file format used in ib-file set | Highest file format supported by InnoDB | Result
OFF | Antelope or Barracuda | Barracuda | Barracuda | Database can be opened; tables can be created which require Antelope or Barracuda file format
OFF | Antelope or Barracuda | Cheetah | Barracuda | Database can be opened with a warning, since the database contains files in a too-new format; tables can be created in Antelope or Barracuda file format; tables in Cheetah format cannot be accessed
OFF | Cheetah | Barracuda | Barracuda | Database cannot be opened; innodb_file_format cannot be set to Cheetah
ON | Antelope or Barracuda | Barracuda | Barracuda | Database can be opened; tables can be created in Antelope or Barracuda file format
ON | Antelope or Barracuda | Cheetah | Barracuda | Database cannot be opened, since the database contains files in a too-new format (Cheetah)
ON | Cheetah | Barracuda | Barracuda | Database cannot be opened; innodb_file_format cannot be set to Cheetah

5.4.7.2.2. Compatibility Check When a Table Is Opened

When a table is first accessed, InnoDB (including some releases prior to InnoDB 1.0) checks that the file format of the tablespace in which the table is stored is fully supported. This check prevents crashes or corruptions that would otherwise occur when tables using a too new data structure are encountered.

All tables using any file format supported by a release can be read or written (assuming the user has sufficient privileges). The setting of the system configuration parameter innodb_file_format can prevent creating a new table that uses specific file formats, even if they are supported by a given release. Such a setting might be used to preserve backward compatibility, but it does not prevent accessing any table that uses any supported format.

As noted in Named File Formats, versions of MySQL older than 5.0.21 cannot reliably use database files created by newer versions if a new file format was used when a table was created. To prevent various error conditions or corruptions, InnoDB checks file format compatibility when it opens a file (for example, upon first access to a table). If the currently running version of InnoDB does not support the file format identified by the table type in the InnoDB data dictionary, MySQL reports the following error:

ERROR 1146 (42S02): Table 'test.t1' doesn't exist

InnoDB also writes a message to the error log:

InnoDB: table test/t1: unknown table type 33

The table type should be equal to the tablespace flags, which contains the file format version as discussed in Section 5.4.7.3, “Identifying the File Format in Use”.

Versions of InnoDB prior to MySQL 4.1 did not include table format identifiers in the database files, and versions prior to MySQL 5.0.21 did not include a table format compatibility check. Therefore, there is no way to ensure proper operations if a table in a too new format is used with versions of InnoDB prior to 5.0.21.

The file format management capability in InnoDB 1.0 and higher (tablespace tagging and run-time checks) allows InnoDB to verify as soon as possible that the running version of software can properly process the tables existing in the database.

If you permit InnoDB to open a database containing files in a format it does not support (by setting the parameter innodb_file_format_check to OFF), the table-level checking described in this section still applies.

Users are strongly urged not to use database files that contain Barracuda file format tables with releases of InnoDB older than the InnoDB Plugin in MySQL 5.1. It is possible to downgrade such tables to the Antelope format with the procedure described in Section 5.4.7.4, “Downgrading the File Format”.

5.4.7.3. Identifying the File Format in Use

After you enable a given innodb_file_format, this change applies only to newly created tables rather than existing ones. If you do create a new table, the tablespace containing the table is tagged with the earliest or simplest file format that is required for the table's features. For example, if you enable file format Barracuda, and create a new table that is not compressed and does not use ROW_FORMAT=DYNAMIC, the new tablespace that contains the table is tagged as using file format Antelope.

It is easy to identify the file format used by a given tablespace or table. The table uses the Barracuda format if the Row_format reported by SHOW TABLE STATUS or INFORMATION_SCHEMA.TABLES is one of 'Compressed' or 'Dynamic'. (The Row_format is a separate column; ignore the contents of the Create_options column, which may contain the string ROW_FORMAT.) If the table in a tablespace uses neither of those features, the file uses the format supported by prior releases of InnoDB, now called file format Antelope. In that case, the Row_format is one of 'Redundant' or 'Compact'.

Internal Details

InnoDB has two different file formats (Antelope and Barracuda) and four different row formats (Redundant, Compact, Dynamic, and Compressed). The Antelope file format contains Redundant and Compact row formats. A tablespace that uses the Barracuda file format uses either the Dynamic or Compressed row format.

File and row format information is written in the tablespace flags (a 32-bit number) in the *.ibd file in the 4 bytes starting at position 54 of the file, most significant byte first (the first byte of the file is byte zero). On some systems, you can display these bytes in hexadecimal with the command od -t x1 -j 54 -N 4 tablename.ibd. If all bytes are zero, the tablespace uses the Antelope file format, which is the format used by the standard InnoDB storage engine up to version 5.1. The system tablespace will always have zero in the tablespace flags.

The first 10 bits of the tablespace flags can be described this way:

  • Bit 0: Zero for Antelope, and bits 1 to 5 will also be zero. One for Barracuda, and bits 1 to 5 may be set.

  • Bits 1 to 4: A 4 bit number representing the compressed page size. 0 = not compressed, 1 = 1k, 2 = 2k, 3 = 4k, 4 = 8k.

  • Bit 5: Same value as Bit 0, zero for Antelope, one for Barracuda. If bits 0 and 5 are set and bits 1 to 4 are not, the row format is Dynamic.

  • Bits 6 to 9: A 4-bit number indicating the physical page size of the tablespace. 0 = 16k (original default), 3 = 4k, 4 = 8k, 5 = 16k. These are the only valid values for MySQL 5.6 and later.

  • Bit 10: Tablespace location. 0 = default, 1 = used DATA DIRECTORY in CREATE TABLE to choose the tablespace location.

Note

Tablespace flags are similar to table flags found in the InnoDB dictionary table, SYS_TABLES. They differ in the meaning of bit 0 and bits 6 to 10. Table flags will set bit 0 to one if the row format of a particular table is Compact. Tablespace flags cannot do that since the system tablespace can contain both Redundant and Compact row formats. So, for tablespace flags, bit 0 and bit 5 are always the same value.

Table flags can be viewed by issuing the command:

SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES;

The first 7 bits of the table flags can be described this way:

  • Bit 0: Zero for Redundant row format, and bits 1 to 5 will be zero. One for Compact row format, and bits 1 to 5 may be set.

  • Bits 1 to 4: A 4 bit number representing the compressed page size. 0 = not compressed, 1 = 1k, 2 = 2k, 3 = 4k, 4 = 8k.

  • Bit 5: Zero for Antelope file format, and one for Barracuda file format. If bit 5 is set and bits 1 to 4 are not, the row format is Dynamic. Also, if bit 5 is set, bit 0 must also be set.

  • Bit 6: Tablespace location. 0 = default, 1 = DATA DIRECTORY was used in CREATE TABLE to choose a tablespace location.

If bits 7 to 31 are not zero, the table is corrupt or the SYS_TABLES record is corrupt, and the table cannot be used.

5.4.7.4. Downgrading the File Format

Each InnoDB tablespace file (with a name matching *.ibd) is tagged with the file format used to create its table and indexes. The way to downgrade the tablespace is to re-create the table and its indexes. The easiest way to recreate a table and its indexes is to use the command:

ALTER TABLE t ROW_FORMAT=COMPACT;

on each table that you want to downgrade. The COMPACT row format uses the file format Antelope. It was introduced in MySQL 5.0.3.

5.4.7.5. Future InnoDB File Formats

The file format used by the standard built-in InnoDB in MySQL 5.1 is the Antelope format. The file format introduced with InnoDB Plugin 1.0 is the Barracuda format. Although no new features have been announced that would require additional new file formats, the InnoDB file format mechanism allows for future enhancements.

For the sake of completeness, these are the file format names that might be used for future file formats: Antelope, Barracuda, Cheetah, Dragon, Elk, Fox, Gazelle, Hornet, Impala, Jaguar, Kangaroo, Leopard, Moose, Nautilus, Ocelot, Porpoise, Quail, Rabbit, Shark, Tiger, Urchin, Viper, Whale, Xenops, Yak and Zebra. These file formats correspond to the internal identifiers 0..25.

5.4.8. How InnoDB Stores Variable-Length Columns

This section discusses how certain InnoDB features, such as table compression and off-page storage of long columns, are controlled by the ROW_FORMAT clause of the CREATE TABLE statement. It discusses considerations for choosing the right row format and compatibility of row formats between MySQL releases.

5.4.8.1. Overview of InnoDB Row Storage

The storage for rows and associated columns affects performance for queries and DML operations. As more rows fit into a single disk page, queries and index lookups can work faster, less cache memory is required in the InnoDB buffer pool, and less I/O is required to write out updated values for the numeric and short string columns.

The data in each InnoDB table is divided into pages. The pages that make up each table are arranged in a tree data structure called a B-tree index. Table data and secondary indexes both use this type of structure. The B-tree index that represents an entire table is known as the clustered index, which is organized according to the primary key columns. The nodes of the index data structure contain the values of all the columns in that row (for the clustered index) or the index columns and the primary key columns (for secondary indexes).

Variable-length columns are an exception to this rule. Columns such as BLOB and VARCHAR that are too long to fit on a B-tree page are stored on separately allocated disk pages called overflow pages. We call such columns off-page columns. The values of these columns are stored in singly-linked lists of overflow pages, and each such column has its own list of one or more overflow pages. In some cases, all or a prefix of the long column value is stored in the B-tree, to avoid wasting storage and to eliminate the need to read a separate page.

This section describes the clauses you can use with the CREATE TABLE and ALTER TABLE statements to control how these variable-length columns are represented: ROW_FORMAT and KEY_BLOCK_SIZE. To use these clauses, you might also need to change the settings for the innodb_file_per_table and innodb_file_format configuration options.

5.4.8.2. Specifying the Row Format for a Table

You specify the row format for a table with the ROW_FORMAT clause of the CREATE TABLE and ALTER TABLE statements.
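
For example, assuming innodb_file_per_table is enabled (and, for the DYNAMIC and COMPRESSED formats, innodb_file_format=Barracuda, as described in the next section), the clause might be used like this; the table name t1 is a placeholder:

CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 VARCHAR(1000)) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
ALTER TABLE t1 ROW_FORMAT=COMPACT;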

5.4.8.3. DYNAMIC and COMPRESSED Row Formats

This section discusses the DYNAMIC and COMPRESSED row formats for InnoDB tables. You can only create these kinds of tables when the innodb_file_format configuration option is set to Barracuda. (The Barracuda file format also allows the COMPACT and REDUNDANT row formats.)

When a table is created with ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED, long column values are stored fully off-page, and the clustered index record contains only a 20-byte pointer to the overflow page.

Whether any columns are stored off-page depends on the page size and the total size of the row. When the row is too long, InnoDB chooses the longest columns for off-page storage until the clustered index record fits on the B-tree page.

The DYNAMIC row format maintains the efficiency of storing the entire row in the index node if it fits (as do the COMPACT and REDUNDANT formats), but this new format avoids the problem of filling B-tree nodes with a large number of data bytes of long columns. The DYNAMIC format is based on the idea that if a portion of a long data value is stored off-page, it is usually most efficient to store all of the value off-page. With DYNAMIC format, shorter columns are likely to remain in the B-tree node, minimizing the number of overflow pages needed for any given row.

The COMPRESSED row format uses similar internal details for off-page storage as the DYNAMIC row format, with additional storage and performance considerations from the table and index data being compressed and using smaller page sizes. With the COMPRESSED row format, the option KEY_BLOCK_SIZE controls how much column data is stored in the clustered index, and how much is placed on overflow pages. For full details about the COMPRESSED row format, see Section 5.4.6, “Working with InnoDB Compressed Tables”.
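
As a minimal sketch (the table name t2 is a placeholder), the following statements show the prerequisite settings and a COMPRESSED table that uses 8KB compressed pages:

SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_file_format=Barracuda;
CREATE TABLE t2 (c1 INT PRIMARY KEY, c2 BLOB)
  ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;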

5.4.8.4. COMPACT and REDUNDANT Row Formats

Early versions of InnoDB used an unnamed file format (now called Antelope) for database files. With that file format, tables are defined with ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT. InnoDB stores up to the first 768 bytes of variable-length columns (such as BLOB and VARCHAR) in the index record within the B-tree node, with the remainder stored on the overflow pages.

To preserve compatibility with those prior versions, tables created with the newest InnoDB default to the COMPACT row format. See Section 5.4.8.3, “DYNAMIC and COMPRESSED Row Formats” for information about the newer DYNAMIC and COMPRESSED row formats.

With the Antelope file format, if the value of a column is 768 bytes or less, no overflow page is needed, and some savings in I/O may result, since the value is in the B-tree node. This works well for relatively short BLOBs, but may cause B-tree nodes to fill with data rather than key values, reducing their efficiency. Tables with many BLOB columns could cause B-tree nodes to become too full of data, and contain too few rows, making the entire index less efficient than if the rows were shorter or if the column values were stored off-page.

5.5. Online DDL for InnoDB Tables

You can perform several kinds of DDL operations on InnoDB tables online: that is, the operation allows DML operations and queries on the table while the DDL is in progress, is performed in-place without rebuilding the entire table, or both. This enhancement has the following benefits:

  • It improves responsiveness and availability in busy production environments, where making a table unavailable for minutes or hours whenever you modify its indexes or column definitions is not practical.

  • It lets you adjust the balance between performance and concurrency during the DDL operation, by choosing whether to block access to the table entirely (LOCK=EXCLUSIVE clause), allow queries but not DML (LOCK=SHARED clause), or allow full query and DML access to the table (LOCK=NONE clause). When you omit the LOCK clause or specify LOCK=DEFAULT, MySQL allows as much concurrency as possible depending on the type of operation.

  • By doing the changes in-place where possible, rather than creating a new copy of the table, it avoids temporary increases in disk space usage and the I/O overhead of copying the table and reconstructing all the secondary indexes.

5.5.1. Overview of Online DDL

Historically, many DDL operations on InnoDB tables were expensive. Many ALTER TABLE operations worked by creating a new, empty table defined with the requested table options and indexes, then copying the existing rows to the new table one-by-one, updating the indexes as the rows were inserted. After all rows from the original table were copied, the old table was dropped and the copy was renamed with the name of the original table.

MySQL 5.5, and MySQL 5.1 with the InnoDB Plugin, optimized CREATE INDEX and DROP INDEX to avoid the table-copying behavior. That feature was known as Fast Index Creation. MySQL 5.6 enhances many other types of ALTER TABLE operations to avoid copying the table. Another enhancement allows SELECT queries and INSERT, UPDATE, and DELETE (DML) statements to proceed while the table is being altered. In MySQL 5.7, ALTER TABLE RENAME INDEX was also enhanced to avoid table copying. This combination of features is now known as online DDL.

This new mechanism also means that you can generally speed the overall process of creating and loading a table and associated indexes by creating the table without any secondary indexes, then adding the secondary indexes after the data is loaded.

Although no syntax changes are required in the CREATE INDEX or DROP INDEX commands, some factors affect the performance, space usage, and semantics of this operation (see Section 5.5.9, “Limitations of Online DDL”).

The online DDL enhancements in MySQL 5.6 improve many DDL operations that formerly required a table copy, blocked DML operations on the table, or both. Table 5.8, “Summary of Online Status for DDL Operations” shows the variations of the ALTER TABLE statement and shows how the online DDL feature applies to each one.

With the exception of ALTER TABLE partitioning clauses, online DDL operations for partitioned InnoDB tables follow the same rules that apply to regular InnoDB tables. For more information, see Section 5.5.8, “Online DDL for Partitioned InnoDB Tables”.

  • The In-Place? column shows which operations allow the ALGORITHM=INPLACE clause; the preferred value is Yes.

  • The Copies Table? column shows which operations are able to avoid the expensive table-copying operation; the preferred value is No. This column is mostly the reverse of the In-Place? column, except that a few operations allow ALGORITHM=INPLACE but still involve some amount of table copying.

  • The Allows Concurrent DML? column shows which operations can be performed fully online; the preferred value is Yes. You can specify LOCK=NONE to assert that full concurrency is allowed during the DDL, but MySQL automatically allows this level of concurrency when possible. When concurrent DML is allowed, concurrent queries are also always allowed.

  • The Allows Concurrent Queries? column shows which DDL operations allow queries on the table while the operation is in progress; the preferred value is Yes. Concurrent query is allowed during all online DDL operations. It is shown with Yes listed for all cells, for reference purposes. You can specify LOCK=SHARED to assert that concurrent queries are allowed during the DDL, but MySQL automatically allows this level of concurrency when possible.

  • The Notes column explains any exceptions to the yes/no values of the other columns, such as when the answer depends on the setting of a configuration option or some other clause in the DDL statement. The values Yes* and No* indicate that an answer depends on these additional notes.

Table 5.8. Summary of Online Status for DDL Operations

Operation | In-Place? | Copies Table? | Allows Concurrent DML? | Allows Concurrent Query? | Notes
CREATE INDEX, ADD INDEX | Yes* | No* | Yes | Yes | Some restrictions for FULLTEXT index; see next row. Currently, the operation is not in-place (that is, it copies the table) if the same index being created was also dropped by an earlier clause in the same ALTER TABLE statement.
ADD FULLTEXT INDEX | Yes | No* | No | Yes | Creating the first FULLTEXT index for a table involves a table copy, unless there is a user-supplied FTS_DOC_ID column. Subsequent FULLTEXT indexes on the same table can be created in-place.
RENAME INDEX | Yes | No | Yes | Yes |
DROP INDEX | Yes | No | Yes | Yes |
Set default value for a column | Yes | No | Yes | Yes | Modifies .frm file only, not the data file.
Change auto-increment value for a column | Yes | No | Yes | Yes | Modifies a value stored in memory, not the data file.
Add a foreign key constraint | Yes* | No* | Yes | Yes | To avoid copying the table, disable foreign_key_checks during constraint creation.
Drop a foreign key constraint | Yes | No | Yes | Yes | The foreign_key_checks option can be enabled or disabled.
Rename a column | Yes* | No* | Yes* | Yes | To allow concurrent DML, keep the same data type and only change the column name.
Add a column | Yes | Yes | Yes* | Yes | Concurrent DML is not allowed when adding an auto-increment column. Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.
Drop a column | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.
Reorder columns | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.
Change ROW_FORMAT property | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.
Change KEY_BLOCK_SIZE property | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.
Make column NULL | Yes | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.
Make column NOT NULL | Yes* | Yes | Yes | Yes | When SQL_MODE includes strict_trans_tables or strict_all_tables, the operation fails if the column contains any NULL values. Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation.
Change data type of column | No* | Yes* | No | Yes | Exception: VARCHAR size may be increased using online ALTER TABLE. See Section 14.2.5.12, “Increase VARCHAR Size Online”.
Add primary key | Yes* | Yes | Yes | Yes | Although ALGORITHM=INPLACE is allowed, the data is reorganized substantially, so it is still an expensive operation. ALGORITHM=INPLACE is not allowed under certain conditions if columns have to be converted to NOT NULL. See Example 5.9, “Creating and Dropping the Primary Key”.
Drop primary key and add another | Yes | Yes | Yes | Yes | ALGORITHM=INPLACE is only allowed when you add a new primary key in the same ALTER TABLE; the data is reorganized substantially, so it is still an expensive operation.
Drop primary key | No | Yes | No | Yes | Restrictions apply when you drop a primary key without adding a new one in the same ALTER TABLE statement.
Convert character set | No | Yes | No | Yes | Rebuilds the table if the new character encoding is different.
Specify character set | No | Yes | No | Yes | Rebuilds the table if the new character encoding is different.
Rebuild with FORCE option | No | Yes | No | Yes | Acts like the ALGORITHM=COPY clause or the setting old_alter_table=1.

The following sections show the basic syntax and usage notes related to online DDL for each of the major operations that can be performed with concurrent DML, in-place, or both:

Secondary Indexes

  • Create secondary indexes: CREATE INDEX name ON table (col_list) or ALTER TABLE table ADD INDEX name (col_list). (Creating a FULLTEXT index still requires locking the table.)

  • Drop secondary indexes: DROP INDEX name ON table; or ALTER TABLE table DROP INDEX name

Creating and dropping secondary indexes on InnoDB tables skips the table-copying behavior, the same as in MySQL 5.5 and MySQL 5.1 with the InnoDB Plugin.

In MySQL 5.6 and higher, the table remains available for read and write operations while the index is being created or dropped. The CREATE INDEX or DROP INDEX statement only finishes after all transactions that are accessing the table are completed, so that the initial state of the index reflects the most recent contents of the table. Previously, modifying the table while an index was being created or dropped typically resulted in a deadlock that cancelled the INSERT, UPDATE, or DELETE statement on the table.

Column Properties

  • Set a default value for a column: ALTER TABLE tbl ALTER COLUMN col SET DEFAULT literal or ALTER TABLE tbl ALTER COLUMN col DROP DEFAULT

    The default values for columns are stored in the .frm file for the table, not the InnoDB data dictionary.

  • Changing the auto-increment value for a column: ALTER TABLE table AUTO_INCREMENT=next_value;

    Especially in a distributed system using replication or sharding, you sometimes reset the auto-increment counter for a table to a specific value. The next row inserted into the table uses the specified value for its auto-increment column. You might also use this technique in a data warehousing environment where you periodically empty all the tables and reload them, and you can restart the auto-increment sequence from 1.

  • Renaming a column: ALTER TABLE tbl CHANGE old_col_name new_col_name datatype

    When you keep the same data type and [NOT] NULL attribute, only changing the column name, this operation can always be performed online.

    As part of this enhancement, you can now rename a column that is part of a foreign key constraint, which was not allowed before. The foreign key definition is automatically updated to use the new column name. Renaming a column participating in a foreign key only works with the in-place mode of ALTER TABLE. If you use the ALGORITHM=COPY clause, or some other condition causes the command to use ALGORITHM=COPY behind the scenes, the ALTER TABLE statement will fail.

  • Extending VARCHAR size using an in-place ALTER TABLE statement, as in this example:

    ALTER TABLE t1 ALGORITHM=INPLACE, CHANGE COLUMN c1 c1 VARCHAR(255);

    The number of length bytes required by a VARCHAR column must remain the same. For VARCHAR values of 0 to 255, one length byte is required to encode the value. For VARCHAR values of 256 bytes or more, two length bytes are required. As a result, in-place ALTER TABLE only supports increasing VARCHAR size from 0 to 255 bytes or increasing VARCHAR size from a value equal to or greater than 256 bytes. In-place ALTER TABLE does not support increasing VARCHAR size from less than 256 bytes to a value equal to or greater than 256 bytes. In this case, the number of required length bytes would change from 1 to 2, which is only supported by a table copy (ALGORITHM=COPY).

    Decreasing VARCHAR size using in-place ALTER TABLE is not supported. Decreasing VARCHAR size requires a table copy (ALGORITHM=COPY).
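
    For instance, under these rules (assuming a single-byte character set and a column c2 originally declared as VARCHAR(100); the names are placeholders), the first statement below can run in-place, while the second crosses the 255-byte boundary and requires a table copy:

    ALTER TABLE t1 ALGORITHM=INPLACE, CHANGE COLUMN c2 c2 VARCHAR(200);
    ALTER TABLE t1 ALGORITHM=COPY, CHANGE COLUMN c2 c2 VARCHAR(300);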

Foreign Keys

  • Adding or dropping a foreign key constraint:

    ALTER TABLE tbl1 ADD CONSTRAINT fk_name FOREIGN KEY index (col1) REFERENCES tbl2(col2) referential_actions;
    ALTER TABLE tbl DROP FOREIGN KEY fk_name;
    

    Dropping a foreign key can be performed online with the foreign_key_checks option enabled or disabled. Creating a foreign key online requires foreign_key_checks to be disabled.

    If you do not know the names of the foreign key constraints on a particular table, issue the following statement and find the constraint name in the CONSTRAINT clause for each foreign key:

    show create table table\G
    

    Or, query the information_schema.table_constraints table and use the constraint_name and constraint_type columns to identify the foreign key names.
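
    For example, a query along these lines (the schema and table names are placeholders) lists the foreign key constraint names for one table:

    SELECT constraint_name
      FROM information_schema.table_constraints
      WHERE table_schema = 'test' AND table_name = 'tbl1'
        AND constraint_type = 'FOREIGN KEY';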

    As a consequence of this enhancement, you can now also drop a foreign key and its associated index in a single statement, which previously required separate statements in a strict order:

    ALTER TABLE table DROP FOREIGN KEY constraint, DROP INDEX index;
    

If foreign keys are already present in the table being altered (that is, it is a child table containing any FOREIGN KEY ... REFERENCES clauses), additional restrictions apply to online DDL operations, even those not directly involving the foreign key columns:

  • Concurrent DML is disallowed during online DDL operations on such child tables. (This restriction is being evaluated as a bug and might be lifted.)

  • An ALTER TABLE on the child table could also wait for another transaction to commit, if a change to the parent table caused associated changes in the child table through an ON UPDATE or ON DELETE clause using the CASCADE or SET NULL parameters.

In the same way, if a table is the parent table in a foreign key relationship, even though it does not contain any FOREIGN KEY clauses, it could wait for the ALTER TABLE to complete if an INSERT, UPDATE, or DELETE statement caused an ON UPDATE or ON DELETE action in the child table.

Notes on ALGORITHM=COPY

Any ALTER TABLE operation run with the ALGORITHM=COPY clause prevents concurrent DML operations. Concurrent queries are still allowed. That is, a table-copying operation always includes at least the concurrency restrictions of LOCK=SHARED (allow queries but not DML). You can further restrict concurrency for such operations by specifying LOCK=EXCLUSIVE (prevent DML and queries).

Concurrent DML but Table Copy Still Required

Some other ALTER TABLE operations allow concurrent DML and are faster than in MySQL 5.5 and prior releases: the table-copying operation is optimized, even though a table copy is still required:

  • Adding, dropping, or reordering columns.

  • Adding or dropping a primary key.

  • Changing the ROW_FORMAT or KEY_BLOCK_SIZE properties for a table.

  • Changing the nullable status for a column.

Note

As your database schema evolves with new columns, data types, constraints, indexes, and so on, keep your CREATE TABLE statements up to date with the latest table definitions. Even with the performance improvements of online DDL, it is more efficient to create stable database structures at the beginning, rather than creating part of the schema and then issuing ALTER TABLE statements afterward.

The main exception to this guideline is for secondary indexes on tables with large numbers of rows. It is typically most efficient to create the table with all details specified except the secondary indexes, load the data, then create the secondary indexes. You can use the same technique with foreign keys (load the data first, then set up the foreign keys) if you know the initial data is clean and do not need consistency checks during the loading process.

Whatever sequence of CREATE TABLE, CREATE INDEX, ALTER TABLE, and similar statements went into putting a table together, you can capture the SQL needed to reconstruct the current form of the table by issuing the statement SHOW CREATE TABLE table\G (uppercase \G required for tidy formatting). This output shows clauses such as numeric precision, NOT NULL, and CHARACTER SET that are sometimes added behind the scenes, which you might otherwise leave out when cloning the table on a new system or setting up foreign key columns with an identical type.

5.5.2. Performance and Concurrency Considerations for Online DDL

Online DDL improves several aspects of MySQL operation, such as performance, concurrency, availability, and scalability:

  • Because queries and DML operations on the table can proceed while the DDL is in progress, applications that access the table are more responsive. Reduced locking and waiting for other resources throughout the MySQL server leads to greater scalability, even for operations not involving the table being altered.

  • For in-place operations, by avoiding the disk I/O and CPU cycles to rebuild the table, you minimize the overall load on the database and maintain good performance and high throughput during the DDL operation.

  • For in-place operations, because less data is read into the buffer pool than if all the data was copied, you avoid purging frequently accessed data from memory, which formerly could cause a temporary performance dip after a DDL operation.

If an online operation requires temporary files, InnoDB creates them in the temporary file directory, not the directory containing the original table. If this directory is not large enough to hold such files, you may need to set the tmpdir system variable to a different directory. (See Section C.5.4.4, “Where MySQL Stores Temporary Files”.)

Locking Options for Online DDL

While an InnoDB table is being changed by a DDL operation, the table may or may not be locked, depending on the internal workings of that operation and the LOCK clause of the ALTER TABLE statement. By default, MySQL uses as little locking as possible during a DDL operation; you specify the clause either to make the locking more restrictive than it normally would be (thus limiting concurrent DML, or DML and queries), or to ensure that some expected degree of locking is allowed for an operation. If the LOCK clause specifies a level of locking that is not available for that specific kind of DDL operation, such as LOCK=SHARED or LOCK=NONE while creating or dropping a primary key, the clause works like an assertion, causing the statement to fail with an error. The following list shows the different possibilities for the LOCK clause, from the most permissive to the most restrictive:

  • For DDL operations with LOCK=NONE, both queries and concurrent DML are allowed. This clause makes the ALTER TABLE fail if the kind of DDL operation cannot be performed with the requested type of locking, so specify LOCK=NONE if keeping the table fully available is vital and it is OK to cancel the DDL if that is not possible. For example, you might use this clause in DDLs for tables involving customer signups or purchases, to avoid making those tables unavailable by mistakenly issuing an expensive ALTER TABLE statement.

  • For DDL operations with LOCK=SHARED, any writes to the table (that is, DML operations) are blocked, but the data in the table can be read. This clause makes the ALTER TABLE fail if the kind of DDL operation cannot be performed with the requested type of locking, so specify LOCK=SHARED if keeping the table available for queries is vital and it is OK to cancel the DDL if that is not possible. For example, you might use this clause in DDLs for tables in a data warehouse, where it is OK to delay data load operations until the DDL is finished, but queries cannot be delayed for long periods.

  • For DDL operations with LOCK=DEFAULT, or with the LOCK clause omitted, MySQL uses the lowest level of locking that is available for that kind of operation, allowing concurrent queries, DML, or both wherever possible. This is the setting to use when making pre-planned, pre-tested changes that you know will not cause any availability problems based on the workload for that table.

  • For DDL operations with LOCK=EXCLUSIVE, both queries and DML operations are blocked. This clause makes the ALTER TABLE fail if the kind of DDL operation cannot be performed with the requested type of locking, so specify LOCK=EXCLUSIVE if the primary concern is finishing the DDL in the shortest time possible, and it is OK to make applications wait when they try to access the table. You might also use LOCK=EXCLUSIVE if the server is supposed to be idle, to avoid unexpected accesses to the table.
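
For example, here is a sketch of asserting each locking level on the same kind of operation (the table, index, and column names are placeholders); each statement either runs with at least the requested concurrency or fails immediately with an error:

ALTER TABLE tbl_name ADD INDEX i1 (c1), LOCK=NONE;       -- allow queries and DML
ALTER TABLE tbl_name ADD INDEX i2 (c2), LOCK=SHARED;     -- allow queries, block DML
ALTER TABLE tbl_name ADD INDEX i3 (c3), LOCK=EXCLUSIVE;  -- block queries and DML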

An online DDL statement for an InnoDB table always waits for currently executing transactions that are accessing the table to commit or roll back, because it requires exclusive access to the table for a brief period while the DDL statement is being prepared. Likewise, it requires exclusive access to the table for a brief time before finishing. Thus, an online DDL statement waits for any transactions that are started while the DDL is in progress, and query or modify the table, to commit or roll back before the DDL completes.

Because there is some processing work involved with recording the changes made by concurrent DML operations, then applying those changes at the end, an online DDL operation could take longer overall than the old-style mechanism that blocks table access from other sessions. The reduction in raw performance is balanced against better responsiveness for applications that use the table. When evaluating the ideal techniques for changing table structure, consider end-user perception of performance, based on factors such as load times for web pages.

A newly created InnoDB secondary index contains only the committed data in the table at the time the CREATE INDEX or ALTER TABLE statement finishes executing. It does not contain any uncommitted values, old versions of values, or values marked for deletion but not yet removed from the old index.

Performance of In-Place versus Table-Copying DDL Operations

The raw performance of an online DDL operation is largely determined by whether the operation is performed in-place, or requires copying and rebuilding the entire table. See Table 5.8, “Summary of Online Status for DDL Operations” to see what kinds of operations can be performed in-place, and any requirements for avoiding table-copy operations.

The performance speedup from in-place DDL applies to operations on secondary indexes, not to the primary key index. The rows of an InnoDB table are stored in a clustered index organized based on the primary key, forming what some database systems call an index-organized table. Because the table structure is so closely tied to the primary key, redefining the primary key still requires copying the data.

When an operation on the primary key uses ALGORITHM=INPLACE, even though the data is still copied, it is more efficient than using ALGORITHM=COPY because:

  • No undo logging or associated redo logging is required for ALGORITHM=INPLACE. These operations add overhead to DDL statements that use ALGORITHM=COPY.

  • The secondary index entries are pre-sorted, and so can be loaded in order.

  • The change buffer is not used, because there are no random-access inserts into the secondary indexes.

To judge the relative performance of online DDL operations, you can run such operations on a big InnoDB table using current and earlier versions of MySQL. You can also run all the performance tests under the latest MySQL version, simulating the previous DDL behavior for the before results, by setting the old_alter_table system variable. Issue the statement set old_alter_table=1 in the session, and measure DDL performance to record the before figures. Then set old_alter_table=0 to re-enable the newer, faster behavior, and run the DDL operations again to record the after figures.

For a basic idea of whether a DDL operation does its changes in-place or performs a table copy, look at the rows affected value displayed after the command finishes. For example, here are lines you might see after doing different types of DDL operations:

  • Changing the default value of a column (super-fast, does not affect the table data at all):

    Query OK, 0 rows affected (0.07 sec)
  • Adding an index (takes time, but 0 rows affected shows that the table is not copied):

    Query OK, 0 rows affected (21.42 sec)
  • Changing the data type of a column (takes substantial time and does require rebuilding all the rows of the table):

    Query OK, 1671168 rows affected (1 min 35.54 sec)
    Note

    Changing the data type of a column requires rebuilding all the rows of the table with the exception of changing VARCHAR size, which may be performed using online ALTER TABLE. See Section 14.2.5.12, “Increase VARCHAR Size Online”.

For example, before running a DDL operation on a big table, you might check whether the operation will be fast or slow as follows:

  1. Clone the table structure.

  2. Populate the cloned table with a tiny amount of data.

  3. Run the DDL operation on the cloned table.

  4. Check whether the rows affected value is zero or not. A non-zero value means the operation will require rebuilding the entire table, which might require special planning. For example, you might do the DDL operation during a period of scheduled downtime, or on each replication slave server one at a time.
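
A minimal sketch of this procedure, assuming a source table named big_table in the test schema and an arbitrary column addition as the DDL under test:

USE test;
CREATE TABLE ddl_probe LIKE big_table;                    -- 1. clone the table structure
INSERT INTO ddl_probe SELECT * FROM big_table LIMIT 100;  -- 2. populate with a tiny sample
ALTER TABLE ddl_probe ADD COLUMN probe_col INT;           -- 3. run the DDL; note "rows affected"
DROP TABLE ddl_probe;                                     -- clean up after checking the result of step 3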

For a deeper understanding of the reduction in MySQL processing, examine the performance_schema and INFORMATION_SCHEMA tables related to InnoDB before and after DDL operations, to see the number of physical reads, writes, memory allocations, and so on.

5.5.3. SQL Syntax for Online DDL

Typically, you do not need to do anything special to enable online DDL when using the ALTER TABLE statement for InnoDB tables. See Table 5.8, “Summary of Online Status for DDL Operations” for the kinds of DDL operations that can be performed in-place, allowing concurrent DML, or both. Some variations require particular combinations of configuration settings or ALTER TABLE clauses.

You can control the various aspects of a particular online DDL operation by using the LOCK and ALGORITHM clauses of the ALTER TABLE statement. These clauses come at the end of the statement, separated from the table and column specifications by commas. The LOCK clause is useful for fine-tuning the degree of concurrent access to the table. The ALGORITHM clause is primarily intended for performance comparisons and as a fallback to the older table-copying behavior in case you encounter any issues with existing DDL code. For example:

  • To avoid accidentally making the table unavailable for reads, writes, or both, you could specify a clause on the ALTER TABLE statement such as LOCK=NONE (allow both reads and writes) or LOCK=SHARED (allow reads). The operation halts immediately if the requested level of concurrency is not available.

  • To compare performance, you could run one statement with ALGORITHM=INPLACE and another with ALGORITHM=COPY, as an alternative to setting the old_alter_table configuration option.

  • To avoid the chance of tying up the server by running an ALTER TABLE that copies the table, you could include ALGORITHM=INPLACE so the statement halts immediately if it cannot use the in-place mechanism. See Table 5.8, “Summary of Online Status for DDL Operations” for a list of the DDL operations that can or cannot be performed in-place.
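
For example, a sketch combining both clauses in a single statement (the table, index, and column names are placeholders); it fails immediately if either the in-place algorithm or the requested lock level is unavailable for this kind of operation:

ALTER TABLE tbl_name ADD INDEX i_name (col_name), ALGORITHM=INPLACE, LOCK=NONE;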

See Section 5.5.2, “Performance and Concurrency Considerations for Online DDL” for more details about the LOCK clause. For full examples of using online DDL, see Section 5.5.5, “Examples of Online DDL”.

5.5.4. Combining or Separating DDL Statements

Before the introduction of online DDL, it was common practice to combine many DDL operations into a single ALTER TABLE statement. Because each ALTER TABLE statement involved copying and rebuilding the table, it was more efficient to make several changes to the same table at once, since those changes could all be done with a single rebuild operation for the table. The downside was that SQL code involving DDL operations was harder to maintain and to reuse in different scripts. If the specific changes were different each time, you might have to construct a new complex ALTER TABLE for each slightly different scenario.

For DDL operations that can be done in-place, as shown in Table 5.8, “Summary of Online Status for DDL Operations”, now you can separate them into individual ALTER TABLE statements for easier scripting and maintenance, without sacrificing efficiency. For example, you might take a complicated statement such as:

alter table t1 add index i1(c1), add unique index i2(c2), change c4_old_name c4_new_name integer unsigned not null;

and break it down into simpler parts that can be tested and performed independently, such as:

alter table t1 add index i1(c1);
alter table t1 add unique index i2(c2);
alter table t1 change c4_old_name c4_new_name integer unsigned not null;

You might still use multi-part ALTER TABLE statements for:

  • Operations that must be performed in a specific sequence, such as creating an index followed by a foreign key constraint that uses that index.

  • Operations that all use the same specific LOCK clause and that you want to either succeed or fail as a group.

  • Operations that cannot be performed in-place, that is, that still copy and rebuild the table.

  • Operations for which you specify ALGORITHM=COPY or old_alter_table=1, to force the table-copying behavior if needed for precise backward-compatibility in specialized scenarios.

5.5.5. Examples of Online DDL

Here are code examples showing some operations whose performance, concurrency, and scalability are improved by the latest online DDL enhancements.

Example 5.1. Schema Setup Code for Online DDL Experiments

Here is the code that sets up the initial tables used in these demonstrations:

/* 
Setup code for the online DDL demonstration:
- Set up some config variables.
- Create 2 tables that are clones of one of the INFORMATION_SCHEMA tables
  that always has some data. The "small" table has a couple of thousand rows.
  For the "big" table, keep doubling the data until it reaches over a million rows.
- Set up a primary key for the sample tables, since we are demonstrating InnoDB aspects.
*/ 

set autocommit = 0;
set foreign_key_checks = 1;
set global innodb_file_per_table = 1;
set old_alter_table=0;
prompt mysql: 

use test;

\! echo "Setting up 'small' table:"
drop table if exists small_table;
create table small_table as select * from information_schema.columns;
alter table small_table add id int unsigned not null primary key auto_increment;
select count(id) from small_table;

\! echo "Setting up 'big' table:"
drop table if exists big_table;
create table big_table as select * from information_schema.columns;
show create table big_table\G

insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
commit;

alter table big_table add id int unsigned not null primary key auto_increment;
select count(id) from big_table;

Running this code gives this output, condensed for brevity and with the most important points bolded:

Setting up 'small' table:
Query OK, 0 rows affected (0.01 sec)

Query OK, 1678 rows affected (0.13 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 1678 rows affected (0.07 sec)
Records: 1678  Duplicates: 0  Warnings: 0

+-----------+
| count(id) |
+-----------+
|      1678 |
+-----------+
1 row in set (0.00 sec)

Setting up 'big' table:
Query OK, 0 rows affected (0.16 sec)

Query OK, 1678 rows affected (0.17 sec)
Records: 1678  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.09 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 3356 rows affected (0.07 sec)
Records: 3356  Duplicates: 0  Warnings: 0

Query OK, 6712 rows affected (0.17 sec)
Records: 6712  Duplicates: 0  Warnings: 0

Query OK, 13424 rows affected (0.44 sec)
Records: 13424  Duplicates: 0  Warnings: 0

Query OK, 26848 rows affected (0.63 sec)
Records: 26848  Duplicates: 0  Warnings: 0

Query OK, 53696 rows affected (1.72 sec)
Records: 53696  Duplicates: 0  Warnings: 0

Query OK, 107392 rows affected (3.02 sec)
Records: 107392  Duplicates: 0  Warnings: 0

Query OK, 214784 rows affected (6.28 sec)
Records: 214784  Duplicates: 0  Warnings: 0

Query OK, 429568 rows affected (13.25 sec)
Records: 429568  Duplicates: 0  Warnings: 0

Query OK, 859136 rows affected (28.16 sec)
Records: 859136  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.03 sec)

Query OK, 1718272 rows affected (1 min 9.22 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

+-----------+
| count(id) |
+-----------+
|   1718272 |
+-----------+
1 row in set (1.75 sec)

Example 5.2. Speed and Efficiency of CREATE INDEX and DROP INDEX

Here is a sequence of statements demonstrating the relative speed of CREATE INDEX and DROP INDEX statements. For a small table, the elapsed time is less than a second whether we use the fast or slow technique, so we look at the rows affected output to verify which operations can avoid the table rebuild. For a large table, the difference in efficiency is obvious because skipping the table rebuild saves substantial time.

\! clear

\! echo "=== Create and drop index (small table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type), algorithm=inplace;
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table, algorithm=inplace;

-- Compare against the older slower DDL.

\! echo "=== Create and drop index (small table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type), algorithm=copy;
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table, algorithm=copy;

-- In the above example, we examined the "rows affected" number,
-- ideally looking for a zero figure. Let's try again with a larger
-- sample size, where we'll see that the actual time taken can
-- vary significantly.

\! echo "=== Create and drop index (big table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type), algorithm=inplace;
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table, algorithm=inplace;

\! echo "=== Create and drop index (big table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type), algorithm=copy;
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table, algorithm=copy;

Running this code gives this output, condensed for brevity and with the most important points bolded:

Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (small table, new/fast technique) ===

Data size (kilobytes) before index created: 
384  data/test/small_table.ibd
Query OK, 0 rows affected (0.04 sec)
Records: 0  Duplicates: 0  Warnings: 0

Data size after index created: 
432  data/test/small_table.ibd
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (small table, old/slow technique) ===

Data size (kilobytes) before index created: 
432  data/test/small_table.ibd
Query OK, 1678 rows affected (0.12 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Data size after index created: 
448  data/test/small_table.ibd
Query OK, 1678 rows affected (0.10 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (big table, new/fast technique) ===

Data size (kilobytes) before index created: 
315392  data/test/big_table.ibd
Query OK, 0 rows affected (33.32 sec)
Records: 0  Duplicates: 0  Warnings: 0

Data size after index created: 
335872  data/test/big_table.ibd
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

=== Create and drop index (big table, old/slow technique) ===

Data size (kilobytes) before index created: 
335872  data/test/big_table.ibd
Query OK, 1718272 rows affected (1 min 5.01 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

Data size after index created: 
348160  data/test/big_table.ibd
Query OK, 1718272 rows affected (46.59 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

Example 5.3. Concurrent DML During CREATE INDEX and DROP INDEX

Here are some snippets of code that I ran in separate mysql sessions connected to the same database, to illustrate DML statements (insert, update, or delete) running at the same time as CREATE INDEX and DROP INDEX.

/*
CREATE INDEX statement to run against a table while 
insert/update/delete statements are modifying the
column being indexed.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

use test;
create index i_concurrent on big_table(table_name);
/*
DROP INDEX statement to run against a table while
insert/update/delete statements are modifying the
column being indexed.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

use test;
drop index i_concurrent on big_table;
/*
Some queries and insert/update/delete statements to run against a table
while an index is being created or dropped. Previously, these operations
would have stalled during the index create/drop period and possibly
timed out or deadlocked.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

-- In our test instance, that column has about 1.7M rows, with 136 different values.
-- Sample values: COLUMNS (20480), ENGINES (6144), EVENTS (24576), FILES (38912), TABLES (21504), VIEWS (10240).

set autocommit = 0;
use test;

select distinct character_set_name from big_table where table_name = 'FILES';
delete from big_table where table_name = 'FILES';
select distinct character_set_name from big_table where table_name = 'FILES';

-- I'll issue the final rollback interactively, not via script,
-- the better to control the timing.
-- rollback;

Running this code gives this output, condensed for brevity and with the most important points bolded:

mysql: source concurrent_ddl_create.sql
Database changed
Query OK, 0 rows affected (1 min 25.15 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_ddl_drop.sql
Database changed
Query OK, 0 rows affected (24.98 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_dml.sql
Query OK, 0 rows affected (0.00 sec)

Database changed
+--------------------+
| character_set_name |
+--------------------+
| NULL               |
| utf8               |
+--------------------+
2 rows in set (0.32 sec)

Query OK, 38912 rows affected (1.84 sec)

Empty set (0.01 sec)

mysql: rollback;
Query OK, 0 rows affected (1.05 sec)

Example 5.4. Renaming a Column

Here is a demonstration of using ALTER TABLE to rename a column. We use the new, fast DDL mechanism to change the name, then the old, slow DDL mechanism (with old_alter_table=1) to restore the original column name.

Notes:

  • Because the syntax for renaming a column also involves re-specifying the data type, be very careful to specify exactly the same data type to avoid a costly table rebuild. In this case, we checked the output of show create table table\G and copied any clauses such as CHARACTER SET and NOT NULL from the original column definition.

  • Again, renaming a column for a small table is fast enough that we need to examine the rows affected number to verify that the new DDL mechanism is more efficient than the old one. With a big table, the difference in elapsed time makes the improvement obvious.

/*
Run through a sequence of 'rename column' statements.
Because this operation involves only metadata, not table data,
it is fast for big and small tables, with new or old DDL mechanisms.
*/

\! clear

\! echo "Rename column (fast technique, small table):"
alter table small_table change `IS_NULLABLE` `NULLABLE` varchar(3) character set utf8 not null, algorithm=inplace;
\! echo "Rename back to original name (slow technique):"
alter table small_table change `NULLABLE` `IS_NULLABLE` varchar(3) character set utf8 not null, algorithm=copy;


\! echo "Rename column (fast technique, big table):"
alter table big_table change `IS_NULLABLE` `NULLABLE` varchar(3) character set utf8 not null, algorithm=inplace;
\! echo "Rename back to original name (slow technique):"
alter table big_table change `NULLABLE` `IS_NULLABLE` varchar(3) character set utf8 not null, algorithm=copy;

Running this code gives this output, condensed for brevity and with the most important points bolded:

Rename column (fast technique, small table):
Query OK, 0 rows affected (0.05 sec)

Query OK, 0 rows affected (0.13 sec)
Records: 0  Duplicates: 0  Warnings: 0

Rename back to original name (slow technique):
Query OK, 0 rows affected (0.00 sec)

Query OK, 1678 rows affected (0.35 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Rename column (fast technique, big table):
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.11 sec)
Records: 0  Duplicates: 0  Warnings: 0

Rename back to original name (slow technique):
Query OK, 0 rows affected (0.00 sec)

Query OK, 1718272 rows affected (1 min 0.00 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

Example 5.5. Dropping Foreign Keys

Here is a demonstration of foreign keys, including the improvement to the speed of dropping a foreign key constraint.

/*
Demonstrate aspects of foreign keys that are or aren't affected by the DDL improvements.
- Create a new table with only a few values to serve as the parent table.
- Set up the 'small' and 'big' tables as child tables using a foreign key.
- Verify that the ON DELETE CASCADE clause makes changes ripple from parent to child tables.
- Drop the foreign key constraints, and optionally associated indexes. (This is the operation that is sped up.)
*/

\! clear

-- Make sure foreign keys are being enforced, and allow
-- rollback after doing some DELETEs that affect both
-- parent and child tables.
set foreign_key_checks = 1;
set autocommit = 0;

-- Create a parent table, containing values that we know are already present
-- in the child tables.
drop table if exists schema_names;
create table schema_names (id int unsigned not null primary key auto_increment, schema_name varchar(64) character set utf8 not null, index i_schema (schema_name)) as select distinct table_schema schema_name from small_table;

show create table schema_names\G
show create table small_table\G
show create table big_table\G

-- Creating the foreign key constraint still involves a table rebuild when foreign_key_checks=1,
-- as illustrated by the "rows affected" figure.
alter table small_table add constraint small_fk foreign key i_table_schema (table_schema) references schema_names(schema_name) on delete cascade;
alter table big_table add constraint big_fk foreign key i_table_schema (table_schema) references schema_names(schema_name) on delete cascade;

show create table small_table\G
show create table big_table\G

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- schema_names is the parent table.
-- big_table is the child table.
-- (One row in the parent table can have many "children" in the child table.)
-- Changes to the parent table can ripple through to the child table.
-- For example, removing the value 'test' from schema_names.schema_name will
-- result in the removal of 20K or so rows from big_table.

delete from schema_names where schema_name = 'test';

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- Because we've turned off autocommit, we can still get back those deleted rows
-- if the DELETE was issued by mistake.
rollback;

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- All of the cross-checking between parent and child tables would be
-- deadly slow if there wasn't the requirement for the corresponding
-- columns to be indexed!

-- But we can get rid of the foreign key using a fast operation
-- that doesn't rebuild the table.
-- If we didn't specify a constraint name when setting up the foreign key, we would
-- have to find the auto-generated name such as 'big_table_ibfk_1' in the
-- output from 'show create table'.

-- For the small table, we'll drop the foreign key and the associated index.
-- Having an index on a small table is less critical.

\! echo "DROP FOREIGN KEY and INDEX from small_table:"
alter table small_table drop foreign key small_fk, drop index small_fk;

-- For the big table, we'll drop the foreign key and leave the associated index.
-- If we are still doing queries that reference the indexed column, the index is
-- very important to avoid a full table scan of the big table.
\! echo "DROP FOREIGN KEY from big_table:"
alter table big_table drop foreign key big_fk;


show create table small_table\G
show create table big_table\G

Running this code gives this output, condensed for brevity and with the most important points bolded:

Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.01 sec)

Query OK, 4 rows affected (0.03 sec)
Records: 4  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`) 
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.10 sec)
Records: 1678  Duplicates: 0  Warnings: 0

Query OK, 1718272 rows affected (1 min 14.54 sec)
Records: 1718272  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `small_fk` (`TABLE_SCHEMA`), 
  CONSTRAINT `small_fk` FOREIGN KEY (`TABLE_SCHEMA`) REFERENCES `schema_names` (`schema_name`) ON DELETE CASCADE 
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.12 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`), 
  CONSTRAINT `big_fk` FOREIGN KEY (`TABLE_SCHEMA`) REFERENCES `schema_names` (`schema_name`) ON DELETE CASCADE 
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.01 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (2.10 sec)

Query OK, 1 row affected (1.52 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
+---------+--------------------+
3 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
+---------+--------------------+
3 rows in set (1.74 sec)

Query OK, 0 rows affected (0.60 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (1.59 sec)

DROP FOREIGN KEY and INDEX from small_table:
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

DROP FOREIGN KEY from big_table:
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`)
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Example 5.6. Changing Auto-Increment Value

Here is an illustration of increasing the auto-increment lower limit for a table column, demonstrating how this operation now avoids a table rebuild, plus some other fun facts about InnoDB auto-increment columns.

/*
If this script is run after foreign_key.sql, the schema_names table is
already set up. But to allow this script to run multiple times without
running into duplicate ID errors, we set up the schema_names table
all over again.
*/

\! clear

\! echo "=== Adjusting the Auto-Increment Limit for a Table ==="
\! echo

drop table if exists schema_names;
create table schema_names (id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null, index i_schema (schema_name))
  as select distinct table_schema schema_name from small_table;

\! echo "Initial state of schema_names table. AUTO_INCREMENT is included in SHOW CREATE TABLE output."
\! echo "Note how MySQL reserved a block of IDs, but only needed 4 of them in this transaction, so the next inserted values would get IDs 8 and 9."
show create table schema_names\G
select * from schema_names order by id;

\! echo "Inserting even a tiny amount of data can produce gaps in the ID sequence."
insert into schema_names (schema_name) values ('eight'), ('nine');

\! echo "Bumping auto-increment lower limit to 20 (fast mechanism):"
alter table schema_names auto_increment=20, algorithm=inplace;

\! echo "Inserting 2 rows that should get IDs 20 and 21:"
insert into schema_names (schema_name) values ('foo'), ('bar');
commit;

\! echo "Bumping auto-increment lower limit to 30 (slow mechanism):"
alter table schema_names auto_increment=30, algorithm=copy;

\! echo "Inserting 2 rows that should get IDs 30 and 31:"
insert into schema_names (schema_name) values ('bletch'),('baz');
commit;

select * from schema_names order by id;

\! echo "Final state of schema_names table. AUTO_INCREMENT value shows the next inserted row would get ID=32."
show create table schema_names\G

Running this code gives this output, condensed for brevity and with the most important points bolded:

=== Adjusting the Auto-Increment Limit for a Table ===

Query OK, 0 rows affected (0.01 sec)

Query OK, 4 rows affected (0.02 sec)
Records: 4  Duplicates: 0  Warnings: 0

Initial state of schema_names table. AUTO_INCREMENT is included in SHOW CREATE TABLE output.
Note how MySQL reserved a block of IDs, but only needed 4 of them in this transaction, so the next inserted values would get IDs 8 and 9.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
+----+--------------------+
4 rows in set (0.00 sec)

Inserting even a tiny amount of data can produce gaps in the ID sequence.
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

Bumping auto-increment lower limit to 20 (fast mechanism):
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0

Inserting 2 rows that should get IDs 20 and 21:
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

Bumping auto-increment lower limit to 30 (slow mechanism):
Query OK, 8 rows affected (0.02 sec)
Records: 8  Duplicates: 0  Warnings: 0

Inserting 2 rows that should get IDs 30 and 31:
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Query OK, 0 rows affected (0.01 sec)

+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
|  8 | eight              |
|  9 | nine               |
| 20 | foo                |
| 21 | bar                |
| 30 | bletch             |
| 31 | baz                |
+----+--------------------+
10 rows in set (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

Final state of schema_names table. AUTO_INCREMENT value shows the next inserted row would get ID=32.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Example 5.7. Controlling Concurrency with the LOCK Clause

This example shows how to use the LOCK clause of the ALTER TABLE statement to allow or deny concurrent access to the table while an online DDL operation is in progress. The clause has settings that allow queries and DML statements (LOCK=NONE), just queries (LOCK=SHARED), or no concurrent access at all (LOCK=EXCLUSIVE).

In one session, we run a succession of ALTER TABLE statements to create and drop an index, using different values for the LOCK clause to see what happens with waiting or deadlocking in either session. We are using the same BIG_TABLE table as in previous examples, starting with approximately 1.7 million rows. For illustration purposes, we will index and query the IS_NULLABLE column. (Although in real life it would be silly to make an index for a tiny column with only 2 distinct values.)

mysql: desc big_table;
+--------------------------+---------------------+------+-----+---------+----------------+
| Field                    | Type                | Null | Key | Default | Extra          |
+--------------------------+---------------------+------+-----+---------+----------------+
| TABLE_CATALOG            | varchar(512)        | NO   |     |         |                |
| TABLE_SCHEMA             | varchar(64)         | NO   |     |         |                |
| TABLE_NAME               | varchar(64)         | NO   |     |         |                |
| COLUMN_NAME              | varchar(64)         | NO   |     |         |                |
| ORDINAL_POSITION         | bigint(21) unsigned | NO   |     | 0       |                |
| COLUMN_DEFAULT           | longtext            | YES  |     | NULL    |                |
| IS_NULLABLE              | varchar(3)          | NO   |     |         |                |
...
+--------------------------+---------------------+------+-----+---------+----------------+
21 rows in set (0.14 sec)

mysql: alter table big_table add index i1(is_nullable);
Query OK, 0 rows affected (20.71 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.02 sec)

mysql: alter table big_table add index i1(is_nullable), lock=exclusive;
Query OK, 0 rows affected (19.44 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.03 sec)

mysql: alter table big_table add index i1(is_nullable), lock=shared;
Query OK, 0 rows affected (16.71 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.05 sec)

mysql: alter table big_table add index i1(is_nullable), lock=none;
Query OK, 0 rows affected (12.26 sec)

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (0.01 sec)

... repeat statements like the above while running queries ...
... and DML statements at the same time in another session ...

Nothing dramatic happens in the session running the DDL statements. Sometimes, an ALTER TABLE takes unusually long because it is waiting for another transaction to finish, when that transaction modified the table during the DDL or queried the table before the DDL:

mysql: alter table big_table add index i1(is_nullable), lock=none;
Query OK, 0 rows affected (59.27 sec)

mysql: -- The previous ALTER took so long because it was waiting for all the concurrent
mysql: -- transactions to commit or roll back.

mysql: alter table big_table drop index i1;
Query OK, 0 rows affected (41.05 sec)

mysql: -- Even doing a SELECT on the table in the other session first causes
mysql: -- the ALTER TABLE above to stall until the transaction
mysql: -- surrounding the SELECT is committed or rolled back.

Here is the log from another session running concurrently, where we issue queries and DML statements against the table before, during, and after the DDL operations shown in the previous listings. This first listing shows queries only. We expect the queries to be allowed during DDL operations using LOCK=NONE or LOCK=SHARED, and for the query to wait until the DDL is finished if the ALTER TABLE statement includes LOCK=EXCLUSIVE.

mysql: show variables like 'autocommit';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+
1 row in set (0.01 sec)

mysql: -- A trial query before any ADD INDEX in the other session:
mysql: -- Note: because autocommit is enabled, each
mysql: -- transaction finishes immediately after the query.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (4.49 sec)

mysql: -- Index is being created with LOCK=EXCLUSIVE on the ALTER statement.
mysql: -- The query waits until the DDL is finished before proceeding.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (17.26 sec)

mysql: -- Index is being created with LOCK=SHARED on the ALTER statement.
mysql: -- The query returns its results while the DDL is in progress.
mysql: -- The same thing happens with LOCK=NONE on the ALTER statement.
mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (3.11 sec)

mysql: -- Once the index is created, and with no DDL in progress,
mysql: -- queries referencing the indexed column are very fast:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.20 sec)

mysql: select distinct is_nullable from big_table;
+-------------+
| is_nullable |
+-------------+
| NO          |
| YES         |
+-------------+
2 rows in set (0.00 sec)

Now in this concurrent session, we run some transactions including DML statements, or a combination of DML statements and queries. We use DELETE statements to illustrate predictable, verifiable changes to the table. Because the transactions in this part can span multiple statements, we run these tests with autocommit turned off.

mysql: set autocommit = off;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Count the rows that will be involved in our DELETE statements:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.95 sec)

mysql: -- After this point, any DDL statements back in the other session 
mysql: -- stall until we commit or roll back.

mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.14 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (1.04 sec)

mysql: rollback;
Query OK, 0 rows affected (0.09 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.93 sec)

mysql: -- OK, now we're going to try that during index creation with LOCK=NONE.
mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.21 sec)

mysql: -- We expect that now there will be 400000 'YES' rows left:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (1.25 sec)

mysql: -- In the other session, the ALTER TABLE is waiting before finishing,
mysql: -- because _this_ transaction hasn't committed or rolled back yet.
mysql: rollback;
Query OK, 0 rows affected (0.11 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   411648 |
+----------+
1 row in set (0.19 sec)

mysql: -- The ROLLBACK left the table in the same state we originally found it.
mysql: -- Now let's make a permanent change while the index is being created,
mysql: -- again with ALTER TABLE ... , LOCK=NONE.
mysql: -- First, commit so the DROP INDEX in the other shell can finish;
mysql: -- the previous SELECT started a transaction that accessed the table.
mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Now we add the index back in the other shell, then issue DML in this one
mysql: -- while the DDL is running.
mysql: delete from big_table where is_nullable = 'YES' limit 11648;
Query OK, 11648 rows affected (0.23 sec)

mysql: commit;
Query OK, 0 rows affected (0.01 sec)

mysql: -- In the other shell, the ADD INDEX has finished.
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (0.19 sec)

mysql: -- At the point the new index is finished being created, it contains entries
mysql: -- only for the 400000 'YES' rows left when all concurrent transactions are finished.
mysql: 
mysql: -- Now we will run a similar test, while ALTER TABLE ... , LOCK=SHARED is running.
mysql: -- We expect a query to complete during the ALTER TABLE, but for the DELETE
mysql: -- to run into some kind of issue.
mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- As expected, the query returns results while the LOCK=SHARED DDL is running:
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (2.07 sec)

mysql: -- The DDL in the other session is not going to finish until this transaction
mysql: -- is committed or rolled back. If we tried a DELETE now and it waited because
mysql: -- of LOCK=SHARED on the DDL, both transactions would wait forever (deadlock).
mysql: -- MySQL detects this condition and cancels the attempted DML statement.
mysql: delete from big_table where is_nullable = 'YES' limit 100000;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
mysql: -- The transaction here is still going, so in the other shell, the ADD INDEX operation
mysql: -- is waiting for this transaction to commit or roll back.
mysql: rollback;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Now let's try issuing a query and some DML, on one line, while running
mysql: -- ALTER TABLE ... , LOCK=EXCLUSIVE in the other shell.
mysql: -- Notice how even the query is held up until the DDL is finished.
mysql: -- By the time the DELETE is issued, there is no conflicting access
mysql: -- to the table and we avoid the deadlock error.
mysql: select count(*) from big_table where is_nullable = 'YES'; delete from big_table where is_nullable = 'YES' limit 100000;
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (15.98 sec)

Query OK, 100000 rows affected (2.81 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   300000 |
+----------+
1 row in set (0.17 sec)

mysql: rollback;
Query OK, 0 rows affected (1.36 sec)

mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   400000 |
+----------+
1 row in set (0.19 sec)

mysql: commit;
Query OK, 0 rows affected (0.00 sec)

mysql: -- Next, we try ALTER TABLE ... , LOCK=EXCLUSIVE in the other session
mysql: -- and only issue DML, not any query, in the concurrent transaction here.
mysql: delete from big_table where is_nullable = 'YES' limit 100000;
Query OK, 100000 rows affected (16.37 sec)

mysql: -- That was OK because the ALTER TABLE did not have to wait for the transaction
mysql: -- here to complete. The DELETE in this session waited until the index was ready.
mysql: select count(*) from big_table where is_nullable = 'YES';
+----------+
| count(*) |
+----------+
|   300000 |
+----------+
1 row in set (0.16 sec)

mysql: commit;
Query OK, 0 rows affected (0.00 sec)

In the preceding example listings, we learned that:

  • The LOCK clause for ALTER TABLE is set off from the rest of the statement by a comma.

  • Online DDL operations might wait before starting, until any prior transactions that access the table are committed or rolled back.

  • Online DDL operations might wait before completing, until any concurrent transactions that access the table are committed or rolled back.

  • While an online DDL operation is running, concurrent queries are relatively straightforward, as long as the ALTER TABLE statement uses LOCK=NONE or LOCK=SHARED.

  • Pay attention to whether autocommit is turned on or off. If it is turned off, be careful to end transactions in other sessions (even just queries) before performing DDL operations on the table.

  • With LOCK=SHARED, concurrent transactions that mix queries and DML could encounter deadlock errors and have to be restarted after the DDL is finished.

  • With LOCK=NONE, concurrent transactions can freely mix queries and DML. The DDL operation waits until the concurrent transactions are committed or rolled back.

  • With LOCK=EXCLUSIVE, concurrent transactions can freely mix queries and DML, but those transactions wait until the DDL operation is finished before they can access the table.


Example 5.8. Creating and Dropping Multiple Indexes in a Single Statement

You can create multiple indexes on a table with one ALTER TABLE statement. This is relatively efficient, because the clustered index of the table needs to be scanned only once (although the data is sorted separately for each new index). For example:

CREATE TABLE T1(A INT PRIMARY KEY, B INT, C CHAR(1)) ENGINE=InnoDB;
INSERT INTO T1 VALUES (1,2,'a'), (2,3,'b'), (3,2,'c'), (4,3,'d'), (5,2,'e');
COMMIT;
ALTER TABLE T1 ADD INDEX (B), ADD UNIQUE INDEX (C);

The above statements create table T1 with the primary key on column A, insert several rows, then build two new indexes on columns B and C. If there were many rows inserted into T1 before the ALTER TABLE statement, this approach is much more efficient than creating all the secondary indexes before loading the data.

Because dropping InnoDB secondary indexes also does not require any copying of table data, it is equally efficient to drop multiple indexes with a single ALTER TABLE statement or multiple DROP INDEX statements:

ALTER TABLE T1 DROP INDEX B, DROP INDEX C;

or:

DROP INDEX B ON T1;
DROP INDEX C ON T1;

Example 5.9. Creating and Dropping the Primary Key

Restructuring the clustered index for an InnoDB table always requires copying the table data. Thus, it is best to define the primary key when you create a table, rather than issuing ALTER TABLE ... ADD PRIMARY KEY later, to avoid rebuilding the table.

Defining a PRIMARY KEY later causes the data to be copied, as in the following example:

CREATE TABLE T2 (A INT, B INT);
INSERT INTO T2 VALUES (NULL, 1);
ALTER TABLE T2 ADD PRIMARY KEY (B);

When you create a UNIQUE or PRIMARY KEY index, MySQL must do some extra work. For UNIQUE indexes, MySQL checks that the table contains no duplicate values for the key. For a PRIMARY KEY index, MySQL also checks that none of the PRIMARY KEY columns contains a NULL.

When you add a primary key using the ALGORITHM=COPY clause, MySQL actually converts NULL values in the associated columns to default values: 0 for numbers, the empty string for character-based columns and BLOBs, and '0000-00-00 00:00:00' for dates. This is a non-standard behavior that Oracle recommends you not rely on. Adding a primary key using ALGORITHM=INPLACE is only allowed when the SQL_MODE setting includes the strict_trans_tables or strict_all_tables flags. When the SQL_MODE setting is strict, ADD PRIMARY KEY ... , ALGORITHM=INPLACE is allowed, but the statement can still fail if the requested primary key columns contain NULL values. The ALGORITHM=INPLACE behavior is more standard-compliant.

The following example shows the different possibilities for the ADD PRIMARY KEY clause. With the ALGORITHM=COPY clause, the operation succeeds despite the presence of NULL values in the primary key columns; the data is silently changed, which could cause problems. With the ALGORITHM=INPLACE clause, the operation could fail for different reasons, because this setting considers data integrity a high priority: the statement gives an error if the SQL_MODE setting is not strict enough, or if the primary key columns contain any NULL values. Once we address both of those requirements, the ALTER TABLE operation succeeds.

CREATE TABLE add_pk_via_copy (c1 INT, c2 VARCHAR(10), c3 DATETIME);
INSERT INTO add_pk_via_copy VALUES (1,'a','...'),(NULL,NULL,NULL);
ALTER TABLE add_pk_via_copy ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=COPY;
SELECT * FROM add_pk_via_copy;

CREATE TABLE add_pk_via_inplace (c1 INT, c2 VARCHAR(10), c3 DATETIME);
INSERT INTO add_pk_via_inplace VALUES (1,'a','...'),(NULL,NULL,NULL);
SET sql_mode = '';
ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
SET sql_mode = 'strict_trans_tables';
ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
DELETE FROM add_pk_via_inplace WHERE c1 IS NULL OR c2 IS NULL OR c3 IS NULL;
ALTER TABLE add_pk_via_inplace ADD PRIMARY KEY (c1,c2,c3), ALGORITHM=INPLACE;
SELECT * FROM add_pk_via_inplace;

If you create a table without a primary key, InnoDB chooses one for you, which can be the first UNIQUE key defined on NOT NULL columns, or a system-generated key. To avoid any uncertainty and the potential space requirement for an extra hidden column, specify the PRIMARY KEY clause as part of the CREATE TABLE statement.
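
For example, here is a minimal sketch (the table names with_pk and without_pk are invented for this illustration) contrasting the two approaches:

-- Explicit primary key: the clustered index is organized around `id` from the start,
-- so no later ALTER TABLE ... ADD PRIMARY KEY (and table rebuild) is needed.
CREATE TABLE with_pk (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(100)) ENGINE=InnoDB;

-- No primary key and no UNIQUE index on NOT NULL columns: InnoDB falls back to a
-- hidden, system-generated clustered key, which takes extra space in each row.
CREATE TABLE without_pk (payload VARCHAR(100)) ENGINE=InnoDB;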


5.5.6. Implementation Details of Online DDL

Each ALTER TABLE operation for an InnoDB table is governed by several aspects:

  • Whether there is any change to the physical representation of the table, or whether it is purely a change to metadata that can be done without touching the table itself.

  • Whether the volume of data in the table stays the same, increases, or decreases.

  • Whether a change in table data involves the clustered index, secondary indexes, or both.

  • Whether there are any foreign key relationships between the table being altered and some other table. The mechanics differ depending on whether the foreign_key_checks configuration option is enabled or disabled.

  • Whether the table is partitioned. Partitioning clauses of ALTER TABLE are turned into low-level operations involving one or more tables, and those operations follow the regular rules for online DDL.

  • Whether the table data must be copied, whether the table can be reorganized in-place, or a combination of both.

  • Whether the table contains any auto-increment columns.

  • What degree of locking is required, either by the nature of the underlying database operations, or a LOCK clause that you specify in the ALTER TABLE statement.

This section explains how these factors affect the different kinds of ALTER TABLE operations on InnoDB tables.

Error Conditions for Online DDL

Here are the primary reasons why an online DDL operation could fail:

  • If a LOCK clause specifies a low degree of locking (SHARED or NONE) that is not compatible with the particular type of DDL operation.

  • If a timeout occurs while waiting to get an exclusive lock on the table, which is needed briefly during the initial and final phases of the DDL operation.

  • If the tmpdir file system runs out of disk space, while MySQL writes temporary sort files on disk during index creation.

  • If the ALTER TABLE takes so long, and concurrent DML modifies the table so much, that the size of the temporary online log exceeds the value of the innodb_online_alter_log_max_size configuration option. This condition causes a DB_ONLINE_LOG_TOO_BIG error.

  • If concurrent DML makes changes to the table that are allowed with the original table definition, but not with the new one. The operation only fails at the very end, when MySQL tries to apply all the changes from concurrent DML statements. For example, you might insert duplicate values into a column while a unique index is being created, or you might insert NULL values into a column while creating a primary key index on that column. The changes made by the concurrent DML take precedence, and the ALTER TABLE operation is effectively rolled back.
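
For example, the last failure mode above can be sketched like this (the table name t, column c1, and the value 42 are hypothetical; the interleaving of the two sessions is indicated in the comments):

-- Session 1: build a unique index online; concurrent DML is permitted.
ALTER TABLE t ADD UNIQUE INDEX u1 (c1), LOCK=NONE;

-- Session 2, while the ALTER TABLE is still running: insert a value that duplicates
-- an existing c1 value. The INSERT succeeds, because the unique constraint does not
-- exist yet.
INSERT INTO t (c1) VALUES (42);
COMMIT;

-- Session 1: when the log of concurrent changes is applied at the end of the DDL
-- operation, the duplicate is detected, the ALTER TABLE fails, and the partially
-- built index is discarded.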

Although the configuration option innodb_file_per_table has a dramatic effect on the representation for an InnoDB table, all online DDL operations work equally well whether that option is enabled or disabled, and whether the table is physically located in its own .ibd file or inside the system tablespace.

Primary Key and Secondary Key Indexes

InnoDB has two types of indexes: the clustered index representing all the data in the table, and optional secondary indexes to speed up queries. Since the clustered index contains the data values in its B-tree nodes, adding or dropping a clustered index does involve copying the data, and creating a new copy of the table. A secondary index, however, contains only the index key and the value of the primary key. This type of index can be created or dropped without copying the data in the clustered index. Because each secondary index contains copies of the primary key values (used to access the clustered index when needed), when you change the definition of the primary key, all secondary indexes are recreated as well.

Dropping a secondary index is simple. Only the internal InnoDB system tables and the MySQL data dictionary tables are updated to reflect the fact that the index no longer exists. InnoDB returns the storage used for the index to the tablespace that contained it, so that new indexes or additional table rows can use the space.

To add a secondary index to an existing table, InnoDB scans the table, and sorts the rows using memory buffers and temporary files in order by the values of the secondary index key columns. The B-tree is then built in key-value order, which is more efficient than inserting rows into an index in random order. Because the B-tree nodes are split when they fill, building the index in this way results in a higher fill-factor for the index, making it more efficient for subsequent access.

Historically, the MySQL server and InnoDB have each kept their own metadata about table and index structures. The MySQL server stores this information in .frm files that are not protected by a transactional mechanism, while InnoDB has its own data dictionary as part of the system tablespace. If a DDL operation was interrupted by a crash or other unexpected event partway through, the metadata could be left inconsistent between these two locations, causing problems such as startup errors or inability to access the table that was being altered. Now that InnoDB is the default storage engine, addressing such issues is a high priority. These enhancements to DDL operations reduce the window of opportunity for such issues to occur.

5.5.7. How Crash Recovery Works with Online DDL

Although no data is lost if the server crashes while an ALTER TABLE statement is executing, the crash recovery process is different for clustered indexes and secondary indexes.

If the server crashes while creating an InnoDB secondary index, upon recovery, MySQL drops any partially created indexes. You must re-run the ALTER TABLE or CREATE INDEX statement.

When a crash occurs during the creation of an InnoDB clustered index, recovery is more complicated, because the data in the table must be copied to an entirely new clustered index. Remember that all InnoDB tables are stored as clustered indexes. In the following discussion, we use the terms table and clustered index interchangeably.

MySQL creates the new clustered index by copying the existing data from the original InnoDB table to a temporary table that has the desired index structure. Once the data is completely copied to this temporary table, the original table is renamed with a different temporary table name. The temporary table comprising the new clustered index is renamed with the name of the original table, and the original table is dropped from the database.

If a system crash occurs while creating a new clustered index, no data is lost, but you must complete the recovery process using the temporary tables that exist during the process. Since it is rare to re-create a clustered index or re-define primary keys on large tables, or to encounter a system crash during this operation, this manual does not provide information on recovering from this scenario.

5.5.8. Online DDL for Partitioned InnoDB Tables

With the exception of ALTER TABLE partitioning clauses, online DDL operations for partitioned InnoDB tables follow the same rules that apply to regular InnoDB tables. Online DDL rules are outlined in Table 5.8, “Summary of Online Status for DDL Operations”.

ALTER TABLE partitioning clauses do not go through the same internal online DDL API as regular non-partitioned InnoDB tables, and are only allowed in conjunction with ALGORITHM=DEFAULT and LOCK=DEFAULT.

If you use an ALTER TABLE partitioning clause in an ALTER TABLE statement, the partitioned table will be re-partitioned using the COPY algorithm. In other words, a new partitioned table is created with the new partitioning scheme. The newly created table will include any changes applied by the ALTER TABLE statement, and the table data will be copied into the new table structure.

If you do not change the table's partitioning using ALTER TABLE partitioning clauses or perform any other partition management in your ALTER TABLE statement, ALTER TABLE will use the INPLACE algorithm on each table partition. Be aware, however, that when INPLACE ALTER TABLE operations are performed on each partition, there will be increased demand on system resources due to operations being performed on multiple partitions.

Even though partitioning clauses of the ALTER TABLE statement do not go through the same internal online DDL API as regular non-partitioned InnoDB tables, MySQL still attempts to minimize data copying and locking where possible:

  • ADD PARTITION and DROP PARTITION for tables partitioned by RANGE or LIST do not copy any existing data.

  • TRUNCATE PARTITION does not copy any existing data, for all types of partitioned tables.

  • Concurrent queries are allowed during ADD PARTITION and COALESCE PARTITION for tables partitioned by HASH or KEY. MySQL copies the data while holding a shared lock.

  • For REORGANIZE PARTITION, REBUILD PARTITION, or ADD PARTITION or COALESCE PARTITION for a table partitioned by LINEAR HASH or LIST, concurrent queries are allowed. Data from the affected partitions is copied while holding a shared lock.
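
For example, here is a small sketch (table, column, and partition names are invented) showing a partitioning clause that copies no existing data, followed by a non-partitioning clause that is applied in-place to each partition:

CREATE TABLE sales (id INT NOT NULL, sale_date DATE NOT NULL) ENGINE=InnoDB
  PARTITION BY RANGE (YEAR(sale_date)) (
    PARTITION p2012 VALUES LESS THAN (2013),
    PARTITION p2013 VALUES LESS THAN (2014));

-- ADD PARTITION on a RANGE-partitioned table: no existing data is copied.
ALTER TABLE sales ADD PARTITION (PARTITION p2014 VALUES LESS THAN (2015));

-- Non-partitioning clause: the INPLACE algorithm is used on each partition,
-- so expect higher resource usage on tables with many partitions.
ALTER TABLE sales ADD INDEX i_sale_date (sale_date);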

Note

Full-text search (FTS) and foreign keys are not supported by InnoDB partitioned tables. For more information, see Section 12.9.5, “Full-Text Restrictions” and Section 17.6.2, “Partitioning Limitations Relating to Storage Engines”.

5.5.9. Limitations of Online DDL

Take the following limitations into account when running online DDL operations:

  • During an online DDL operation that copies the table, files are written to the temporary directory ($TMPDIR on Unix, %TEMP% on Windows, or the directory specified by the --tmpdir configuration variable). Each temporary file is large enough to hold one column in the new table or index, and each one is removed as soon as it is merged into the final table or index.

  • An ALTER TABLE statement that contains DROP INDEX and ADD INDEX clauses that both name the same index uses a table copy, not Fast Index Creation.

  • The table is copied, rather than using Fast Index Creation, when you create an index on a TEMPORARY TABLE. This has been reported as MySQL Bug #39833.

  • InnoDB handles error cases when users attempt to drop indexes needed for foreign keys. See Section 14.2.5.9, “Better Error Handling when Dropping Indexes” for details.

  • The ALTER TABLE clause LOCK=NONE is not allowed if there are ON...CASCADE or ON...SET NULL constraints on the table.

  • During each online DDL ALTER TABLE statement, regardless of the LOCK clause, there are brief periods at the beginning and end requiring an exclusive lock on the table (the same kind of lock specified by the LOCK=EXCLUSIVE clause). Thus, an online DDL operation might wait before starting if there is a long-running transaction performing inserts, updates, deletes, or SELECT ... FOR UPDATE on that table; and an online DDL operation might wait before finishing if a similar long-running transaction was started while the ALTER TABLE was in progress.

  • When running an online ALTER TABLE operation, the thread that runs the ALTER TABLE operation will apply an online log of DML operations that were run concurrently on the same table from other connection threads. When the DML operations are applied, it is possible to encounter a duplicate key entry error (ERROR 1062 (23000): Duplicate entry), even if the duplicate entry is only temporary and would be reverted by a later entry in the online log. This is similar to the idea of a foreign key constraint check in InnoDB in which constraints must hold during a transaction.

  • OPTIMIZE TABLE for an InnoDB table is mapped to an ALTER TABLE operation that rebuilds the table, updates index statistics, and frees unused space in the clustered index. This operation does not use fast index creation. Secondary indexes are not created as efficiently because keys are inserted in the order they appeared in the primary key.
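
For example, using the big_table table from the earlier examples, the rebuild performed by OPTIMIZE TABLE can also be requested explicitly with a null ALTER TABLE (a rough equivalence, shown here only as a sketch):

OPTIMIZE TABLE big_table;
-- For an InnoDB table, the statement above is carried out as a table rebuild plus
-- an analyze step, roughly the same work as:
ALTER TABLE big_table ENGINE=InnoDB;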

5.6. Running Multiple MySQL Instances on One Machine

In some cases, you might want to run multiple instances of MySQL on a single machine. You might want to test a new MySQL release while leaving an existing production setup undisturbed. Or you might want to give different users access to different mysqld servers that they manage themselves. (For example, you might be an Internet Service Provider that wants to provide independent MySQL installations for different customers.)

It is possible to use a different MySQL server binary per instance, or use the same binary for multiple instances, or any combination of the two approaches. For example, you might run a server from MySQL 5.6 and one from MySQL 5.7, to see how different versions handle a given workload. Or you might run multiple instances of the current production version, each managing a different set of databases.

Whether or not you use distinct server binaries, each instance that you run must be configured with unique values for several operating parameters. This eliminates the potential for conflict between instances. Parameters can be set on the command line, in option files, or by setting environment variables. See Section 4.2.3, “Specifying Program Options”. To see the values used by a given instance, connect to it and execute a SHOW VARIABLES statement.
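
For example, assuming an instance that listens on TCP port 3307 (the port number and credentials here are placeholders), you could check a few of its settings like this:

shell> mysql --host=127.0.0.1 --port=3307 --user=root -p \
         -e "SHOW VARIABLES LIKE 'datadir'; SHOW VARIABLES LIKE 'socket'"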

The primary resource managed by a MySQL instance is the data directory. Each instance should use a different data directory, the location of which is specified using the --datadir=path option. For methods of configuring each instance with its own data directory, and warnings about the dangers of failing to do so, see Section 5.6.1, “Setting Up Multiple Data Directories”.

In addition to using different data directories, several other options must have different values for each server instance:

  • --port=port_num

    --port controls the port number for TCP/IP connections. Alternatively, if the host has multiple network addresses, you can use --bind-address to cause each server to listen to a different address.

  • --socket=path

    --socket controls the Unix socket file path on Unix or the named pipe name on Windows. On Windows, it is necessary to specify distinct pipe names only for those servers configured to permit named-pipe connections.

  • --shared-memory-base-name=name

    This option is used only on Windows. It designates the shared-memory name used by a Windows server to permit clients to connect using shared memory. It is necessary to specify distinct shared-memory names only for those servers configured to permit shared-memory connections.

  • --pid-file=file_name

    This option indicates the path name of the file in which the server writes its process ID.

If you use the following log file options, their values must differ for each server:

  • --general_log_file=file_name

  • --log-bin[=base_name]

  • --slow_query_log_file=file_name

  • --log-error[=file_name]

For further discussion of log file options, see Section 5.2, “MySQL Server Logs”.

To achieve better performance, you can specify the following option differently for each server, to spread the load between several physical disks:

  • --tmpdir=path

Having different temporary directories also makes it easier to determine which MySQL server created any given temporary file.

If you have multiple MySQL installations in different locations, you can specify the base directory for each installation with the --basedir=path option. This causes each instance to automatically use a different data directory, log files, and PID file because the default for each of those parameters is relative to the base directory. In that case, the only other options you need to specify are the --socket and --port options. Suppose that you install different versions of MySQL using tar file binary distributions. These install in different locations, so you can start the server for each installation using the command bin/mysqld_safe under its corresponding base directory. mysqld_safe determines the proper --basedir option to pass to mysqld, and you need specify only the --socket and --port options to mysqld_safe.

As discussed in the following sections, it is possible to start additional servers by specifying appropriate command options or by setting environment variables. However, if you need to run multiple servers on a more permanent basis, it is more convenient to use option files to specify for each server those option values that must be unique to it. The --defaults-file option is useful for this purpose.
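
For example (all paths, port numbers, and socket names below are placeholders), you might keep one option file per instance and point mysqld_safe at it. Note that --defaults-file, if used, must be the first option on the command line:

# /usr/local/mysql/etc/instance1.cnf
[mysqld]
datadir  = /usr/local/mysql/data1
port     = 3307
socket   = /tmp/mysql1.sock
pid-file = /usr/local/mysql/data1/mysqld1.pid

shell> mysqld_safe --defaults-file=/usr/local/mysql/etc/instance1.cnf &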

5.6.1. Setting Up Multiple Data Directories

Each MySQL instance on a machine should have its own data directory. The location is specified using the --datadir=path option.

There are different methods of setting up a data directory for a new instance:

  • Create a new data directory.

  • Copy an existing data directory.

The following discussion provides more detail about each method.

Warning

Normally, you should never have two servers that update data in the same databases. This may lead to unpleasant surprises if your operating system does not support fault-free system locking. If (despite this warning) you run multiple servers using the same data directory and they have logging enabled, you must use the appropriate options to specify log file names that are unique to each server. Otherwise, the servers try to log to the same files.

Even when the preceding precautions are observed, this kind of setup works only with MyISAM and MERGE tables, and not with any of the other storage engines. Also, this warning against sharing a data directory among servers always applies in an NFS environment. Permitting multiple MySQL servers to access a common data directory over NFS is a very bad idea. The primary problem is that NFS is the speed bottleneck. It is not meant for such use. Another risk with NFS is that you must devise a way to ensure that two or more servers do not interfere with each other. Usually NFS file locking is handled by the lockd daemon, but at the moment there is no platform that performs locking 100% reliably in every situation.

Create a New Data Directory

With this method, the data directory will be in the same state as when you first install MySQL. It will have the default set of MySQL accounts and no user data.

On Unix, initialize the data directory by running mysql_install_db. See Section 2.10.1, “Unix Postinstallation Procedures”.
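
For example (the paths are placeholders), from the base directory of a binary installation:

shell> scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data2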

On Windows, the data directory is included in the MySQL distribution:

  • MySQL Zip archive distributions for Windows contain an unmodified data directory. You can unpack such a distribution into a temporary location, then copy its data directory to the location where you are setting up the new instance.

  • Windows MSI package installers create and set up the data directory that the installed server will use, but also create a pristine template data directory named data under the installation directory. After an installation has been performed using an MSI package, the template data directory can be copied to set up additional MySQL instances.

Copy an Existing Data Directory

With this method, any MySQL accounts or user data present in the data directory are carried over to the new data directory.

  1. Stop the existing MySQL instance using the data directory. This must be a clean shutdown so that the instance flushes any pending changes to disk.

  2. Copy the data directory to the location where the new data directory should be.

  3. Copy the my.cnf or my.ini option file used by the existing instance. This serves as a basis for the new instance.

  4. Modify the new option file so that any pathnames referring to the original data directory refer to the new data directory. Also, modify any other options that must be unique per instance, such as the TCP/IP port number and the log files. For a list of parameters that must be unique per instance, see Section 5.6, “Running Multiple MySQL Instances on One Machine”.

  5. Start the new instance, telling it to use the new option file.
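
A rough Unix sketch of these steps, with placeholder paths and socket name, might look like this:

shell> mysqladmin --user=root -p --socket=/tmp/mysql.sock shutdown    # 1. clean shutdown
shell> cp -Rp /usr/local/mysql/data /usr/local/mysql/data2            # 2. copy the data directory
shell> cp /etc/my.cnf /etc/my2.cnf                                    # 3. copy the option file
shell> vi /etc/my2.cnf     # 4. point datadir, socket, port, pid-file, and log options at new values
shell> mysqld_safe --defaults-file=/etc/my2.cnf &                     # 5. start the new instance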

5.6.2. Running Multiple MySQL Instances on Windows

You can run multiple servers on Windows by starting them manually from the command line, each with appropriate operating parameters, or by installing several servers as Windows services and running them that way. General instructions for running MySQL from the command line or as a service are given in Section 2.3, “Installing MySQL on Microsoft Windows”. The following sections describe how to start each server with different values for those options that must be unique per server, such as the data directory. These options are listed in Section 5.6, “Running Multiple MySQL Instances on One Machine”.

5.6.2.1. Starting Multiple MySQL Instances at the Windows Command Line

The procedure for starting a single MySQL server manually from the command line is described in Section 2.3.5.5, “Starting MySQL from the Windows Command Line”. To start multiple servers this way, you can specify the appropriate options on the command line or in an option file. It is more convenient to place the options in an option file, but it is necessary to make sure that each server gets its own set of options. To do this, create an option file for each server and tell the server the file name with a --defaults-file option when you run it.

Suppose that you want to run mysqld on port 3307 with a data directory of C:\mydata1, and mysqld-debug on port 3308 with a data directory of C:\mydata2. Use this procedure:

  1. Make sure that each data directory exists, including its own copy of the mysql database that contains the grant tables.

  2. Create two option files. For example, create one file named C:\my-opts1.cnf that looks like this:

    [mysqld]
    datadir = C:/mydata1
    port = 3307

    Create a second file named C:\my-opts2.cnf that looks like this:

    [mysqld]
    datadir = C:/mydata2
    port = 3308
  3. Use the --defaults-file option to start each server with its own option file:

    C:\> C:\mysql\bin\mysqld --defaults-file=C:\my-opts1.cnf
    C:\> C:\mysql\bin\mysqld-debug --defaults-file=C:\my-opts2.cnf
    

    Each server starts in the foreground (no new prompt appears until the server exits later), so you will need to issue those two commands in separate console windows.

To shut down the servers, connect to each using the appropriate port number:

C:\> C:\mysql\bin\mysqladmin --port=3307 shutdown
C:\> C:\mysql\bin\mysqladmin --port=3308 shutdown

Servers configured as just described permit clients to connect over TCP/IP. If your version of Windows supports named pipes and you also want to permit named-pipe connections, use the mysqld or mysqld-debug server and specify options that enable the named pipe and specify its name. Each server that supports named-pipe connections must use a unique pipe name. For example, the C:\my-opts1.cnf file might be written like this:

[mysqld]
datadir = C:/mydata1
port = 3307
enable-named-pipe
socket = mypipe1

Modify C:\my-opts2.cnf similarly for use by the second server. Then start the servers as described previously.

A similar procedure applies for servers for which you want to permit shared-memory connections. Enable such connections with the --shared-memory option and specify a unique shared-memory name for each server with the --shared-memory-base-name option.
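
For example, C:\my-opts1.cnf might be extended like this (the shared-memory name MYSQL1 is only an illustrative choice):

[mysqld]
datadir = C:/mydata1
port = 3307
enable-named-pipe
socket = mypipe1
shared-memory
shared-memory-base-name = MYSQL1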

5.6.2.2. Starting Multiple MySQL Instances as Windows Services

On Windows, a MySQL server can run as a Windows service. The procedures for installing, controlling, and removing a single MySQL service are described in Section 2.3.5.7, “Starting MySQL as a Windows Service”.

To set up multiple MySQL services, you must make sure that each instance uses a different service name in addition to the other parameters that must be unique per instance.

For the following instructions, suppose that you want to run the mysqld server from two different versions of MySQL that are installed at C:\mysql-5.5.9 and C:\mysql-5.7.3, respectively. (This might be the case if you are running 5.5.9 as your production server, but also want to conduct tests using 5.7.3.)

To install MySQL as a Windows service, use the --install or --install-manual option. For information about these options, see Section 2.3.5.7, “Starting MySQL as a Windows Service”.

Based on the preceding information, you have several ways to set up multiple services. The following instructions describe some examples. Before trying any of them, shut down and remove any existing MySQL services.

  • Approach 1: Specify the options for all services in one of the standard option files. To do this, use a different service name for each server. Suppose that you want to run the 5.5.9 mysqld using the service name of mysqld1 and the 5.7.3 mysqld using the service name mysqld2. In this case, you can use the [mysqld1] group for 5.5.9 and the [mysqld2] group for 5.7.3. For example, you can set up C:\my.cnf like this:

    # options for mysqld1 service
    [mysqld1]
    basedir = C:/mysql-5.5.9
    port = 3307
    enable-named-pipe
    socket = mypipe1
    
    # options for mysqld2 service
    [mysqld2]
    basedir = C:/mysql-5.7.3
    port = 3308
    enable-named-pipe
    socket = mypipe2

    Install the services as follows, using the full server path names to ensure that Windows registers the correct executable program for each service:

    C:\> C:\mysql-5.5.9\bin\mysqld --install mysqld1
    C:\> C:\mysql-5.7.3\bin\mysqld --install mysqld2
    

    To start the services, use the services manager, or use NET START with the appropriate service names:

    C:\> NET START mysqld1
    C:\> NET START mysqld2
    

    To stop the services, use the services manager, or use NET STOP with the appropriate service names:

    C:\> NET STOP mysqld1
    C:\> NET STOP mysqld2
    
  • Approach 2: Specify options for each server in separate files and use --defaults-file when you install the services to tell each server what file to use. In this case, each file should list options using a [mysqld] group.

    With this approach, to specify options for the 5.5.9 mysqld, create a file C:\my-opts1.cnf that looks like this:

    [mysqld]
    basedir = C:/mysql-5.5.9
    port = 3307
    enable-named-pipe
    socket = mypipe1

    For the 5.7.3 mysqld, create a file C:\my-opts2.cnf that looks like this:

    [mysqld]
    basedir = C:/mysql-5.7.3
    port = 3308
    enable-named-pipe
    socket = mypipe2

    Install the services as follows (enter each command on a single line):

    C:\> C:\mysql-5.5.9\bin\mysqld --install mysqld1
               --defaults-file=C:\my-opts1.cnf
    C:\> C:\mysql-5.7.3\bin\mysqld --install mysqld2
               --defaults-file=C:\my-opts2.cnf
    

    When you install a MySQL server as a service and use a --defaults-file option, the service name must precede the option.

    After installing the services, start and stop them the same way as in the preceding example.

To remove multiple services, use mysqld --remove for each one, specifying a service name following the --remove option. If the service name is the default (MySQL), you can omit it.
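
For example, to remove the two services installed in the preceding examples, you might use commands such as these:

C:\> C:\mysql-5.5.9\bin\mysqld --remove mysqld1
C:\> C:\mysql-5.7.3\bin\mysqld --remove mysqld2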

5.6.3. Running Multiple MySQL Instances on Unix

One way to run multiple MySQL instances on Unix is to compile different servers with different default TCP/IP ports and Unix socket files so that each one listens on different network interfaces. Compiling in different base directories for each installation also automatically results in a separate, compiled-in data directory, log file, and PID file location for each server.

Assume that an existing 5.6 server is configured for the default TCP/IP port number (3306) and Unix socket file (/tmp/mysql.sock). To configure a new 5.7.3 server to have different operating parameters, use a CMake command something like this:

shell> cmake . -DMYSQL_TCP_PORT=port_number \
             -DMYSQL_UNIX_ADDR=file_name \
             -DCMAKE_INSTALL_PREFIX=/usr/local/mysql-5.7.3

Here, port_number and file_name must be different from the default TCP/IP port number and Unix socket file path name, and the CMAKE_INSTALL_PREFIX value should specify an installation directory different from the one under which the existing MySQL installation is located.

If you have a MySQL server listening on a given port number, you can use the following command to find out what operating parameters it is using for several important configurable variables, including the base directory and Unix socket file name:

shell> mysqladmin --host=host_name --port=port_number variables

With the information displayed by that command, you can tell what option values not to use when configuring an additional server.

If you specify localhost as the host name, mysqladmin defaults to using a Unix socket file connection rather than TCP/IP. To explicitly specify the connection protocol, use the --protocol={TCP|SOCKET|PIPE|MEMORY} option.

You need not compile a new MySQL server just to start with a different Unix socket file and TCP/IP port number. It is also possible to use the same server binary and start each invocation of it with different parameter values at runtime. One way to do so is by using command-line options:

shell> mysqld_safe --socket=file_name --port=port_number

To start a second server, provide different --socket and --port option values, and pass a --datadir=path option to mysqld_safe so that the server uses a different data directory.
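
For example, a second instance might be started with a command such as the following (the socket file, port number, and data directory shown here are placeholders for values appropriate to your system):

shell> mysqld_safe --socket=/tmp/mysql2.sock --port=3307 \
           --datadir=/usr/local/mysql/data2 &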

Alternatively, put the options for each server in a different option file, then start each server using a --defaults-file option that specifies the path to the appropriate option file. For example, if the option files for two server instances are named /usr/local/mysql/my.cnf and /usr/local/mysql/my.cnf2, start the servers like this:

shell> mysqld_safe --defaults-file=/usr/local/mysql/my.cnf
shell> mysqld_safe --defaults-file=/usr/local/mysql/my.cnf2

Another way to achieve a similar effect is to use environment variables to set the Unix socket file name and TCP/IP port number:

shell> MYSQL_UNIX_PORT=/tmp/mysqld-new.sock
shell> MYSQL_TCP_PORT=3307
shell> export MYSQL_UNIX_PORT MYSQL_TCP_PORT
shell> mysql_install_db --user=mysql
shell> mysqld_safe --datadir=/path/to/datadir &

This is a quick way of starting a second server to use for testing. The nice thing about this method is that the environment variable settings apply to any client programs that you invoke from the same shell. Thus, connections for those clients are automatically directed to the second server.

Section 2.12, “Environment Variables”, includes a list of other environment variables you can use to affect MySQL programs.

On Unix, the mysqld_multi script provides another way to start multiple servers. See Section 4.3.4, “mysqld_multi — Manage Multiple MySQL Servers”.
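
For example, mysqld_multi reads numbered [mysqldN] option groups, so an option file might define two instances like this (the socket paths, port numbers, and data directories shown here are placeholder values; see Section 4.3.4 for the full option syntax):

[mysqld1]
socket  = /tmp/mysql1.sock
port    = 3307
datadir = /usr/local/mysql/data1

[mysqld2]
socket  = /tmp/mysql2.sock
port    = 3308
datadir = /usr/local/mysql/data2

You could then start both instances with a single command:

shell> mysqld_multi start 1,2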

5.6.4. Using Client Programs in a Multiple-Server Environment

To connect with a client program to a MySQL server that is listening to different network interfaces from those compiled into your client, you can use one of the following methods:

  • Start the client with --host=host_name --port=port_number to connect using TCP/IP to a remote server, with --host=127.0.0.1 --port=port_number to connect using TCP/IP to a local server, or with --host=localhost --socket=file_name to connect to a local server using a Unix socket file or a Windows named pipe.

  • Start the client with --protocol=TCP to connect using TCP/IP, --protocol=SOCKET to connect using a Unix socket file, --protocol=PIPE to connect using a named pipe, or --protocol=MEMORY to connect using shared memory. For TCP/IP connections, you may also need to specify --host and --port options. For the other types of connections, you may need to specify a --socket option to specify a Unix socket file or Windows named-pipe name, or a --shared-memory-base-name option to specify the shared-memory name. Shared-memory connections are supported only on Windows.

  • On Unix, set the MYSQL_UNIX_PORT and MYSQL_TCP_PORT environment variables to point to the Unix socket file and TCP/IP port number before you start your clients. If you normally use a specific socket file or port number, you can place commands to set these environment variables in your .login file so that they apply each time you log in. See Section 2.12, “Environment Variables”.

  • Specify the default Unix socket file and TCP/IP port number in the [client] group of an option file. For example, you can use C:\my.cnf on Windows, or the .my.cnf file in your home directory on Unix. See Section 4.2.3.3, “Using Option Files”.
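
    For example, on Unix a .my.cnf file might contain lines like these (the socket path and port number shown here are placeholders for the values used by your second server):

    [client]
    socket = /tmp/mysql2.sock
    port = 3307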

  • In a C program, you can specify the socket file or port number arguments in the mysql_real_connect() call. You can also have the program read option files by calling mysql_options(). See Section 21.8.7, “C API Function Descriptions”.

  • If you are using the Perl DBD::mysql module, you can read options from MySQL option files. For example:

    $dsn = "DBI:mysql:test;mysql_read_default_group=client;"
            . "mysql_read_default_file=/usr/local/mysql/data/my.cnf";
    $dbh = DBI->connect($dsn, $user, $password);

    See Section 21.10, “MySQL Perl API”.

    Other programming interfaces may provide similar capabilities for reading option files.

5.7. Tracing mysqld Using DTrace

The DTrace probes in the MySQL server are designed to provide information about the execution of queries within MySQL and the different areas of the system being utilized during that process. The organization and triggering of the probes means that the execution of an entire query can be monitored with one level of probes (query-start and query-done) but by monitoring other probes you can get successively more detailed information about the execution of the query in terms of the locks used, sort methods and even row-by-row and storage-engine level execution information.

The DTrace probes are organized so that you can follow the entire query process, from the point of connection from a client, through the query execution, row-level operations, and back out again. You can think of the probes as being fired within a specific sequence during a typical client connect/execute/disconnect sequence, as shown in the following figure.

Figure 5.1. DTrace Probe Structure in mysqld

Global information is provided in the arguments to the DTrace probes at key levels: the connection ID, the user and host, and, where relevant, the query string are available at connection-start, command-start, query-start, and query-exec-start. As you go deeper into the probes, it is assumed either that you are only interested in the individual executions (the row-level probes provide information on the database and table name only), or that you will combine the row-level probes with their notional parent probes to provide the information about a specific query. Examples of this are given as the format and arguments of each probe are described.

For more information on DTrace and writing DTrace scripts, read the DTrace User Guide.

MySQL 5.7 includes support for DTrace probes on Solaris 10 Update 5 (Solaris 5/08) on SPARC, x86 and x86_64 platforms. Probes are also supported on Mac OS X 10.4 and higher. Enabling the probes should be automatic on these platforms. To explicitly enable or disable the probes during building, use the -DENABLE_DTRACE=1 or -DENABLE_DTRACE=0 option to CMake.
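
For example, to configure a build with the probes explicitly enabled:

shell> cmake . -DENABLE_DTRACE=1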

5.7.1. mysqld DTrace Probe Reference

MySQL supports the following static probes, organized into groups of functionality.

Table 5.9. MySQL DTrace Probes

Group            Probes
Connection       connection-start, connection-done
Command          command-start, command-done
Query            query-start, query-done
Query Parsing    query-parse-start, query-parse-done
Query Cache      query-cache-hit, query-cache-miss
Query Execution  query-exec-start, query-exec-done
Row Level        insert-row-start, insert-row-done
                 update-row-start, update-row-done
                 delete-row-start, delete-row-done
Row Reads        read-row-start, read-row-done
Index Reads      index-read-row-start, index-read-row-done
Lock             handler-rdlock-start, handler-rdlock-done
                 handler-wrlock-start, handler-wrlock-done
                 handler-unlock-start, handler-unlock-done
Filesort         filesort-start, filesort-done
Statement        select-start, select-done
                 insert-start, insert-done
                 insert-select-start, insert-select-done
                 update-start, update-done
                 multi-update-start, multi-update-done
                 delete-start, delete-done
                 multi-delete-start, multi-delete-done
Network          net-read-start, net-read-done, net-write-start, net-write-done
Keycache         keycache-read-start, keycache-read-block, keycache-read-done,
                 keycache-read-hit, keycache-read-miss, keycache-write-start,
                 keycache-write-block, keycache-write-done

Note

When extracting the argument data from the probes, each argument is available as argN, starting with arg0. Within the probe definitions, each argument is given a descriptive name to identify its purpose, but you must access the information using the corresponding argN parameter.

5.7.1.1. Connection Probes

The connection-start and connection-done probes enclose a connection from a client, regardless of whether the connection is made through a socket file or over the network.

connection-start(connectionid, user, host)
connection-done(status, connectionid)
  • connection-start: Triggered after a connection and successful login/authentication have been completed by a client. The arguments contain the connection information:

    • connectionid: An unsigned long containing the connection ID. This is the same as the process ID shown as the Id value in the output from SHOW PROCESSLIST.

    • user: The username used when authenticating. The value will be blank for the anonymous user.

    • host: The host of the client connection. For a connection made using UNIX sockets, the value will be blank.

  • connection-done: Triggered just as the connection to the client has been closed. The arguments are:

    • status: The status of the connection when it was closed. A logout operation will have a value of 0; any other termination of the connection has a nonzero value.

    • connectionid: The connection ID of the connection that was closed.

The following D script summarizes the duration of individual connections as a quantized distribution with counts, dumping the information every 60 seconds:

#!/usr/sbin/dtrace -s


mysql*:::connection-start
{
  self->start = timestamp;
}

mysql*:::connection-done
/self->start/
{
  @ = quantize(((timestamp - self->start)/1000000));
  self->start = 0;
}

tick-60s
{
  printa(@);
}

When executed on a server with a large number of clients you might see output similar to this:

  1  57413                        :tick-60s

           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 30011
               1 |                                         59
               2 |                                         5
               4 |                                         20
               8 |                                         29
              16 |                                         18
              32 |                                         27
              64 |                                         30
             128 |                                         11
             256 |                                         10
             512 |                                         1
            1024 |                                         6
            2048 |                                         8
            4096 |                                         9
            8192 |                                         8
           16384 |                                         2
           32768 |                                         1
           65536 |                                         1
          131072 |                                         0
          262144 |                                         1
          524288 |                                         0

5.7.1.2. Command Probes

The command probes are executed before and after a client command is executed, including any SQL statement that might be executed during that period. Commands include operations such as the initialization of the DB, use of the COM_CHANGE_USER operation (supported by the MySQL protocol), and manipulation of prepared statements. Many of these commands are used only by the MySQL client API from various connectors such as PHP and Java.

command-start(connectionid, command, user, host)
command-done(status)
  • command-start: Triggered when a command is submitted to the server.

    • connectionid: The connection ID of the client executing the command.

    • command: An integer representing the command that was executed. Possible values are shown in the following table.

      Value  Name                      Description
      00     COM_SLEEP                 Internal thread state
      01     COM_QUIT                  Close connection
      02     COM_INIT_DB               Select database (USE ...)
      03     COM_QUERY                 Execute a query
      04     COM_FIELD_LIST            Get a list of fields
      05     COM_CREATE_DB             Create a database (deprecated)
      06     COM_DROP_DB               Drop a database (deprecated)
      07     COM_REFRESH               Refresh connection
      08     COM_SHUTDOWN              Shutdown server
      09     COM_STATISTICS            Get statistics
      10     COM_PROCESS_INFO          Get processes (SHOW PROCESSLIST)
      11     COM_CONNECT               Initialize connection
      12     COM_PROCESS_KILL          Kill process
      13     COM_DEBUG                 Get debug information
      14     COM_PING                  Ping
      15     COM_TIME                  Internal thread state
      16     COM_DELAYED_INSERT        Internal thread state
      17     COM_CHANGE_USER           Change user
      18     COM_BINLOG_DUMP           Used by a replication slave or mysqlbinlog to initiate a binary log read
      19     COM_TABLE_DUMP            Used by a replication slave to get the master table information
      20     COM_CONNECT_OUT           Used by a replication slave to log a connection to the server
      21     COM_REGISTER_SLAVE        Used by a replication slave during registration
      22     COM_STMT_PREPARE          Prepare a statement
      23     COM_STMT_EXECUTE          Execute a statement
      24     COM_STMT_SEND_LONG_DATA   Used by a client when requesting extended data
      25     COM_STMT_CLOSE            Close a prepared statement
      26     COM_STMT_RESET            Reset a prepared statement
      27     COM_SET_OPTION            Set a server option
      28     COM_STMT_FETCH            Fetch a prepared statement
    • user: The user executing the command.

    • host: The client host.

  • command-done: Triggered when the command execution completes. The status argument contains 0 if the command executed successfully, or 1 if the statement was terminated before normal completion.

The command-start and command-done probes are best used when combined with the statement probes to get an idea of overall execution time.
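
For example, the following script is a simple sketch (the aggregation names and the 30-second reporting interval are arbitrary choices) that counts the commands received and sums the time spent executing each command type, keyed on the numeric command value shown in the preceding table:

#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Record the start time and command code for the current thread */
mysql*:::command-start
{
   self->cmdstart = timestamp;
   self->cmd = arg1;
}

/* Aggregate a call count and total elapsed time (ms) per command code */
mysql*:::command-done
/self->cmdstart/
{
   @counts[self->cmd] = count();
   @times[self->cmd] = sum((timestamp - self->cmdstart) / 1000000);
   self->cmdstart = 0;
   self->cmd = 0;
}

/* Report the aggregated values every 30 seconds */
tick-30s
{
   printa("command %2d: %@8d calls\n", @counts);
   printa("command %2d: %@8d ms total\n", @times);
}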

5.7.1.3. Query Probes

The query-start and query-done probes are triggered when a specific query is received by the server and when the query has been completed and the information has been successfully sent to the client.

query-start(query, connectionid, database, user, host)
query-done(status)
  • query-start: Triggered after the query string has been received from the client. The arguments are:

    • query: The full text of the submitted query.

    • connectionid: The connection ID of the client that submitted the query. The connection ID equals the connection ID returned when the client first connects and the Id value in the output from SHOW PROCESSLIST.

    • database: The database name on which the query is being executed.

    • user: The username used to connect to the server.

    • host: The hostname of the client.

  • query-done: Triggered once the query has been executed and the information has been returned to the client. The probe includes a single argument, status, which returns 0 when the query is successfully executed and 1 if there was an error.

You can get a simple report of the execution time for each query using the following D script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-20s %-20s %-40s %-9s\n", "Who", "Database", "Query", "Time(ms)");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->connid = arg1;
   self->db    = copyinstr(arg2);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->querystart = timestamp;
}

mysql*:::query-done
{
   printf("%-20s %-20s %-40s %-9d\n",self->who,self->db,self->query,
          (timestamp - self->querystart) / 1000000);
}

When executing the above script you should get a basic idea of the execution time of your queries:

shell> ./query.d
Who                  Database             Query                                    Time(ms)
root@localhost       test                 select * from t1 order by i limit 10     0
root@localhost       test                 set global query_cache_size=0            0
root@localhost       test                 select * from t1 order by i limit 10     776
root@localhost       test                 select * from t1 order by i limit 10     773
root@localhost       test                 select * from t1 order by i desc limit 10 795 

5.7.1.4. Query Parsing Probes

The query parsing probes are triggered just before the original SQL statement is parsed, and again when parsing of the statement has been completed and the execution model required to process the statement has been determined:

query-parse-start(query)
query-parse-done(status)
  • query-parse-start: Triggered just before the statement is parsed by the MySQL query parser. The single argument, query, is a string containing the full text of the original query.

  • query-parse-done: Triggered when the parsing of the original statement has been completed. The status is an integer describing the status of the operation. A 0 indicates that the query was successfully parsed. A 1 indicates that the parsing of the query failed.

For example, you could monitor the execution time for parsing a given query using the following D script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

mysql*:::query-parse-start
{
   self->parsestart = timestamp;
   self->parsequery = copyinstr(arg0);
}

mysql*:::query-parse-done
/arg0 == 0/
{
   printf("Parsing %s: %d microseconds\n", self->parsequery,((timestamp - self->parsestart)/1000));
}

mysql*:::query-parse-done
/arg0 != 0/
{
   printf("Error parsing %s: %d microseconds\n", self->parsequery,((timestamp - self->parsestart)/1000));
}

In the above script a predicate is used on query-parse-done so that different output is generated based on the status value of the probe.

When running the script and monitoring the execution:

shell> ./query-parsing.d
Error parsing select from t1 join (t2) on (t1.i = t2.i) order by t1.s,t1.i limit 10: 36 microseconds
Parsing select * from t1 join (t2) on (t1.i = t2.i) order by t1.s,t1.i limit 10: 176 microseconds

5.7.1.5. Query Cache Probes

The query cache probes are fired when executing any query. The query-cache-hit probe is triggered when a query exists in the query cache and can be used to return the query cache information. The arguments contain the original query text and the number of rows returned from the query cache for the query. If the query is not within the query cache, or the query cache is not enabled, the query-cache-miss probe is triggered instead.

query-cache-hit(query, rows)
query-cache-miss(query)
  • query-cache-hit: Triggered when the query has been found within the query cache. The first argument, query, contains the original text of the query. The second argument, rows, is an integer containing the number of rows in the cached query.

  • query-cache-miss: Triggered when the query is not found within the query cache. The first argument, query, contains the original text of the query.

The query cache probes are best combined with a probe on the main query so that you can determine the differences in times between using or not using the query cache for specified queries. For example, in the following D script, the query and query cache information are combined into the information output during monitoring:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-20s %-20s %-40s %2s %-9s\n", "Who", "Database", "Query", "QC", "Time(ms)");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->connid = arg1;
   self->db    = copyinstr(arg2);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->querystart = timestamp;
   self->qc = 0;
}

mysql*:::query-cache-hit
{
   self->qc = 1;
}

mysql*:::query-cache-miss
{
   self->qc = 0;
}

mysql*:::query-done
{
   printf("%-20s %-20s %-40s %-2s %-9d\n",self->who,self->db,self->query,(self->qc ? "Y" : "N"),
          (timestamp - self->querystart) / 1000000);
}

When executing the script you can see the effects of the query cache. Initially the query cache is disabled. If you set the query cache size and then execute the query multiple times you should see that the query cache is being used to return the query data:

shell> ./query-cache.d
root@localhost       test                 select * from t1 order by i limit 10     N  1072
root@localhost                            set global query_cache_size=262144       N  0
root@localhost       test                 select * from t1 order by i limit 10     N  781
root@localhost       test                 select * from t1 order by i limit 10     Y  0 

5.7.1.6. Query Execution Probes

The query execution probes are triggered when the actual execution of the query starts, after parsing and query cache checking have completed but before any privilege checks or optimization. By comparing the difference between the start and done probes, you can monitor the time actually spent servicing the query (instead of just handling the parsing and other elements of the query), as shown in the example script following the argument descriptions below.

query-exec-start(query, connectionid, database, user, host, exec_type)
query-exec-done(status)
Note

The information provided in the arguments for query-start and query-exec-start are almost identical and designed so that you can choose to monitor either the entire query process (using query-start) or only the execution (using query-exec-start) while exposing the core information about the user, client, and query being executed.

  • query-exec-start: Triggered when the execution of an individual query is started. The arguments are:

    • query: The full text of the submitted query.

    • connectionid: The connection ID of the client that submitted the query. The connection ID equals the connection ID returned when the client first connects and the Id value in the output from SHOW PROCESSLIST.

    • database: The database name on which the query is being executed.

    • user: The username used to connect to the server.

    • host: The hostname of the client.

    • exec_type: The type of execution. Execution types are determined based on the contents of the query and where it was submitted. The values for each type are shown in the following table.

      Value  Description
      0      Executed query from sql_parse, top-level query.
      1      Executed prepared statement
      2      Executed cursor statement
      3      Executed query in stored procedure
  • query-exec-done: Triggered when the execution of the query has completed. The probe includes a single argument, status, which returns 0 when the query is successfully executed and 1 if there was an error.
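
As a simple sketch of how these probes can be combined with the query probes (the output format here is arbitrary), the following script reports both the total query time and the portion of it spent purely in execution:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-9s %-9s %-s\n", "Total ms", "Exec ms", "Query");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->querystart = timestamp;
   self->execdur = 0;
}

mysql*:::query-exec-start
{
   self->execstart = timestamp;
}

/* Record the execution-only duration in milliseconds */
mysql*:::query-exec-done
/self->execstart/
{
   self->execdur = (timestamp - self->execstart) / 1000000;
   self->execstart = 0;
}

/* Print the total and execution-only times when the query completes */
mysql*:::query-done
/self->querystart/
{
   printf("%-9d %-9d %s\n",
          (timestamp - self->querystart) / 1000000,
          self->execdur, self->query);
   self->querystart = 0;
}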

5.7.1.7. Row-Level Probes

The *row-{start,done} probes are triggered each time a row operation is pushed down to a storage engine. For example, if you execute an INSERT statement with 100 rows of data, then the insert-row-start and insert-row-done probes will be triggered 100 times each, for each row insert.

insert-row-start(database, table)
insert-row-done(status)

update-row-start(database, table)
update-row-done(status)

delete-row-start(database, table)
delete-row-done(status)
  • insert-row-start: Triggered before a row is inserted into a table.

  • insert-row-done: Triggered after a row is inserted into a table.

  • update-row-start: Triggered before a row is updated in a table.

  • update-row-done: Triggered after a row is updated in a table.

  • delete-row-start: Triggered before a row is deleted from a table.

  • delete-row-done: Triggered after a row is deleted from a table.

The arguments supported by the probes are consistent for the corresponding start and done probes in each case:

  • database: The database name.

  • table: The table name.

  • status: The status; 0 for success or 1 for failure.

Because the row-level probes are triggered for each individual row access, these probes can fire many thousands of times each second, which may have a detrimental effect on both the monitoring script and MySQL. To prevent performance from being adversely affected, either use the probes sparingly, or use counter or aggregation functions to record information about these probes and then provide a summary when the script terminates or as part of the query-done or query-exec-done probes.

The following example script summarizes the duration of each row operation within a larger query:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-2s %-10s %-10s %9s %9s %-s \n",
          "St", "Who", "DB", "ConnID", "Dur ms", "Query");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->db    = copyinstr(arg2);
   self->connid = arg1;
   self->querystart = timestamp;
   self->rowdur = 0;
}

mysql*:::query-done
{
   this->elapsed = (timestamp - self->querystart) /1000000;
   printf("%2d %-10s %-10s %9d %9d %s\n",
          arg0, self->who, self->db,
          self->connid, this->elapsed, self->query);
}

mysql*:::query-done
/ self->rowdur /
{
   printf("%34s %9d %s\n", "", (self->rowdur/1000000), "-> Row ops");
}

mysql*:::insert-row-start
{
   self->rowstart = timestamp;
}

mysql*:::delete-row-start
{
   self->rowstart = timestamp;
}

mysql*:::update-row-start
{
   self->rowstart = timestamp;
}

mysql*:::insert-row-done
{
   self->rowdur += (timestamp-self->rowstart);
}

mysql*:::delete-row-done
{
   self->rowdur += (timestamp-self->rowstart);
}

mysql*:::update-row-done
{
   self->rowdur += (timestamp-self->rowstart);
}

Running the above script with a query that inserts data into a table, you can monitor the exact time spent performing the raw row insertion:

St Who        DB            ConnID    Dur ms Query
 0 @localhost test              13     20767 insert into t1(select * from t2)
                                        4827 -> Row ops

5.7.1.8. Read Row Probes

The read row probes are triggered at a storage engine level each time a row read operation occurs. These probes are specified within each storage engine (as opposed to the *row-start probes which are in the storage engine interface). These probes can therefore be used to monitor individual storage engine row-level operations and performance. Because these probes are triggered around the storage engine row read interface, they may be hit a significant number of times during a basic query.

read-row-start(database, table, scan_flag)
read-row-done(status)
  • read-row-start: Triggered when a row is read by the storage engine from the specified database and table. The scan_flag is set to 1 (true) when the read is part of a table scan (that is, a sequential read), or 0 (false) when the read is of a specific record.

  • read-row-done: Triggered when a row read operation within a storage engine completes. The status returns 0 on success, or a positive value on failure.
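
For example, the following script is a sketch (the aggregation names are arbitrary) that uses the scan_flag argument to count sequential and keyed row reads per table, printing the totals when the script exits:

#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Count sequential (table scan) row reads per database and table */
mysql*:::read-row-start
/arg2 != 0/
{
   @scans[copyinstr(arg0), copyinstr(arg1)] = count();
}

/* Count keyed (specific record) row reads per database and table */
mysql*:::read-row-start
/arg2 == 0/
{
   @keyed[copyinstr(arg0), copyinstr(arg1)] = count();
}

/* Print the per-table totals when the script terminates */
dtrace:::END
{
   printf("Table scan row reads:\n");
   printa("  %s.%s %@d\n", @scans);
   printf("Keyed row reads:\n");
   printa("  %s.%s %@d\n", @keyed);
}

A high proportion of table-scan reads for a given table may indicate that queries against it would benefit from an index.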

5.7.1.9. Index Probes

The index probes are triggered each time a row is read using one of the indexes for the specified table. The probe is triggered within the corresponding storage engine for the table.

index-read-row-start(database, table)
index-read-row-done(status)
  • index-read-row-start: Triggered when a row is read by the storage engine from the specified database and table.

  • index-read-row-done: Triggered when an indexed row read operation within a storage engine completes. The status returns 0 on success, or a positive value on failure.
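
For example, the following short script (a sketch only) counts indexed row reads per database and table, printing the totals when the script exits:

#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Count indexed row reads per database and table */
mysql*:::index-read-row-start
{
   @reads[copyinstr(arg0), copyinstr(arg1)] = count();
}

/* Print the per-table totals when the script terminates */
dtrace:::END
{
   printa("%s.%s %@d index row reads\n", @reads);
}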

5.7.1.10. Lock Probes

The lock probes are called whenever an external lock is requested by MySQL for a table, using the corresponding lock mechanism on the table as defined by the table's engine type. There are three different operations: read lock, write lock, and unlock. Using the probes, you can determine the duration of the external locking routine (that is, the time taken by the storage engine to implement the lock, including any time waiting for another lock to become free) and the total duration of the lock/unlock process.

handler-rdlock-start(database, table)
handler-rdlock-done(status)

handler-wrlock-start(database, table)
handler-wrlock-done(status)

handler-unlock-start(database, table)
handler-unlock-done(status)
  • handler-rdlock-start: Triggered when a read lock is requested on the specified database and table.

  • handler-wrlock-start: Triggered when a write lock is requested on the specified database and table.

  • handler-unlock-start: Triggered when an unlock request is made on the specified database and table.

  • handler-rdlock-done: Triggered when a read lock request completes. The status is 0 if the lock operation succeeded, or >0 on failure.

  • handler-wrlock-done: Triggered when a write lock request completes. The status is 0 if the lock operation succeeded, or >0 on failure.

  • handler-unlock-done: Triggered when an unlock request completes. The status is 0 if the unlock operation succeeded, or >0 on failure.

You can use arrays to monitor the locking and unlocking of individual tables and then calculate the duration of the entire table lock using the following script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

mysql*:::handler-rdlock-start
{
   self->rdlockstart = timestamp;
   this->lockref = strjoin(copyinstr(arg0),strjoin("@",copyinstr(arg1)));
   self->lockmap[this->lockref] = self->rdlockstart;
   printf("Start: Lock->Read   %s.%s\n",copyinstr(arg0),copyinstr(arg1));
}

mysql*:::handler-wrlock-start
{
   self->wrlockstart = timestamp;
   this->lockref = strjoin(copyinstr(arg0),strjoin("@",copyinstr(arg1)));
   self->lockmap[this->lockref] = self->wrlockstart;
   printf("Start: Lock->Write  %s.%s\n",copyinstr(arg0),copyinstr(arg1));
}

mysql*:::handler-unlock-start
{
   self->unlockstart = timestamp;
   this->lockref = strjoin(copyinstr(arg0),strjoin("@",copyinstr(arg1)));
   printf("Start: Lock->Unlock %s.%s (%d ms lock duration)\n",
          copyinstr(arg0),copyinstr(arg1),
          (timestamp - self->lockmap[this->lockref])/1000000);
}

mysql*:::handler-rdlock-done
{
   printf("End:   Lock->Read   %d ms\n",
          (timestamp - self->rdlockstart)/1000000);
}

mysql*:::handler-wrlock-done
{
   printf("End:   Lock->Write  %d ms\n",
          (timestamp - self->wrlockstart)/1000000);
}

mysql*:::handler-unlock-done
{
   printf("End:   Lock->Unlock %d ms\n",
          (timestamp - self->unlockstart)/1000000);
}

When executed, you should get information both about the duration of the locking process itself, and of the locks on a specific table:

Start: Lock->Read   test.t2
End:   Lock->Read   0 ms
Start: Lock->Unlock test.t2 (25743 ms lock duration)
End:   Lock->Unlock 0 ms
Start: Lock->Read   test.t2
End:   Lock->Read   0 ms
Start: Lock->Unlock test.t2 (1 ms lock duration)
End:   Lock->Unlock 0 ms
Start: Lock->Read   test.t2
End:   Lock->Read   0 ms
Start: Lock->Unlock test.t2 (1 ms lock duration)
End:   Lock->Unlock 0 ms
Start: Lock->Read   test.t2
End:   Lock->Read   0 ms

5.7.1.11. Filesort Probes

The filesort probes are triggered whenever a filesort operation is applied to a table. For more information on filesort and the conditions under which it occurs, see Section 8.2.1.15, “ORDER BY Optimization”.

filesort-start(database, table)
filesort-done(status, rows)
  • filesort-start: Triggered when the filesort operation starts on a table. The two arguments to the probe, database and table, will identify the table being sorted.

  • filesort-done: Triggered when the filesort operation completes. Two arguments are supplied, the status (0 for success, 1 for failure), and the number of rows sorted during the filesort process.

An example of this is in the following script, which tracks the duration of the filesort process in addition to the duration of the main query:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-2s %-10s %-10s %9s %18s %-s \n",
          "St", "Who", "DB", "ConnID", "Dur microsec", "Query");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->db    = copyinstr(arg2);
   self->connid = arg1;
   self->querystart = timestamp;
   self->filesort = 0;
   self->fsdb = "";
   self->fstable = "";
}

mysql*:::filesort-start
{
  self->filesort = timestamp;
  self->fsdb = copyinstr(arg0);
  self->fstable = copyinstr(arg1);
}

mysql*:::filesort-done
{
   this->elapsed = (timestamp - self->filesort) /1000;
   printf("%2d %-10s %-10s %9d %18d Filesort on %s\n",
          arg0, self->who, self->fsdb,
          self->connid, this->elapsed, self->fstable);
}

mysql*:::query-done
{
   this->elapsed = (timestamp - self->querystart) /1000;
   printf("%2d %-10s %-10s %9d %18d %s\n",
          arg0, self->who, self->db,
          self->connid, this->elapsed, self->query);
}

By executing a query with an ORDER BY clause that triggers a filesort on a large table, then creating an index on the table, and then repeating the same query, you can see the difference in execution speed:

St Who        DB            ConnID       Dur microsec Query
 0 @localhost test              14           11335469 Filesort on t1
 0 @localhost test              14           11335787 select * from t1 order by i limit 100
 0 @localhost test              14          466734378 create index t1a on t1 (i)
 0 @localhost test              14              26472 select * from t1 order by i limit 100

5.7.1.12. Statement Probes

The individual statement probes are provided to give specific information about different statement types. For the start probes, the query string is provided as the only argument. Depending on the statement type, the information provided by the corresponding done probe will differ. For all done probes, the status of the operation (0 for success, >0 for failure) is provided. For SELECT, INSERT, INSERT ... (SELECT FROM ...), DELETE, and DELETE FROM t1,t2 operations, the number of rows affected is returned.

For UPDATE and UPDATE t1,t2 ... statements, the number of rows matched and the number of rows actually changed are provided, because the number of rows matched by the corresponding WHERE clause and the number of rows changed can differ. MySQL does not update the value of a row if the value already matches the new setting.

select-start(query)
select-done(status,rows)

insert-start(query)
insert-done(status,rows)

insert-select-start(query)
insert-select-done(status,rows)

update-start(query)
update-done(status,rowsmatched,rowschanged)

multi-update-start(query)
multi-update-done(status,rowsmatched,rowschanged)

delete-start(query)
delete-done(status,rows)

multi-delete-start(query)
multi-delete-done(status,rows)
  • select-start: Triggered before a SELECT statement.

  • select-done: Triggered at the end of a SELECT statement.

  • insert-start: Triggered before a INSERT statement.

  • insert-done: Triggered at the end of an INSERT statement.

  • insert-select-start: Triggered before an INSERT ... SELECT statement.

  • insert-select-done: Triggered at the end of an INSERT ... SELECT statement.

  • update-start: Triggered before an UPDATE statement.

  • update-done: Triggered at the end of an UPDATE statement.

  • multi-update-start: Triggered before an UPDATE statement involving multiple tables.

  • multi-update-done: Triggered at the end of an UPDATE statement involving multiple tables.

  • delete-start: Triggered before a DELETE statement.

  • delete-done: Triggered at the end of a DELETE statement.

  • multi-delete-start: Triggered before a DELETE statement involving multiple tables.

  • multi-delete-done: Triggered at the end of a DELETE statement involving multiple tables.

The arguments for the statement probes are:

  • query: The query string.

  • status: The status of the query. 0 for success, and >0 for failure.

  • rows: The number of rows affected by the statement. This returns the number of rows found for SELECT, the number of rows deleted for DELETE, and the number of rows successfully inserted for INSERT.

  • rowsmatched: The number of rows matched by the WHERE clause of an UPDATE operation.

  • rowschanged: The number of rows actually changed during an UPDATE operation.

You can use these probes to monitor the execution of these statement types without having to monitor the user or client executing the statements. A simple example of this is to track the execution times:

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-60s %-8s %-8s %-8s\n", "Query", "RowsU", "RowsM", "Dur (ms)");
}

mysql*:::update-start, mysql*:::insert-start,
mysql*:::delete-start, mysql*:::multi-delete-start,
mysql*:::select-start, mysql*:::insert-select-start,
mysql*:::multi-update-start
{
    self->query = copyinstr(arg0);
    self->querystart = timestamp;
}

mysql*:::insert-done, mysql*:::select-done,
mysql*:::delete-done, mysql*:::multi-delete-done, mysql*:::insert-select-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           0,
           arg1,
           this->elapsed);
    self->querystart = 0;
}

mysql*:::update-done, mysql*:::multi-update-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           arg1,
           arg2,
           this->elapsed);
    self->querystart = 0;
}

When executed, you can see the basic execution times and the rows matched:

Query                                                        RowsU    RowsM    Dur (ms)
select * from t2                                             0        275      0
insert into t2 (select * from t2)                            0        275      9
update t2 set i=5 where i > 75                               110      110      8
update t2 set i=5 where i < 25                               254      134      12
delete from t2 where i < 5                                   0        0        0

Another alternative is to use the aggregation functions in DTrace to aggregate the execution time of individual statements together:

#!/usr/sbin/dtrace -s

#pragma D option quiet


mysql*:::update-start, mysql*:::insert-start,
mysql*:::delete-start, mysql*:::multi-delete-start,
mysql*:::select-start, mysql*:::insert-select-start,
mysql*:::multi-update-start
{
    self->querystart = timestamp;
}

mysql*:::select-done
{
        @statements["select"] = sum(((timestamp - self->querystart)/1000000));
}

mysql*:::insert-done, mysql*:::insert-select-done
{
        @statements["insert"] = sum(((timestamp - self->querystart)/1000000));
}

mysql*:::update-done, mysql*:::multi-update-done
{
        @statements["update"] = sum(((timestamp - self->querystart)/1000000));
}

mysql*:::delete-done, mysql*:::multi-delete-done
{
        @statements["delete"] = sum(((timestamp - self->querystart)/1000000));
}

tick-30s
{
        printa(@statements);
}

The script just shown aggregates the times spent doing each operation, which could be used to help benchmark a standard suite of tests.

  delete                                                            0
  update                                                            0
  insert                                                           23
  select                                                         2484

  delete                                                            0
  update                                                            0
  insert                                                           39
  select                                                        10744

  delete                                                            0
  update                                                           26
  insert                                                           56
  select                                                        10944

  delete                                                            0
  update                                                           26
  insert                                                         2287
  select                                                        15985

5.7.1.13. Network Probes

The network probes monitor the transfer of information between the MySQL server and clients of all types over the network. The probes are defined as follows:

net-read-start()
net-read-done(status, bytes)
net-write-start(bytes)
net-write-done(status)
  • net-read-start: Triggered when a network read operation is started.

  • net-read-done: Triggered when the network read operation completes. The status is an integer representing the return status for the operation, 0 for success and 1 for failure. The bytes argument is an integer specifying the number of bytes read during the process.

  • net-write-start: Triggered when data is written to a network socket. The single argument, bytes, specifies the number of bytes written to the network socket.

  • net-write-done: Triggered when the network write operation has completed. The single argument, status, is an integer representing the return status for the operation, 0 for success and 1 for failure.

You can use the network probes to monitor the time spent reading from and writing to network clients during execution. The following D script provides an example of this. Both the cumulative time spent in the reads and writes and the number of bytes transferred are calculated. Note that the dynamic variable size has been increased (using the dynvarsize option) to cope with the rapid firing of the individual probes for the network reads/writes.

#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option dynvarsize=4m

dtrace:::BEGIN
{
   printf("%-2s %-30s %-10s %9s %18s %-s \n",
          "St", "Who", "DB", "ConnID", "Dur ms", "Query");
}

mysql*:::query-start
{
   self->query = copyinstr(arg0);
   self->who   = strjoin(copyinstr(arg3),strjoin("@",copyinstr(arg4)));
   self->db    = copyinstr(arg2);
   self->connid = arg1;
   self->querystart = timestamp;
   self->netwrite = 0;
   self->netwritecum = 0;
   self->netwritebase = 0;
   self->netread = 0;
   self->netreadcum = 0;
   self->netreadbase = 0;
}

mysql*:::net-write-start
{
   self->netwrite += arg0;
   self->netwritebase = timestamp;
}

mysql*:::net-write-done
{
   self->netwritecum += (timestamp - self->netwritebase);
   self->netwritebase = 0;
}

mysql*:::net-read-start
{
   self->netreadbase = timestamp;
}

mysql*:::net-read-done
{
   self->netread += arg1;
   self->netreadcum += (timestamp - self->netreadbase);
   self->netreadbase = 0;
}

mysql*:::query-done
{
   this->elapsed = (timestamp - self->querystart) /1000000;
   printf("%2d %-30s %-10s %9d %18d %s\n",
          arg0, self->who, self->db,
          self->connid, this->elapsed, self->query);
   printf("Net read: %d bytes (%d ms) write: %d bytes (%d ms)\n",
               self->netread, (self->netreadcum/1000000),
               self->netwrite, (self->netwritecum/1000000));
}

When executing the above script on a machine with a remote client, you can see that approximately a third of the time spent executing the query is related to writing the query results back to the client.

St Who                            DB            ConnID             Dur ms Query
 0 root@::ffff:192.168.0.108      test              31               3495 select * from t1 limit 1000000
Net read: 0 bytes (0 ms) write: 10000075 bytes (1220 ms)

5.7.1.14. Keycache Probes

The keycache probes are triggered when using the index key cache used with the MyISAM storage engine. Probes exist to monitor when data is read into the keycache, when cached key data is written from the cache back to the index file, and when the keycache is accessed.

Keycache usage indicates when data is read from or written to the index files and the cache, and can be used to monitor how efficiently the memory allocated to the keycache is being used. A high number of keycache reads across a range of queries may indicate that the keycache is too small for the size of data being accessed.

keycache-read-start(filepath, bytes, mem_used, mem_free)
keycache-read-block(bytes)
keycache-read-hit()
keycache-read-miss()
keycache-read-done(mem_used, mem_free)
keycache-write-start(filepath, bytes, mem_used, mem_free)
keycache-write-block(bytes)
keycache-write-done(mem_used, mem_free)

When reading data from the index files into the keycache, the process first initializes the read operation (indicated by keycache-read-start), then loads blocks of data (keycache-read-block); each loaded block either matches the key data requested (keycache-read-hit) or more data needs to be read (keycache-read-miss). Once the read operation has completed, reading stops with the keycache-read-done probe.

Data will be read from the index file into the keycache only when the specified key is not already within the keycache.

  • keycache-read-start: Triggered when the keycache read operation is started. Data is read from the specified filepath, reading the specified number of bytes. The mem_used and mem_free arguments indicate the memory currently used by the keycache and the amount of memory still available within it.

  • keycache-read-block: Triggered when the keycache reads a block of data, of the specified number of bytes, from the index file into the keycache.

  • keycache-read-hit: Triggered when the block of data read from the index file matches the key data requested.

  • keycache-read-miss: Triggered when the block of data read from the index file does not match the key data needed.

  • keycache-read-done: Triggered when the keycache read operation has completed. The mem_used and mem_free arguments indicate the memory currently used by the keycache and the amount of memory still available within it.

Keycache writes occur when the index information is updated during an INSERT, UPDATE, or DELETE operation, and the cached key information is flushed back to the index file.

  • keycache-write-start: Triggered when the keycache write operation is started. Data is written to the specified filepath, writing the specified number of bytes. The mem_used and mem_free arguments indicate the memory currently used by the keycache and the amount of memory still available within it.

  • keycache-write-block: Triggered when the keycache writes a block of data, of the specified number of bytes, to the index file from the keycache.

  • keycache-write-done: Triggered when the keycache write operation has completed. The mem_used and mem_free arguments indicate the memory currently used by the keycache and the amount of memory still available within it.
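
For example, the following script is a sketch (the reporting interval and aggregation labels are arbitrary) that counts keycache read hits and misses and sums the bytes transferred by the block probes, reporting every 10 seconds:

#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Count keycache read hits and misses */
mysql*:::keycache-read-hit
{
   @stats["read hits"] = count();
}

mysql*:::keycache-read-miss
{
   @stats["read misses"] = count();
}

/* Sum the bytes moved between the index files and the keycache */
mysql*:::keycache-read-block
{
   @bytes["bytes read into cache"] = sum(arg0);
}

mysql*:::keycache-write-block
{
   @bytes["bytes written from cache"] = sum(arg0);
}

/* Report the totals every 10 seconds */
tick-10s
{
   printa(@stats);
   printa(@bytes);
}

A consistently high miss count relative to hits may indicate that the key cache is too small for the working set of index data.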