MySQL 5.5 Installation and Configuration (personally tested)


MySQL 5.5.x compilation and installation. Note: 5.5 is almost identical to 5.1; the main difference is that compilation now requires CMake, and the rest is basically the same.

CentOS 6 MySQL 5.5 installation and configuration

1. Installing the required software

2. Installing CMake

3. Installing MySQL from the tar.gz source package

4. Configuration and startup

5. Installing MySQL from RPM packages

6. MySQL configuration parameters in detail

MySQL has been using the CMake compilation tool since the 5.5 release.

Step 1: install the system libraries required for the build:

yum install -y gcc gcc-c++ gcc-g77 autoconf automake zlib* flex* libxml* ncurses-devel libmcrypt* libtool-ltdl-devel*

Step 2: install CMake, either with yum:

yum install -y cmake

or by compiling it from source:

wget http://www.cmake.org/files/v2.8/cmake-2.8.5.tar.gz
tar -zxvf cmake-2.8.5.tar.gz
cd cmake-2.8.5
./bootstrap
make && make install

Step 3: install MySQL:

cd /usr/local/src
wget http://mirrors.sohu.com/mysql/MySQL-5.5/mysql-5.5.46.tar.gz
mkdir -p /usr/local/mysql
mkdir -p /data/mysql/
groupadd mysql
/usr/sbin/useradd -g mysql mysql
chown -R mysql:mysql /data/mysql/
tar -zxvf mysql-5.5.46.tar.gz
cd mysql-5.5.46

Configure the build with cmake:

cmake . \
  -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
  -DMYSQL_UNIX_ADDR=/data/mysql/mysql.sock \
  -DDEFAULT_CHARSET=utf8 \
  -DDEFAULT_COLLATION=utf8_general_ci \
  -DWITH_EXTRA_CHARSETS:STRING=utf8,gbk \
  -DWITH_MYISAM_STORAGE_ENGINE=1 \
  -DWITH_INNOBASE_STORAGE_ENGINE=1 \
  -DWITH_MEMORY_STORAGE_ENGINE=1 \
  -DWITH_READLINE=1 \
  -DENABLED_LOCAL_INFILE=1 \
  -DMYSQL_DATADIR=/data/mysql \
  -DMYSQL_USER=mysql \
  -DMYSQL_TCP_PORT=3306

make

make install

cp support-files/my-medium.cnf /etc/my.cnf
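As an aside, my-medium.cnf is only one of several sample configuration files that ship with the source tree, each sized for a different amount of server RAM. The file names below are my recollection of the MySQL 5.5 support-files directory, so treat them as an assumption and check your own source tree:

```shell
# Sample configs shipped in the MySQL 5.5 source tree (names assumed);
# pick the one matching your server size and copy it to /etc/my.cnf.
for f in my-small.cnf my-medium.cnf my-large.cnf my-huge.cnf my-innodb-heavy-4G.cnf; do
  echo "support-files/$f"
done
```

my-innodb-heavy-4G.cnf corresponds to the 4GB annotated reference discussed later in this article.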

chmod 755 scripts/mysql_install_db
./scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/data/mysql/
cp support-files/mysql.server /etc/init.d/mysqld
chmod 755 /etc/init.d/mysqld

Enable the service at boot and start it:

chkconfig --add mysqld
chkconfig mysqld on
service mysqld start

Add MySQL to the PATH:

vim /etc/profile     # append: PATH=$PATH:/usr/local/mysql/bin
source /etc/profile

Installation successful. The rest of this article covers some of the key configuration parameters.
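The PATH change made in /etc/profile above can be sketched and checked non-interactively. This is a minimal sketch assuming the /usr/local/mysql prefix used throughout this guide:

```shell
# Append MySQL's bin directory to PATH, as done in /etc/profile above
MYSQL_HOME=/usr/local/mysql
PATH="$PATH:$MYSQL_HOME/bin"

# Verify the directory is now on the search path
case ":$PATH:" in
  *":$MYSQL_HOME/bin:"*) echo "mysql bin on PATH" ;;
  *)                     echo "mysql bin missing" ;;
esac
```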

6. MySQL Configuration Parameters: Detailed Description

MySQL maximum concurrent connections | Changing max_connections on Linux

Step 1: check the current maximum number of connections:

[root@localhost ~]# /usr/local/mysql/bin/mysqladmin -uroot -ppassword variables | grep max_connections

(Replace root with your database user, although root is the usual default, and replace password with the database password.) The command prints the current limit, which defaults to 100:

| max_connections | 100 |

Step 2: raise the limit by editing the configuration file:

[root@localhost ~]# nano /etc/my.cnf

Add the following line to my.cnf:

max_connections=200

Use the arrow keys to move the cursor, press Ctrl+O to save, then press Enter to confirm the filename (an easy step to miss), and finally Ctrl+X to exit back to the command line.

Step 3: restart MySQL:

[root@localhost ~]# service mysqld restart

MySQL my.cnf annotated reference (translated)

# BEGIN CONFIG INFO
# DESCR: 4GB RAM, InnoDB only, ACID, few connections, heavy query load
# TYPE: SYSTEM
# END CONFIG INFO

# This is an example MySQL configuration file for a server with 4GB of
# memory, mostly running InnoDB, handling complex queries and a small
# number of connections.
#
# Copy this file to /etc/my.cnf for global settings, to
# mysql-data-dir/my.cnf for server-specific settings, or to ~/.my.cnf
# for user settings.
#
# In this file you can use all long options that the programs support.
# To see the full list of options a program supports, run it with the
# "--help" parameter. More details about the individual options can be
# found in the manual.

# The following options are read by the MySQL client applications.
# Note that only client applications shipped with MySQL are guaranteed
# to read this section. If you want your own MySQL application to pick
# up these values, you have to specify them when initializing the MySQL
# client library.

[client]
#password = [your_password]
port      = @MYSQL_TCP_PORT@
socket    = @MYSQL_UNIX_ADDR@

# *** Application-specific options follow here ***

# The MySQL server
[mysqld]

# Generic configuration options
port   = @MYSQL_TCP_PORT@
socket = @MYSQL_UNIX_ADDR@

# back_log is the number of connections the operating system can keep in
# the listen queue before the MySQL connection manager thread processes
# them. If you have a very high connection rate and experience
# "connection refused" errors, you might need to increase this value.
# Check your OS documentation for the maximum value of this variable.
# Setting back_log higher than your operating system limit will have no
# effect.
back_log = 50

# Don't listen on a TCP/IP port at all. If all processes that need to
# connect to mysqld run on the same host, this can be a security
# enhancement: all mysqld connections then go through Unix sockets or
# named pipes. Note that on Windows, using this option without enabling
# named pipes (via the "enable-named-pipe" option) will render the MySQL
# service useless!
#skip-networking

# The maximum number of simultaneous sessions the MySQL server allows.
# One of these connections is reserved for an administrator with SUPER
# privileges, even when the maximum number of connections has been
# reached.
max_connections = 100

# Maximum number of errors allowed per client connection. When this
# limit is reached, the client is blocked by the MySQL server until
# "FLUSH HOSTS" is executed or the server is restarted. Invalid
# passwords and other errors during the connect phase increase this
# counter.
# See the "aborted_connects" status variable for the global counter.
max_connect_errors = 10

# The number of tables kept open by all threads. Increasing this value
# increases the number of file descriptors mysqld requires, so make sure
# the "open-files-limit" variable in [mysqld_safe] allows at least 4096
# open files.
table_cache = 2048

# Enable external file-level locking. File locking has a negative impact
# on performance, so use this option only if you run multiple database
# instances on the same files (note that there are still other
# constraints!) or if you use other software that relies on file-level
# locks on MyISAM tables.
#external-locking

# The maximum size of a query packet the server can handle, and also the
# maximum query size the server can process (important when working with
# large BLOB fields). The value is set per connection and grows
# dynamically.
max_allowed_packet = 16M

# The size of the cache that holds the SQL statements of a transaction
# for the binary log. If you often use large multi-statement
# transactions, increase this value for better performance. All
# statements of a transaction are buffered in the binlog cache and
# written to the binary log at once on COMMIT. If a transaction is
# larger than this value, a temporary file on disk is used instead.
# The buffer is allocated per connection on the first update statement
# in a transaction.
binlog_cache_size = 1M

# Maximum size allowed for a single in-memory table. This option
# prevents an accidentally created huge memory table from exhausting all
# memory resources.
max_heap_table_size = 64M

# The sort buffer is used to handle sorts for ORDER BY and GROUP BY
# queries. If the data to be sorted does not fit into the sort buffer, a
# slower disk-based merge sort is used instead; see the
# "sort_merge_passes" status variable.
# The buffer is allocated per thread when a sort occurs.
sort_buffer_size = 8M

# This buffer is used to optimize full joins (joins without indexes).
# Such joins have very poor performance in most cases, but setting this
# value large can mitigate the impact. See the "select_full_join" status
# variable for the number of full joins. The buffer is allocated per
# thread when a full join occurs.
join_buffer_size = 8M

# How many threads to keep in a cache for reuse. When a client
# disconnects, its thread is put into the cache if there are fewer than
# thread_cache_size threads there already. This greatly reduces the
# overhead of thread creation when you have a lot of new connections.
# (Normally this does not give a notable performance improvement if you
# have a good thread implementation.)
thread_cache_size = 8

# This permits the application to give the thread system a hint about
# the desired number of threads that should run at the same time. This
# value only makes sense on systems that support the
# thr_setconcurrency() function call (such as Sun Solaris). Try
# [number of CPUs]*(2..4) as a value for thread_concurrency.
thread_concurrency = 8

# The query cache is used to cache SELECT results so that the cached
# result is returned directly the next time the same query is executed,
# without running it again. Enabling the query cache can give huge speed
# improvements if you have a lot of identical queries and rarely change
# your tables. Check the "qcache_lowmem_prunes" status variable to see
# whether the current value is high enough for your load.
# Note: if your tables change frequently, or if your queries are
# textually different every time, the query cache may cause a slowdown
# instead of a performance improvement.
query_cache_size = 64M

# Only results smaller than this setting are cached. This protects the
# query cache against a single large result set overwriting all other
# cached results.
query_cache_limit = 2M

# Minimum word length to be indexed by the full-text search index. You
# might wish to decrease it if you need to search for shorter words.
# Note that you need to rebuild your FULLTEXT indexes after modifying
# this value.
ft_min_word_len = 4

# If your system supports the mlock() function, you might want to enable
# this option to keep MySQL locked in memory and prevent it from being
# swapped out under high memory pressure. This option is good for
# performance.
#memlock

# Table type used by default when creating new tables, if not specified
# otherwise in the CREATE TABLE statement.
default_table_type = MYISAM

# Thread stack size. This amount of memory is reserved per connection.
# MySQL itself usually needs no more than 64K of it; if you use your own
# UDF functions that need a lot of stack, or your operating system
# requires more stack for some operations, you might need to set this
# higher.
thread_stack = 192K

# Set the default transaction isolation level. Available levels are:
# READ-UNCOMMITTED, READ-COMMITTED, REPEATABLE-READ, SERIALIZABLE
transaction_isolation = REPEATABLE-READ

# Maximum size for internal (in-memory) temporary tables. If a table
# grows larger than this value, it is automatically converted to a
# disk-based table. This limit applies to a single table, not to the
# sum of all tables.
tmp_table_size = 64M

# Enable binary logging.
# Binary logging is required for acting as a master in a replication
# configuration, and it is also needed if you want to do point-in-time
# recovery from your latest backup.
log-bin=mysql-bin

# If you are using chained replication servers (A->B->C), you need to
# enable this option on server B. It makes the slave thread log the
# updates it replays into its own binary log, so they can be passed on
# to the next slave.
#log_slave_updates

# Enable the full query log. Every query received by the server (even
# one with incorrect syntax) is logged. This is useful for debugging
# and is usually disabled in production.
#log

# Print warnings to the error log file. If you have any problem with
# MySQL, you should enable logging of warnings and examine the error log
# carefully for possible causes.
#log_warnings

# Log slow queries. A slow query is any query that takes longer than the
# time defined by "long_query_time". If log_long_format is enabled,
# queries that do not use indexes are logged as well. It is normally a
# good idea to have this turned on if you frequently add new queries to
# an existing system.
log_slow_queries

# Any query taking longer than this many seconds is considered slow. Do
# not use "1" here: that would log even very fast queries, because MySQL
# currently measures time with only one-second precision.
long_query_time = 2

# Log more information in the slow query log. Normally it is good to
# have this turned on. It causes queries that do not use indexes to be
# logged as slow queries in addition to the long-running ones.
log_long_format

# The directory MySQL uses for temporary files, for example for large
# disk-based sorts, internal sorts, and simple temporary tables.
# If you do not create very large temporary files, it might be better to
# put this directory on a swapfs/tmpfs filesystem. Alternatively, you
# can place it on a dedicated disk. You can specify multiple paths
# separated by ";"; they will be used in round-robin fashion.
#tmpdir = /tmp

# *** Replication-related settings

# Unique server identification number, between 1 and 2^32-1. This value
# has to be set on both master and slave servers. It defaults to 1 if
# "master-host" is not set, but MySQL will not work as a master if this
# option is omitted.
server-id = 1

# Replication slave settings (uncomment the master section below to make
# it work).
#
# To configure this host as a replication slave, you can choose between
# two methods:
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual).
#    The syntax is:
#
#    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
#    MASTER_USER=<user>, MASTER_PASSWORD=<password>;
#
#    where you replace <host>, <user>, <password> by quoted strings and
#    <port> by the master's port number (3306 by default).
#
#    Example:
#
#    CHANGE MASTER TO MASTER_HOST='192.168.1.29', MASTER_PORT=3306,
#    MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# 2) Set the variables below. However, if you choose this method, then
#    the first time you start replication (even unsuccessfully, for
#    example if you entered the wrong password in master-password and
#    the slave could not connect), the slave will create a master.info
#    file, and any later changes to the parameters in this file will be
#    ignored and overridden by the contents of master.info, unless you
#    shut down the slave server, delete master.info and restart the
#    slave server.
#    For that reason, you might want to leave these settings untouched
#    (commented out) and instead use CHANGE MASTER TO (see above).
#
# Required unique id, between 2 and 2^32-1 (and different from the
# master's). Defaults to 2 if master-host is set, but will not function
# as a slave if omitted.
#server-id = 2
#
# The replication master for this slave - required
#master-host = <hostname>
#
# The username the slave will use for authentication when connecting to
# the master - required
#master-user = <username>
#
# The password the slave will authenticate with when connecting to the
# master - required
#master-password = <password>
#
# The port the master is listening on - optional, defaults to 3306
#master-port = <port>

# Make the slave read-only. Only users with the SUPER privilege and the
# slave replication thread can modify data on it. You can use this to
# ensure that no application accidentally modifies data on the slave
# instead of on the master.
#read_only

# *** MyISAM-related options

# Size of the key buffer, used to cache index blocks of MyISAM tables.
# Do not set it larger than 30% of your available memory, as some memory
# is also required by the OS to cache row data. Even if you are not
# using MyISAM tables, you should still set it to 8-64M, as it is also
# used for internal temporary disk tables.
key_buffer_size = 32M

# Size of the buffer used for full table scans of MyISAM tables.
# Allocated per thread when a full scan is needed.
read_buffer_size = 2M

# When rows are read from a MyISAM table in sorted order after a sort,
# the rows are read through this buffer to avoid disk seeks. Increasing
# this value can improve the performance of many ORDER BY queries.
# Allocated per thread when needed.
read_rnd_buffer_size = 16M

# MyISAM uses a special tree-like cache to make bulk inserts (that is,
# INSERT ... SELECT, INSERT ... VALUES (...), (...), ..., and
# LOAD DATA INFILE) faster.
# This variable limits the size of the cache tree, in bytes, per thread.
# Setting it to 0 disables this optimization. For optimal performance,
# do not set it larger than "key_buffer_size". The buffer is allocated
# when a bulk insert is detected.
bulk_insert_buffer_size = 64M

# This buffer is allocated when MySQL needs to rebuild an index during
# REPAIR, OPTIMIZE or ALTER TABLE statements, as well as during
# LOAD DATA INFILE into an empty table. It is allocated per thread, so
# be careful with large settings.
myisam_sort_buffer_size = 128M

# The maximum size of the temporary file MySQL is allowed to use while
# re-creating an index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
# If the file would be larger than this, the index will be created
# through the key cache instead (which is slower).
myisam_max_sort_file_size = 10G

# If the temporary file used for fast index creation would exceed the
# key cache method by the amount specified here, prefer the key cache
# method. This is mainly used to force long character keys in large
# tables to use the slower key cache method to create the index.
myisam_max_extra_sort_file_size = 10G

# If a table has more than one index, MyISAM can use more than one
# thread to repair them by sorting in parallel. This makes sense if you
# have multiple CPUs and plenty of memory.
myisam_repair_threads = 1

# Automatically check and repair MyISAM tables that were not closed
# properly.
myisam_recover

# The FEDERATED engine is disabled by default.
skip-federated

# *** BDB-related options
# If you run a MySQL server with BDB support but do not intend to use
# it, use this option to save memory and speed some things up.
skip-bdb

# *** InnoDB-related options
# If your MySQL server includes InnoDB support but you do not plan to
# use it, use this option to save memory and disk space and speed some
# things up.
#skip-innodb

# The additional memory pool is used by InnoDB to store metadata
# information. If InnoDB needs more memory for this purpose, it will
# start to allocate it from the OS.
# Since this allocation is fast enough on most modern operating systems,
# you normally do not need to change this value. SHOW INNODB STATUS
# displays the amount currently in use.
innodb_additional_mem_pool_size = 16M

# Unlike MyISAM, InnoDB uses a buffer pool to cache both indexes and row
# data. The bigger you set it, the less disk I/O is needed to access the
# data in tables. On a dedicated database server you may set it to up to
# 80% of physical memory. Do not set it too large, though: competition
# for physical memory might cause paging in the operating system. Note
# that on 32-bit systems you might be limited to 2-3.5G of user-level
# memory per process, so do not set it too high.
innodb_buffer_pool_size = 2G

# InnoDB stores data in one or more data files forming the tablespace.
# If you have a single logical drive for your data, a single
# autoextending file is good enough; in other cases, one file per device
# is often a good choice. You can also configure InnoDB to use raw disk
# partitions - please refer to the manual for more info.
innodb_data_file_path = ibdata1:10M:autoextend

# Set this option if you would like the InnoDB tablespace files to be
# stored in another partition. The default is MySQL's datadir.
#innodb_data_home_dir =

# Number of I/O threads used for async I/O operations. This value is
# hardcoded to 4 on Unix, but on Windows disk I/O may benefit from a
# larger number.
innodb_file_io_threads = 4

# If you discover that the InnoDB tablespace is corrupted, setting this
# to a nonzero value will likely help you to export your tables. Start
# at 1 and increase it until you can export the tables successfully.
#innodb_force_recovery = 1

# Number of threads allowed inside the InnoDB kernel. The optimal value
# depends highly on the application, hardware, and operating system
# scheduler properties.
# A too-high value may lead to thread thrashing on mutexes.
innodb_thread_concurrency = 16

# If set to 1, InnoDB will flush (fsync) the transaction log to disk
# after each commit, which offers full ACID behavior. If you are willing
# to compromise this safety, and you are running small transactions, you
# may set this to 0 or 2 to reduce disk I/O for the logs.
# 0 means the log is only written to the log file and flushed to disk
# approximately once per second.
# 2 means the log is written to the log file at each commit, but flushed
# to disk only approximately once per second.
innodb_flush_log_at_trx_commit = 1

# Speed up InnoDB shutdown. This disables the full purge and insert
# buffer merge InnoDB normally does on shutdown. It can shorten shutdown
# time significantly, but InnoDB may have to do these operations at the
# next startup instead.
#innodb_fast_shutdown

# Size of the buffer InnoDB uses for buffering log data. As soon as it
# gets nearly full, InnoDB has to flush it to disk. Since it is flushed
# roughly once per second anyway, it does not make sense to make it very
# large (even for long transactions).
innodb_log_buffer_size = 8M

# Size of each log file in the log group. You should set the combined
# size of the log files to 25%-100% of your buffer pool size to avoid
# unneeded buffer pool flush activity on log file overwrite. However,
# note that larger log files mean a longer recovery process.
innodb_log_file_size = 256M

# Total number of files in the log group. A value of 2-3 is usually good
# enough.
innodb_log_files_in_group = 3

# Location of the InnoDB log files. The default is MySQL's datadir. You
# may wish to point it to a dedicated hard drive or a RAID1 volume for
# improved performance.
#innodb_log_group_home_dir

# Maximum allowed percentage of dirty pages in the InnoDB buffer pool.
# If this limit is reached, InnoDB will start flushing them to keep
# clean pages available. This is a soft limit and is not guaranteed to
# be held absolutely.
innodb_max_dirty_pages_pct = 90

# The flush method InnoDB uses for the log. The tablespace always uses
# doublewrite flush logic. The default is "fdatasync"; the other option
# is "O_DSYNC".
#innodb_flush_method = O_DSYNC

# How long an InnoDB transaction should wait for a lock to be granted
# before being rolled back. InnoDB automatically detects transaction
# deadlocks in its own lock table and rolls back the transaction. If you
# use the LOCK TABLES command, or transaction-safe storage engines other
# than InnoDB in the same transaction, a deadlock may arise that InnoDB
# cannot notice. In such cases this timeout value resolves the
# situation.
innodb_lock_wait_timeout = 120

[mysqldump]
# Do not buffer the whole result set in memory before writing it to
# file. Required for exporting very large tables.
quick

max_allowed_packet = 16M

[mysql]
no-auto-rehash

# Only allow UPDATEs and DELETEs that use key values.
#safe-updates

[isamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M

[myisamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M

[mysqlhotcopy]
interactive-timeout

[mysqld_safe]
# Increase the number of open files allowed per process.
# Warning: make sure you have set the global system limit high enough!
# Opening a large number of tables requires this value to be high.
open-files-limit = 8192
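Two of the sizing rules in the reference above lend themselves to quick arithmetic: innodb_buffer_pool_size at roughly 80% of physical RAM on a dedicated server, and a combined redo log size between 25% and 100% of the buffer pool. A minimal sketch, assuming a 4 GB server for illustration:

```shell
# Apply the sizing rules from the reference above to an assumed 4 GB box
ram_mb=4096

# innodb_buffer_pool_size: ~80% of RAM on a dedicated database server
pool_mb=$(( ram_mb * 80 / 100 ))

# Combined redo log size: 25% of the buffer pool as a lower bound
log_total_mb=$(( pool_mb / 4 ))

echo "innodb_buffer_pool_size = ${pool_mb}M"
echo "combined log size >= ${log_total_mb}M"
```

On a real system you would read the RAM size from the machine (e.g. from /proc/meminfo) rather than hardcode it.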

