PostgreSQL 9.4.4 Installation




I recently needed to load data for a study. For an open-source database most people would choose MySQL, but working around some of MySQL's features got tiresome, so I switched to PostgreSQL, which is more similar to Oracle. This database has become extremely popular recently and its adoption is rising fast.



What follows is a walkthrough of compiling and installing from source. Most of the material below comes from a shared document (http://blog.163.com/[email protected]/); thanks to the author for sharing.






PostgreSQL Download URL:



http://www.postgresql.org/ftp/source/






Here we use the source tarball of the latest stable release, postgresql-9.4.4:



https://ftp.postgresql.org/pub/source/v9.4.4/postgresql-9.4.4.tar.bz2



Extract the tarball after downloading:


tar jxvf postgresql-9.4.4.tar.bz2


Because PostgreSQL cannot be started as the root user, you need to create a dedicated user for it.



Create user





useradd pg944





Enter the extracted directory:


cd postgresql-9.4.4


Configure the build, setting the installation directory here:


./configure --prefix=/home/pg944/pgsql


When that completes, build and install:


gmake world
gmake install-world


Next comes the database-related initialization and configuration.



First, adjust the relevant Linux kernel parameters:





vi /etc/sysctl.conf





At the bottom, add


kernel.shmmni = 4096
kernel.sem = 50100 64128000 50100 1280
fs.file-max = 7672460
net.ipv4.ip_local_port_range = 9000 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
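The new parameters take effect after reloading with sysctl -p (as root). As a minimal sketch, you can first stage the additions in a scratch file and check that each line parses as a key = value pair before touching the real file:

```shell
# Stage a few of the additions in a scratch file (values from this guide):
cat > /tmp/sysctl_pg.conf <<'EOF'
kernel.shmmni = 4096
fs.file-max = 7672460
net.ipv4.ip_local_port_range = 9000 65000
EOF
# Every line should look like "key = value":
grep -cE '^[a-z0-9._-]+ = ' /tmp/sysctl_pg.conf
# Then, as root, reload the real file with: sysctl -p
```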





Then continue with:





vi /etc/security/limits.conf





At the bottom, add


* soft nofile 131072
* hard nofile 131072
* soft nproc 131072
* hard nproc 131072
* soft core unlimited
* hard core unlimited
* soft memlock 50000000
* hard memlock 50000000
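These limits apply to new login sessions. After logging out and back in, a quick way to confirm them (a sketch; the values shown are whatever your session actually received):

```shell
# Per-process limits for the current shell session; after re-login these
# should match the limits.conf entries above:
ulimit -n    # open files (nofile)
ulimit -u    # max user processes (nproc)
ulimit -c    # core file size (core)
```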





Set up a firewall





vi /etc/sysconfig/iptables





Add the port. Put the rule next to the existing port 22 rule, not at the bottom of the file, or it may land after the final REJECT rule and never match.





-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 1999 -j ACCEPT





Restart the firewall:





service iptables restart





Now switch to the new user to initialize the PostgreSQL configuration:





su - pg944





Add the environment variables:


vi ~/.bash_profile
and add:
export PGPORT=1999 
export PGDATA=/home/pg944/pg_root
export LANG=en_US.utf8
export PGHOME=/home/pg944/pgsql
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export DATE=`date +"%Y%m%d%H%M"`
export PATH=$PGHOME/bin:$PATH:.
export MANPATH=$PGHOME/share/man:$MANPATH
export PGUSER=postgres
export PGHOST=$PGDATA
alias rm='rm  -i'
alias ll='ls -lh'
export PGDATABASE=postgres


Save the file, then apply the settings:





source ~/.bash_profile
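If the profile was picked up correctly, the variables should now resolve in the shell. A self-contained sketch of the check, staging the same exports in a scratch file (in practice you simply source ~/.bash_profile):

```shell
# Stage the key exports in a scratch profile so the check stands alone:
cat > /tmp/pg_profile <<'EOF'
export PGPORT=1999
export PGDATA=/home/pg944/pg_root
export PGHOME=/home/pg944/pgsql
export PATH=$PGHOME/bin:$PATH
EOF
. /tmp/pg_profile
# Confirm the variables resolved:
echo "$PGPORT $PGDATA $PGHOME"
```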





Initializing the database





initdb -D $PGDATA -E UTF8 --locale=C -U postgres -W





You will be prompted to enter and confirm a password here; if the two do not match, initdb reports an error and you must re-run the initialization.






A quick test:





$ psql -V
psql (PostgreSQL) 9.4.4





At this point $PGDATA contains the database files. A few of the important ones are worth mentioning:


ll $PGDATA
base - corresponds to the pg_default tablespace
global - corresponds to the pg_global tablespace; stores the cluster's shared objects, such as pg_database (list them with: select relkind, relname from pg_class where reltablespace = (select oid from pg_tablespace where spcname = 'pg_global') order by 1), and also the control file
pg_clog - transaction commit status data
pg_hba.conf - database access control file
pg_log - database log directory (may not exist, depending on configuration)
pg_multixact - transaction state data for shared row locks
PG_VERSION - database version
pg_xlog - WAL files
postgresql.conf - main configuration file
postmaster.opts - the command-line options used at the last database start
postmaster.pid - information about the running server (the $PGDATA directory, start time, listening port, IPC information, etc.)


Next:





cd $PGDATA





Before starting the database, modify pg_hba.conf and postgresql.conf under $PGDATA.



pg_hba.conf controls which client sources may access the database.



For convenience I open it to all addresses here, which is very insecure:





host    all             all             0.0.0.0/0               md5
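A more restrictive sketch, assuming clients live on the 192.168.137.0/24 subnet (adjust the CIDR to your own network), would be:

```
# TYPE  DATABASE  USER  ADDRESS             METHOD
host    all       all   127.0.0.1/32        md5
host    all       all   192.168.137.0/24    md5   # assumed client subnet
```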





postgresql.conf is the database's main configuration file; it deserves the same care as the Linux kernel parameters adjusted earlier.



There is quite a lot to change below.





vi postgresql.conf





*****************************************************



Listen on all IPv4 addresses.



listen_addresses = '0.0.0.0'






Maximum number of allowed connections (100 is enough for testing; raising the connection count also requires adjusting the shared buffer).



max_connections = 100






Reserve 10 connections for superusers.



superuser_reserved_connections = 10






The UNIX socket file goes in /tmp by default; moving it into $PGDATA makes local access more secure. (In 9.4 the parameter is named unix_socket_directories.)



unix_socket_directories = '.'






The default socket permission is 0777; changing it to 0700 is more secure.



unix_socket_permissions = 0700






Under Linux the TCP keepalive interval defaults to 2 hours. If you have already tuned the corresponding kernel parameters system-wide, these settings are unnecessary here.



To prevent network devices between client and server from closing idle TCP sessions, set the following:



tcp_keepalives_idle = 60



tcp_keepalives_interval = 10



tcp_keepalives_count = 6






A large shared_buffers requires a large checkpoint_segments and more System V shared memory, and increases the overhead of shared-memory management.



This value does not need to be very large: PostgreSQL also relies on the operating system's file system cache for read performance, and on write-heavy databases an oversized buffer increases checkpoint pressure.



Version 9.4 adds support for mmap and huge pages to reduce memory-management overhead.



shared_buffers = 512MB






The higher this value, the faster VACUUM and CREATE INDEX run; beyond a certain point the bottleneck is no longer memory but, for index builds for example, the CPU.



This value is an upper limit on the memory one such operation may use, not a one-time allocation. Also be aware that if autovacuum is enabled, up to autovacuum_max_workers * maintenance_work_mem may be consumed system-wide.



maintenance_work_mem = 512MB






Generally set this slightly below the operating system limit; check with ulimit -a, e.g. stack size (kbytes, -s) 10240.



max_stack_depth = 8MB






A manual VACUUM by default runs to completion without pausing. To prevent it from consuming too many of the server's hardware resources, this value tells VACUUM how long to pause after spending a given amount of cost credits, so that other operations can use more of the hardware.



vacuum_cost_delay = 10ms



vacuum_cost_limit = 10000    # 1-10000 credits






By default the bgwriter process pauses 200ms after each round before waking up for the next one. When the database writes frequently, 200ms may be too long, leaving too much of the flushing to backend processes. A shorter pause is more conducive to flushing dirty blocks in shared buffers to disk, reducing the need for backends to flush them themselves when requesting shared memory.



bgwriter_delay = 10ms






There are also several related parameters that control how many dirty blocks are written in each round before the next pause.



If you want WAL-based (archive) backups, set this to at least archive; if you want a hot standby, it must be hot_standby. Changing this value requires a database restart, so it is better to set it to hot_standby up front. hot_standby does make WAL records more detailed, so if you have no plans for a standby, the lower the level the better for performance.



wal_level = hot_standby






wal_buffers defaults to -1, which sizes it automatically to about 3% of shared_buffers, capped at the size of one xlog segment.



wal_buffers = 16384kB






How many xlog (WAL) files may be generated before a checkpoint is triggered.

The larger the value, the longer frequently accessed dirty data may stay in shared_buffers, which improves database performance to a degree. But too large a value means each checkpoint has more dirty data to process, causing long I/O spikes (the bgwriter also factors in here).

Too small a value produces more WAL files: with full_page_writes = on, the first change to a block after a checkpoint writes out the whole block, so more frequent checkpoints mean more whole-block writes and therefore more WAL.



checkpoint_segments = 32






Same effect as checkpoint_segments, but the trigger condition is elapsed time.



checkpoint_timeout = 5min






Changing the archive parameters also requires a database restart, so enable archiving now.



archive_mode = on






This is the command invoked to archive each WAL file. I use date as a placeholder, so each invocation just prints the time instead of copying the WAL file.



archive_command = '/bin/date'          # 'cp %p /arch/%f'






For hot standby this must be greater than 0, and changing it requires a restart, so set it to 32 now. It is the number of streaming-replication connections that may be established.



max_wal_senders = 32






This is a standby-side parameter, but to make role switching easy I generally set it to on on all databases.



hot_standby = on






This parameter expresses the cost of a random page access as a multiple of seq_page_cost (which defaults to 1); the other cost factors are likewise multiples of seq_page_cost. They drive the cost-based choice of execution plans, and are covered in more detail when tuning.



random_page_cost = 2.0






effective_cache_size is only an estimate, not memory that is actually allocated.

It tells the planner how much memory the operating system can devote to caching; the larger it is, the more likely the planner is to choose index scans (random access).

Generally set it to physical memory minus the database's shared_buffers and the memory needed by the OS and other software.



effective_cache_size = 12000MB






The following configures log output.



log_destination = 'csvlog'



logging_collector = on



log_directory = 'pg_log'    # changing this is recommended, but create the directory in advance and set its permissions



log_truncate_on_rotation = on



log_rotation_age = 1d



log_rotation_size = 10MB






This logs any SQL statement that runs longer than 1 second; typically used to track down slow SQL.



log_min_duration_statement = 1s






Log every checkpoint.



log_checkpoints = on






Log lock waits longer than 1 second; typically used to troubleshoot problems in business logic.



log_lock_waits = on



deadlock_timeout = 1s






Logging connections and disconnections can reveal problems with short-lived connections and also serves as a connection audit log.



log_connections = on



log_disconnections = on






Enable output of source-code location information, showing which internal function emitted each log message.



Within a psql session you can enable it with \set VERBOSITY verbose.



log_error_verbosity = verbose






DDL statements are logged. Note that statements that create users or modify passwords are logged too, so it is advisable to disable this audit within the session before executing sensitive SQL.



log_statement = 'ddl'






The default of 1024 means tracked SQL is truncated at 1024 bytes, so longer statements cannot be displayed in full. Raising it to 2048 consumes a little more memory (negligible) but shows longer SQL.



track_activity_query_size = 2048






autovacuum is enabled by default; log_autovacuum_min_duration = 0 logs every autovacuum operation.



autovacuum = on



log_autovacuum_min_duration = 0






*****************************************************************
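With the configuration saved, the server must be started (as pg944) before you can log in. A sketch using this guide's paths, with the actual commands shown in comments and the expanded start command echoed for clarity (pg_ctl lives in $PGHOME/bin, already on PATH via ~/.bash_profile):

```shell
# Paths assumed from this guide; run as the pg944 user.
export PGDATA=/home/pg944/pg_root
# The commands to run:
#   pg_ctl start -D $PGDATA -l $PGDATA/startup.log
#   pg_ctl status -D $PGDATA
# Echoed here so the expanded form is visible:
echo "pg_ctl start -D $PGDATA -l $PGDATA/startup.log"
```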




Log in to the database with psql:





psql -h 192.168.137.3 -p 1999 -U postgres








Since I focus on development, I like to use an IDE to write SQL. There is a third-party PostgreSQL development tool, similar to PL/SQL Developer:



http://www.pgadmin.org/download/windows.php



Download and install it, then enter the database connection details.





Successful connection







Click on the SQL icon to start writing SQL



For example:



Create a tablespace below; the location corresponds to a directory on the server. The pg_test directory must be created in advance.
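The directory can be prepared like this (a sketch using a scratch root so it runs anywhere; in practice create /home/pg944/pg_root/pg_test as the pg944 user):

```shell
# PostgreSQL requires the tablespace directory to exist, owned by the
# server's OS user, with restrictive permissions:
root=$(mktemp -d)          # stands in for /home/pg944/pg_root
mkdir -p "$root/pg_test"
chmod 700 "$root/pg_test"
ls -ld "$root/pg_test" | cut -c1-10    # drwx------
```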





create tablespace "pgtest" location '/home/pg944/pg_root/pg_test';






OK, the database is set up; now you can happily load your data and get on with development.







Copyright notice: this is the blogger's original article; please do not reproduce it without permission.


