PostgreSQL stress test tool: usage and parameter interpretation



pgbench is a database stress-testing tool that ships with PostgreSQL. It supports the built-in TPC-B-like test model as well as custom test models. Custom test models support meta-commands: invoking shell commands, setting random numbers, defining variables, and so on. Three query protocols are supported (-M simple|extended|prepared). Detailed parameter list:
// Parameters for initializing the TPC-B test data.
Initialization options:
-i invoke initialization mode // Initializes the test data.
-F NUM fill factor
-n do not run VACUUM after initialization // Include this parameter when using a custom test model.
-q quiet logging (one message every 5 seconds)
-s NUM scaling factor
--foreign-keys create foreign key constraints between tables // Adds primary and foreign keys between tables; closer to a production environment.
--index-tablespace=TABLESPACE create indexes in the specified tablespace
--tablespace=TABLESPACE create tables in the specified tablespace
--unlogged-tables create tables as unlogged tables

// Stress-test-related parameters
Benchmarking options:
-c NUM number of concurrent database clients (default: 1) // Number of connections pgbench opens to the database.
-C establish new connection for each transaction // Use short-lived connections.
-D VARNAME=VALUE define variable for use by custom script // Set a variable referenced as :varname in the script; repeat -D to set multiple variables.
-f FILENAME read transaction script from FILENAME // Specify a custom test file (meta-commands plus SQL); repeat -f for multiple files. Each file is one transaction, and a file is chosen at random each time a transaction is executed.
-j NUM number of threads (default: 1) // pgbench worker threads; used together with -c and should divide -c evenly.
-l write transaction times to log file // Enable per-transaction statistics; output file name format pgbench_log.<PID> and pgbench_log.<PID>.<threadID> (when -j >= 2, threadID starts from 1).
-M simple|extended|prepared // libpq query protocol; default: simple.
-n do not run VACUUM before tests // Vacuum switch; use -n to skip the vacuum when testing with a custom file.
-N do not update tables "pgbench_tellers" and "pgbench_branches" // A TPC-B variant that skips the updates to these two tables.
-r report average latency per command // Reports the average execution latency of each command (meta-command or SQL) in the test file.
-s NUM report this scale factor in output // When testing with a custom script, sets the scale reported in the output; it has no real effect on the test.
-S perform SELECT-only transactions // A SELECT-only variant of TPC-B.
-t NUM number of transactions each client runs (default: 10) // Number of transactions executed per connection.
-T NUM duration of benchmark test in seconds // Total stress-test duration; cannot be used together with -t.
-v vacuum all four standard tables before tests // Vacuums the four TPC-B tables before the test.
--aggregate-interval=NUM // Output aggregated transaction statistics every NUM seconds; used together with -l.
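To illustrate how -f, -D, and -n fit together, here is a minimal sketch of a custom test script; the file name select.sql and the variable value are assumptions for illustration, not taken from the original run. It uses the older \setrandom syntax, consistent with the statement latencies shown later in this article:

```
-- select.sql: each execution of this file is one SELECT-only transaction
\setrandom aid 1 :naccounts
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```

It would then be run with vacuum disabled (-n, as recommended for custom scripts) and the variable supplied via -D:

pgbench -n -f select.sql -D naccounts=1000000 -c 10 -j 2 -T 10 -U lottu mydb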

// Common options:
  -d, --debug print debugging output
  -h, --host=HOSTNAME database server host or socket directory
  -p, --port=PORT database server port number
  -U, --username=USERNAME connect as specified database user
  -V, --version output version information, then exit
  -?, --help show this help, then exit 


1. Initialization. Here I use my own database, mydb, with database user lottu.


pgbench -i -s 10 --foreign-keys --unlogged-tables -U lottu mydb


The following tables are generated in the database after this runs:


mydb=> \dt pgbench*
             List of relations
 Schema |       Name       | Type  | Owner 
--------+------------------+-------+-------
 public | pgbench_accounts | table | lottu
 public | pgbench_branches | table | lottu
 public | pgbench_history  | table | lottu
 public | pgbench_tellers  | table | lottu
(4 rows)
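Initialization populates these tables deterministically from the scale factor: one branch, 10 tellers, and 100,000 accounts per scale unit, with pgbench_history starting empty. A minimal sketch of that arithmetic:

```python
# Expected initial row counts for a given pgbench scale factor (-s).
# The multipliers are fixed by pgbench's TPC-B-like initialization.
def expected_rows(scale: int) -> dict:
    return {
        "pgbench_branches": scale,             # 1 branch per scale unit
        "pgbench_tellers": 10 * scale,         # 10 tellers per branch
        "pgbench_accounts": 100_000 * scale,   # 100,000 accounts per branch
        "pgbench_history": 0,                  # filled during the run, empty at init
    }

print(expected_rows(10)["pgbench_accounts"])  # 1000000 for -s 10
```

So the -s 10 initialization above creates a one-million-row pgbench_accounts table.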


2. Stress test demo


[postgres@sdserver40_210 ~]$ pgbench -M prepared -r -c 10 -j 2 -T 10 -U lottu mydb
Starting vacuum...end. -- vacuum runs by default since -n was not given
Transaction type: TPC-B (sort of)
Scaling factor: 10 -- matches the initialization -s 10
Query mode: prepared -- set by -M prepared; the default is simple
Number of clients: 10 -- 10 client connections, -c 10
Number of threads: 2 -- 2 threads, -j 2
Duration: 10 s -- 10 seconds, -T 10
Number of transactions actually processed: 146890
Latency average: 0.681 ms
tps = 14687.531247 (including connections establishing)
tps = 14690.762892 (excluding connections establishing)
Statement latencies in milliseconds:
        0.001784 \set nbranches 1 * :scale
        0.000633 \set ntellers 10 * :scale
        0.000483 \set naccounts 100000 * :scale
        0.001076 \setrandom aid 1 :naccounts
        0.000647 \setrandom bid 1 :nbranches
        0.000654 \setrandom tid 1 :ntellers
        0.000762 \setrandom delta -5000 5000
        0.034427 BEGIN;
        0.094350 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
        0.064211 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
        0.110995 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
        0.154680 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
        0.155552 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
        0.052141 END;
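The headline numbers above are consistent with each other: tps is roughly transactions / duration, and average latency is roughly clients x duration / transactions. A quick check against the run above (values copied from the output; the small gap versus the reported tps comes from connection establishment time):

```python
# Sanity-check the relationship between pgbench's summary numbers.
clients, duration_s, xacts = 10, 10.0, 146890   # -c 10, -T 10, transactions processed

tps = xacts / duration_s                          # throughput in transactions/second
latency_ms = clients * duration_s / xacts * 1000  # average latency per transaction

print(round(tps, 1), round(latency_ms, 3))  # prints: 14689.0 0.681
```

The computed 0.681 ms matches the "Latency average" line exactly.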

-- Demo of the -l parameter combined with --aggregate-interval
[postgres@sdserver40_210 ~]$ pgbench -M extended --aggregate-interval 2 -l -c 2 -j 2 -T 10 -U lottu mydb
Starting vacuum...end.
Transaction type: TPC-B (sort of)
Scaling factor: 10
Query mode: extended
Number of clients: 2
Number of threads: 2
Duration: 10 s
Number of transactions actually processed: 22613
Latency average: 0.884 ms
tps = 2261.073893 (including connections establishing)
tps = 2261.413071 (excluding connections establishing)

-- this generates one log file per worker thread
[postgres@sdserver40_210 ~]$ ll pgbench_log.5160*
-rw-rw-r-- 1 postgres postgres 211 Jun 28 16:37 pgbench_log.5160
-rw-rw-r-- 1 postgres postgres 210 Jun 28 16:37 pgbench_log.5160.1

Check the contents of the files to evaluate the test results:
[postgres@sdserver40_210 ~]$ cat pgbench_log.5160
1467103061 1864 1663000 1739000000 0 9000
1467103063 2256 1998000 2000000000 0 2000
1467103065 2268 1997000 2001000000 0 2000
1467103067 2271 1996000 1998000000 0 2000
1467103069 2250 1996000 2268000000 0 17000
These 6 columns represent:
interval_start // epoch time; the start time of this statistics interval.
num_of_transactions // Number of transactions run in this interval, i.e., the number of complete script-file executions.
latency_sum // Sum of the transaction latencies in this interval, in microseconds.
latency_2_sum // Sum of the squared transaction latencies in this interval, in microseconds squared.
min_latency // Minimum single-transaction latency in this interval.
max_latency // Maximum single-transaction latency in this interval.
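From latency_sum and latency_2_sum you can recover the per-interval mean and standard deviation (variance = sum2/n - mean^2). A sketch parsing two of the lines above, with all values in microseconds:

```python
import math

# Two aggregate-log lines copied from the output above:
# interval_start  n  latency_sum  latency_2_sum  min  max
log_lines = """\
1467103061 1864 1663000 1739000000 0 9000
1467103063 2256 1998000 2000000000 0 2000
""".splitlines()

for line in log_lines:
    start, n, lat_sum, lat_sum2, lat_min, lat_max = map(int, line.split())
    mean = lat_sum / n                              # mean latency (us)
    stddev = math.sqrt(lat_sum2 / n - mean ** 2)    # population stddev (us)
    print(start, round(mean, 1), round(stddev, 1))
```

For the second interval this gives a mean of about 886 us, in line with the 0.884 ms "Latency average" reported for the whole run.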


-- Based on a reference video commentary.




