Configure Jumbo Frames for RAC Optimization
First, let's review the concept of MTU (Maximum Transmission Unit): the maximum frame size that can be transmitted on a network link. The default for Ethernet is 1500 bytes (an Ethernet frame carries a variable payload of 46-1500 bytes).
[root@node1 ~]# ifconfig bond0
bond0     Link encap:Ethernet  HWaddr C8:1F:66:FB:6F:CD
          inet addr:10.10.10.105  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::ca1f:66ff:fefb:6fcd/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:353 errors:29 dropped:0 overruns:0 frame:29
          TX packets:254 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:250669 (244.7 KiB)  TX bytes:160443 (156.6 KiB)
Therefore, with the default configuration, any transmission larger than 1500 bytes is split into several frames. We can confirm the limit with traceroute -F (which sets the don't-fragment bit):
[root@node1 ~]# traceroute -F node2-priv 1500
traceroute to node2-priv (10.10.10.106), 30 hops max, 1500 byte packets
 1  node2-priv.localdomain (10.10.10.106)  0.234 ms  0.217 ms  0.204 ms
[root@node1 ~]# traceroute -F node2-priv 1501
traceroute to node2-priv (10.10.10.106), 30 hops max, 1501 byte packets
 1  node1-priv.localdomain (10.10.10.105)  0.024 ms !F-1500  0.005 ms !F-1500  0.004 ms !F-1500
[root@node1 ~]#
In a RAC environment, this deserves special attention.
The RAC private network is mainly used for the network heartbeat between nodes, but it also frequently carries data blocks between instances (Cache Fusion traffic). In Oracle, the default database block size is 8192 bytes, so with the default MTU a single block must be split across several frames, which increases the load on the private network. For this reason, it is recommended to set the MTU of the private NIC to 9000 (jumbo frames) in a RAC environment.
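The fragmentation arithmetic behind this recommendation can be sketched with shell arithmetic. This assumes the interconnect uses UDP over IPv4 (the default on Linux); the 20-byte IP and 8-byte UDP header sizes are standard values:

```shell
# How many IP fragments does one 8192-byte block need as a UDP datagram?
# Per-fragment payload = MTU - 20-byte IP header; fragment payloads align to
# 8 bytes, and both 1480 and 8980 are already multiples of 8.
block=8192
udp_datagram=$((block + 8))                         # add the 8-byte UDP header
frags_1500=$(( (udp_datagram + 1480 - 1) / 1480 ))  # MTU 1500 -> 1480 bytes/fragment
frags_9000=$(( (udp_datagram + 8980 - 1) / 8980 ))  # MTU 9000 -> 8980 bytes/fragment
echo "MTU 1500: ${frags_1500} fragments; MTU 9000: ${frags_9000} fragment"
```

So at MTU 1500 each block costs six frames on the wire, while at MTU 9000 it fits in one.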
Let's take a look at my modification steps:
1) View the private NIC (run on both nodes)
[root@node1 ~]# oifcfg getif
em1  192.168.10.0  global  public
bond0  10.10.10.0  global  cluster_interconnect
2) Set the private NIC MTU (run on both nodes)
[root@node1 ~]# ifconfig bond0 mtu 9000
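Note that ifconfig takes effect immediately but does not survive a reboot. On RHEL/OL-style systems (an assumption about the distribution used here), the setting is usually persisted in the interface configuration file:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (path assumes a RHEL-style distro)
# ... keep the existing DEVICE/BONDING_OPTS/IPADDR lines as they are ...
MTU=9000
```

Also remember that every device on the interconnect path, in particular the switch ports, must support jumbo frames, otherwise the verification below will fail.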
Once the MTU is changed, we can verify it with traceroute or ping:
1) traceroute
[root@node1 ~]# traceroute -F node2-priv 9000
traceroute to node2-priv (10.10.10.106), 30 hops max, 9000 byte packets
 1  node2-priv.localdomain (10.10.10.106)  0.346 ms  0.364 ms  0.413 ms
[root@node1 ~]# traceroute -F node2-priv 9001
traceroute to node2-priv (10.10.10.106), 30 hops max, 9001 byte packets
 1  node1-priv.localdomain (10.10.10.105)  0.043 ms !F-9000  0.010 ms !F-9000  0.010 ms !F-9000
[root@node1 ~]#
2) ping
[root@node1 ~]# ping -c 2 -M do -s 8972 node2-priv
PING node2-priv.localdomain (10.10.10.106) 8972(9000) bytes of data.
8980 bytes from node2-priv.localdomain (10.10.10.106): icmp_seq=1 ttl=64 time=0.552 ms
8980 bytes from node2-priv.localdomain (10.10.10.106): icmp_seq=2 ttl=64 time=0.551 ms

--- node2-priv.localdomain ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.551/0.551/0.552/0.023 ms
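The payload size 8972 is not arbitrary: with -M do (don't fragment), the ICMP payload plus the 20-byte IP header and the 8-byte ICMP echo header must fit exactly into the 9000-byte MTU:

```shell
# Largest -s value that still passes ping -M do at a given MTU.
mtu=9000
payload=$((mtu - 20 - 8))   # 20-byte IP header + 8-byte ICMP echo header
echo "$payload"
```

Anything larger, such as -s 8973 below, exceeds the MTU and is rejected locally.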
[root@node1 ~]# ping -c 2 -M do -s 8973 node2-priv
PING node2-priv.localdomain (10.10.10.106) 8973(9001) bytes of data.
From node1-priv.localdomain (10.10.10.105) icmp_seq=1 Frag needed and DF set (mtu = 9000)
From node1-priv.localdomain (10.10.10.105) icmp_seq=1 Frag needed and DF set (mtu = 9000)

--- node2-priv.localdomain ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss
[root@node1 ~]#