Part 1: Adaptive Compression
Configure > Optimization > Performance
Dynamically detects LZ data compression performance for a connection and momentarily turns it off (sets the compression level to 0) if it is not achieving optimal results.
Improves end-to-end throughput over the LAN by maximizing the WAN throughput.
By default, this setting is disabled.
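This behavior can be pictured as a simple feedback loop. Below is a hypothetical sketch (not RiOS code) that samples the LZ gain per connection and momentarily drops the compression level to 0 when compression is not paying off; the threshold and probe interval are made-up values.

```python
import zlib

# Hypothetical sketch of an adaptive LZ decision, not RiOS code.
# If a connection's recent segments show little or no LZ gain, drop to
# level 0 (no compression) for a while, then re-probe. Framing to tell the
# peer whether a segment is compressed is omitted for brevity.

MIN_GAIN = 0.05      # assumed: require at least 5% size reduction
PROBE_EVERY = 100    # assumed: re-test compression after N uncompressed segments

class AdaptiveLZ:
    def __init__(self):
        self.level = 6          # start with normal LZ compression
        self.since_probe = 0

    def send(self, segment: bytes) -> bytes:
        if self.level == 0:
            # Compression is paused; periodically probe whether to re-enable it.
            self.since_probe += 1
            if self.since_probe >= PROBE_EVERY:
                self.level, self.since_probe = 6, 0
            return segment
        compressed = zlib.compress(segment, self.level)
        gain = 1 - len(compressed) / max(len(segment), 1)
        if gain < MIN_GAIN:
            # Not achieving a useful reduction (e.g. already-compressed data):
            # momentarily turn LZ off by setting the level to 0.
            self.level = 0
            self.since_probe = 0
            return segment
        return compressed
```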
Part 2: Admission Control - Connection Counts
Occurs when the optimized connection count exceeds the model's threshold.
The SteelHead continues to optimize existing connections, but new connections are passed through until the connection count falls below the "enable" threshold.
KBase: "Admission Control connection"
Logs:
Nov 3 09:38:35 sh01 sport[4539]: [Admission_control.NOTICE] - {--} Connection limit achieved.
Nov 3 09:38:35 sh01 sport[4539]: [Admission_control.NOTICE] - {--} Memory usage: 295 Current connections: 136
Nov 3 09:38:35 sh01 sport[4539]: [Admission_control.WARN] - {--} Pausing intercept ...
Nov 3 09:38:45 sh01 statsd[4511]: [STATSD.NOTICE]: Alarm triggered for rising error for event Admission_conn
To automatically generate a sysdump:
(config) # debug alarm admission_conn enable
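The pause/resume behavior is a hysteresis between two thresholds. A rough sketch of that logic is below; the cutoff and enable values are made up, since the real thresholds are model-specific, and this is not RiOS code.

```python
# Hypothetical sketch of the admission-control hysteresis, not RiOS code.
# Above the cutoff threshold, new connections are passed through; interception
# resumes only once the optimized-connection count falls below the enable threshold.

CUTOFF_THRESHOLD = 1500   # assumed model limit for optimized connections
ENABLE_THRESHOLD = 1400   # assumed lower threshold for resuming intercept

class AdmissionControl:
    def __init__(self):
        self.current_connections = 0
        self.intercept_paused = False

    def on_new_connection(self) -> str:
        if self.intercept_paused:
            if self.current_connections < ENABLE_THRESHOLD:
                self.intercept_paused = False          # resume optimizing new connections
            else:
                return "pass-through"                  # existing connections stay optimized
        if self.current_connections >= CUTOFF_THRESHOLD:
            self.intercept_paused = True               # "pausing intercept"
            return "pass-through"
        self.current_connections += 1
        return "optimized"

    def on_connection_closed(self):
        self.current_connections = max(0, self.current_connections - 1)
```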
Part 3: Auto-Discovery / Enhanced Auto-Discovery
RiOS Auto-discovery Process
Step 1: The client sends a SYN, which arrives at the client-side SteelHead.
Step 2: The client-side SteelHead adds TCP option 0x4c to the SYN (making it a SYN+) and sends it on toward the server-side SteelHead.
Step 3: The server-side SteelHead sees option 0x4c (also known as the TCP probe) and responds with a SYN/ACK+ back. At this point the inner TCP session has been established.
Step 4: The server-side SteelHead sends a SYN to the server.
Step 5: The server responds with a SYN/ACK to the server-side SteelHead (outer TCP session).
Step 6: The client-side SteelHead, notified over the inner TCP session, sends a SYN/ACK to the client. At this point, the outer TCP session has been established.
TCP Option:
The TCP option used for Auto-discovery is 0x4c (76).
The client-side SteelHead appliance attaches the probe option to the TCP header;
The server-side SteelHead appliance attaches an option in return.
Note: this is only done during the initial discovery process, not during connection setup between the SteelHead appliances or for the outer TCP sessions.
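For context, a TCP option is encoded as kind/length/value bytes appended to the TCP header. The sketch below builds an option with kind 0x4c; the payload bytes are placeholders, not the actual probe format.

```python
import struct

# Minimal sketch: a TCP option is encoded as kind / length / value.
# Kind 0x4c (76) is the auto-discovery probe; the value below is a
# placeholder, not the real probe contents.

def encode_tcp_option(kind: int, value: bytes) -> bytes:
    length = 2 + len(value)                 # kind byte + length byte + value
    return struct.pack("!BB", kind, length) + value

probe = encode_tcp_option(0x4c, b"\x01" * 8)   # hypothetical 8-byte payload
print(probe.hex())                             # 4c0a0101010101010101
```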
Enhanced Auto-discovery
Automatically finds and optimizes between the most distant SteelHead pair.
Eliminates the need for manual peering rules.
Also called "auto-peering".
By default, automatic peering is enabled.
Supports an unlimited number of SteelHeads in transit between client and server.
Part 4: Connection Pooling
Connection pooling enhances network performance by providing a pool of pre-existing idle connections, instead of having the SteelHead create a new connection for every request. This feature is useful for application protocols, such as HTTP, that use many rapidly created, short-lived TCP connections.
By default, RiOS establishes inner channels plus an out-of-band (OOB) channel between SteelHead appliances when they first communicate. RiOS uses the standard TCP keep-alive mechanism to monitor the state of these channels and the availability of the other SteelHead appliance. These TCP keep-alive packets are at least 52 bytes per packet (IP header, TCP header, and timestamp TCP option) and are generated at a fixed interval for the OOB channel and for each inner channel. Over the period of an hour, this generates roughly 840 packets (including return packets), resulting in at least 43,680 bytes of traffic for the connections to each remote SteelHead. Additionally, the total size of each packet over your WAN media also depends on the particular link layer (such as the Ethernet header), as well as any other TCP options that other WAN devices in your network add to these packets.
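The overhead is simple arithmetic: packets per hour multiplied by bytes per packet. The helper below is a back-of-the-envelope estimate; the interval values passed to it are assumptions, and the sanity check only reuses the figures quoted above (43,680 / 840 = 52 bytes per packet).

```python
# Back-of-the-envelope estimate of hourly keep-alive overhead toward one
# remote SteelHead. The interval values passed in are assumptions; link-layer
# framing and any extra TCP options would add to the result.

def hourly_keepalive_bytes(bytes_per_packet: int, channel_intervals_s: list) -> int:
    # Each channel sends one keep-alive per interval; x2 counts the return packet.
    packets = sum((3600 // interval) * 2 for interval in channel_intervals_s)
    return packets * bytes_per_packet

# Sanity check of the figures quoted above: 840 packets/hour at
# 43,680 / 840 = 52 bytes each.
print(840 * 52)                                   # 43680
print(hourly_keepalive_bytes(52, [30, 60, 60]))   # 24960, with assumed intervals
```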
Part 5: High-Speed TCP & MX-TCP
HSTCP is a feature you can enable on SteelHead appliances to help reduce WAN data transfer inefficiencies caused by limitations of regular TCP. Enabling the HSTCP feature allows for more complete utilization of these "long fat pipes". HSTCP is an IETF-defined standard (RFC 3649 and RFC 3742) and has been shown to provide significant performance improvements in networks with high BDP values.
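The "long fat pipe" problem follows from the bandwidth-delay product (BDP), the amount of data that must be in flight to keep the link full. A quick calculation with illustrative link values shows why a classic 64 KB TCP window falls short:

```python
# Bandwidth-delay product: the number of bytes that must be in flight to keep
# a link full. The 100 Mbps / 100 ms values are illustrative, not from the text.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    return bandwidth_bps / 8 * rtt_seconds

link_bdp = bdp_bytes(100e6, 0.100)        # 100 Mbps link with 100 ms RTT
classic_window = 64 * 1024                # TCP window without window scaling
print(f"BDP: {link_bdp / 1e6:.2f} MB")                                      # 1.25 MB in flight
print(f"Utilization with a 64 KB window: {classic_window / link_bdp:.1%}")  # ~5.2%
```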
Part 6: In-Path Rules
Applied on the client-side SteelHead.
5 types of rules:
- Pass-through rules (define traffic to pass through, not optimize)
- Auto-discovery rules (define traffic to auto-discover and optimize)
- Fixed-target rules (manually define the traffic and SteelHeads to optimize; no auto-discovery)
- Discard (packets are silently dropped)
- Deny (the connection is reset)
Rules are processed top-down until there is a match.
In-path rules are only inspected when SYN packets arrive on LAN ports.
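The top-down, first-match evaluation can be illustrated with a toy rule table; the rule fields and matching logic below are hypothetical and do not reflect RiOS rule syntax.

```python
import ipaddress

# Toy first-match rule table mirroring the top-down evaluation of in-path rules.
# The rule fields and actions here are illustrative, not RiOS rule syntax.

RULES = [
    {"dst_port": 22, "action": "pass-through"},                   # don't optimize SSH
    {"dst_subnet": "10.1.0.0/16", "action": "fixed-target"},      # hypothetical data center subnet
    {"dst_subnet": "0.0.0.0/0", "action": "auto-discovery"},      # default catch-all
]

def matches(rule: dict, dst_ip: str, dst_port: int) -> bool:
    if "dst_port" in rule and rule["dst_port"] != dst_port:
        return False
    if "dst_subnet" in rule and \
            ipaddress.ip_address(dst_ip) not in ipaddress.ip_network(rule["dst_subnet"]):
        return False
    return True

def evaluate(dst_ip: str, dst_port: int) -> str:
    for rule in RULES:                    # processed top-down...
        if matches(rule, dst_ip, dst_port):
            return rule["action"]         # ...until the first match wins
    return "auto-discovery"               # implicit default behavior

print(evaluate("10.1.2.3", 445))          # fixed-target
print(evaluate("192.0.2.10", 22))         # pass-through
```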
Part 7: Peering Rules
Used to configure how a SteelHead responds to auto-discovery probes.
Can be used to pass probes when SteelHeads are connected serially.
Enhanced auto-discovery will automatically discover the end SteelHeads.
Can be used to define which SteelHeads we will accept connections from.
3 types:
- Auto - automatically determine the best response
- Accept - accept peering requests that match the rule
- Pass - pass through peering requests that match the rule.
Part 8: Pre-population (CIFS & MAPI)
CIFS prepopulation
Configure > Optimization > CIFS Prepopulation page.
The prepopulation operation effectively performs the first SteelHead appliance read of the data on the prepopulation share. Subsequently, the SteelHead appliance handles read and write requests as effectively as with a warm data transfer. With warm transfers, only new or modified data is sent, dramatically increasing the rate of data transfer over the WAN.
There are two reasons why this is important:
1. The CIFS pre-pop requests always ingress and egress the SteelHead using the primary port.
2. The data store is not warmed if the traffic does not pass through the SteelHead on its way to the primary port.
MAPI prepopulation
Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation, the TCP sessions are broken. With MAPI prepopulation, the SteelHead appliance can start acting as if it were the mail client. If the client closes the connection, the client-side SteelHead appliance keeps an open connection to the server-side SteelHead appliance, and the server-side SteelHead appliance keeps its connection open to the server. This allows data to be pushed through the data store before the user logs on to the server again. When the default timer expires, the connection is reset.
Part 9: SDR (Default, SDR-M and SDR-Adaptive)
Scalable Data Referencing (SDR)
Bandwidth optimization is delivered through SDR (Scalable Data Referencing). SDR uses a proprietary algorithm to break TCP data streams up into data chunks that are stored on the hard disk (data store) of the SteelHead appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the peer SteelHead appliance across the WAN. If the same byte sequence is seen again in the TCP data stream, the reference is sent across the WAN instead of the raw data chunk. The peer SteelHead appliance uses this reference to reconstruct the original data chunk and the TCP data stream. Data and references are maintained in persistent storage in the data store within each SteelHead appliance. There are no consistency issues, even in the presence of replicated data.
How does SDR work?
When data is sent across the network for the first time (no commonality with any file ever sent before), all data and references are new and are sent to the SteelHead appliance on the far side of the network. This new data and the accompanying references are compressed using conventional algorithms to improve performance, even on the first transfer.
When the data changes, new data and references are created. Thereafter, whenever new requests are sent across the network, the references created are compared with those that already exist in the local data store. Any data that the SteelHead appliance determines already exists on the far side of the network is not sent - only the references are sent across the network.
As files are copied, edited, renamed, and otherwise changed or moved, the SteelHead appliance continually builds up the data store to include more and more data and references. References can be shared by different files and by files in different applications if the underlying bits are common to both. Since SDR can operate on all TCP-based protocols, data commonality across protocols can be leveraged so long as the binary representation of that data does not differ between the protocols. For example, when a file transferred via FTP is then transferred using WFS (Windows File System), the binary representation of the file is basically the same, and thus references can be sent for that file.
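A heavily simplified, hypothetical sketch of the reference idea is below. SDR's real segmentation and labeling are proprietary; here a chunk's label is just a hash, chunks are fixed-size, and a single shared dictionary stands in for the two synchronized data stores.

```python
import hashlib

# Heavily simplified, hypothetical illustration of reference-based reduction.
# SDR's real segmentation and labels are proprietary; here a label is just a
# hash, chunks are fixed-size, and one dict stands in for both peers' data stores.

CHUNK = 4096
data_store: dict = {}       # label -> chunk

def encode(stream: bytes) -> list:
    """Sender side: emit ('ref', label) for known chunks, ('raw', ...) otherwise."""
    out = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        label = hashlib.sha256(chunk).hexdigest()
        if label in data_store:
            out.append(("ref", label))             # only the reference crosses the WAN
        else:
            data_store[label] = chunk
            out.append(("raw", label, chunk))      # new data (would also get LZ compression)
    return out

def decode(messages: list) -> bytes:
    """Receiver side: rebuild the stream from raw chunks and known references."""
    parts = []
    for msg in messages:
        if msg[0] == "raw":
            _, label, chunk = msg
            data_store[label] = chunk
            parts.append(chunk)
        else:
            parts.append(data_store[msg[1]])
    return b"".join(parts)

payload = b"hello steelhead " * 1000
cold = encode(payload)       # first pass: mostly raw chunks plus labels
warm = encode(payload)       # second pass: only references are sent
assert decode(warm) == payload
```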
SDR Flavors (Adaptive Data Streamlining)
Default
- Disk-based data store
- Excellent BW reduction
SDR-M
- RAM-based data store
- Excellent LAN-side throughput
SDR-Adaptive
- Blended data store/compression model
- Monitors both read and write disk I/O response and adjusts behavior based on statistical trends
- Good LAN-side throughput and BW reduction
Part 10: Streamlining Techniques (Data, Transport & Application)
Data Streamlining
Data Reduction
- Eliminate redundant data on the WAN
- 60%-95% reduction in bandwidth utilization
Compression
- LZ compression for "new" data segments
- Useful for data transferred on the first pass
QoS
- (Optional) Prioritize data based on bandwidth and latency
- Compatible with existing QoS implementations
Disaster Recovery Intelligence
- Automatically adapt algorithms to large-scale DR transfers
- Optimize reads, writes, and segment handling for massive loads.
Transport Streamlining
SSL acceleration
- Supports end-to-end acceleration of secure traffic
- Maintains the preferred trust model