FortiOS 5.2 Expert Recipe: SLBC Active-Passive with Four FortiController-5103Bs and Two Chassis

This example describes how to set up an active-passive session-aware load balancing cluster (SLBC) consisting of two FortiGate-5000 chassis, four FortiController-5103Bs (two in each chassis), and six FortiGate-5001Bs acting as workers (three in each chassis). This SLBC configuration can have up to seven redundant 10Gbit network connections.

The FortiControllers operate in active-passive HA mode for redundancy. The FortiController in chassis 1 slot 1 will be configured to be the primary unit, actively processing sessions. The other FortiControllers become the subordinate units.

In active-passive HA with two chassis and four FortiControllers, both chassis have two FortiControllers in active-passive HA mode and the same number of workers. Network connections are duplicated to the redundant FortiControllers in each chassis and between chassis for a total of four redundant data connections to each network.

All traffic is processed by the primary unit. If the primary unit fails, all traffic fails over to the chassis with two functioning FortiControllers and one of these FortiControllers becomes the new primary unit and processes all traffic. If the primary unit in the second chassis fails as well, one of the remaining FortiControllers becomes the primary unit and processes all traffic.

Heartbeat, base control, and base management communication is established between the chassis using the FortiController B1 and B2 interfaces. Only one heartbeat connection is required, but redundant connections are recommended. Connect all of the B1 interfaces together and all of the B2 interfaces together using switches. This example uses one switch for the B1 connections and another for the B2 connections. You could use a single switch for both the B1 and B2 connections, but separate switches provide more redundancy.

The following VLAN tags and subnets are used by traffic on the B1 and B2 interfaces:

  • Heartbeat traffic uses VLAN 999.

  • Base control traffic on the 10.101.11.0/255.255.255.0 subnet uses VLAN 301.

  • Base management traffic on the 10.101.10.0/255.255.255.0 subnet uses VLAN 101.

This example also includes a FortiController session sync connection between the FortiControllers using the FortiController F4 front panel interface (bringing the SLBC to its total of seven redundant 10Gbit network connections). You can use any fabric front panel interface; F4 is used in this example to make the diagram clearer. In a two-chassis active-passive cluster with two or four FortiControllers, the session sync ports of all FortiControllers must be connected to the same broadcast domain. You can do this by connecting all of the F4 interfaces to the same switch.

FortiController-5103B session sync traffic uses VLAN 2000.

This example sets the device priority of the FortiController in chassis 1 slot 1 higher than the device priority of the other FortiControllers to make sure that it becomes the primary FortiController for the cluster. Override is also enabled on the FortiController in chassis 1 slot 1. Enabling override makes it more likely that the unit you select will actually become the primary unit, but it can also cause the cluster to negotiate more often when selecting the primary unit.

For more information about SLBC, see the FortiGate-5000 series SLBC documentation.

1. Hardware setup

Install two FortiGate-5000 series chassis and connect them to power. Ideally each chassis should be connected to a separate power circuit. Install FortiControllers in slots 1 and 2 of each chassis. Install the workers in slots 3, 4, and 5 of each chassis. The workers must be installed in the same slots in both chassis. Power on both chassis.

Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally (for normal-operation LED status, see the FortiGate-5000 series documentation).

Create redundant connections from all four FortiController front panel interfaces to the Internet and to the internal network.

Create a heartbeat link by connecting the FortiController B1 interfaces together. Create a backup heartbeat link by connecting the FortiController B2 interfaces together.

Create FortiController session sync connections between the chassis by connecting the FortiController F4 interfaces together as described above and shown in the diagram.

Connect the mgmt interfaces of all of the FortiControllers to the internal network or any network from which you want to manage the cluster.

Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiControllers and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA product.

2. Configuring the FortiController in Chassis 1 Slot 1

This will become the primary FortiController. To make sure of this, it is assigned the highest device priority and override is enabled. Connect to the GUI (using HTTPS) or CLI (using SSH) of the FortiController in chassis 1 slot 1 with the default IP address (192.168.1.99), or connect to the FortiController CLI through the console port (Bits per second: 9600, Data bits: 8, Parity: None, Stop bits: 1, Flow control: None).

From the Dashboard System Information widget, set the Host Name to ch1-slot1. Or enter this command.

config system global
  set hostname ch1-slot1
end

Add a password for the admin administrator account. You can either use the Administrators widget on the GUI or enter this command.

config admin user
  edit admin
  set password <password>
end

Change the FortiController mgmt interface IP address. Use the GUI Management Port widget or enter this command.

config system interface
  edit mgmt 
  set ip 172.20.120.151/24
end

If you need to add a default route for the management IP address, enter this command.

config route static
  edit 1
  set gateway 172.20.120.2
end

Set the chassis type that you are using.

config system global
  set chassis-type fortigate-5140
end

Enable FortiController session sync.

config load-balance setting 
  set session-sync enable
end

Configure Active-Passive HA. From the FortiController GUI System Information widget, beside HA Status select Configure.

Set Mode to Active-Passive, set the Device Priority to 250, change the Group ID, select Enable Override, enable Chassis Redundancy, set Chassis ID to 1 and move the b1 and b2 interfaces to the Selected column and select OK.

Enter this command to use the FortiController front panel F4 interface for FortiController session sync communication between FortiControllers.

config system ha
  set session-sync-port f4
end

You can also enter the complete HA configuration with this command.

config system ha
  set mode active-passive
  set groupid 15
  set priority 250
  set override enable
  set chassis-redundancy enable
  set chassis-id 1
  set hbdev b1 b2
  set session-sync-port f4
end

If you have more than one cluster on the same network, each cluster should have a different Group ID. Changing the Group ID changes the cluster interface virtual MAC addresses. If your group ID setting causes a MAC address conflict you can select a different Group ID. The default Group ID of 0 is not a good choice and normally should be changed.

You can also adjust other HA settings. For example, you could change the VLAN to use for HA heartbeat traffic if it conflicts with a VLAN on your network. You can also adjust the Heartbeat Interval and Number of Heartbeats lost to adjust how quickly the cluster determines one of the FortiControllers has failed.
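
For example, here is a minimal CLI sketch of this tuning. The hb-interval (in units of 100 ms) and hb-lost-threshold keywords are assumptions based on standard FortiOS HA syntax; verify them against your FortiController firmware before using them.

config system ha
  set hb-interval 10
  set hb-lost-threshold 6
end

With these example values, a FortiController would be declared failed after roughly six missed heartbeats at one-second intervals.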

3. Configuring the FortiController in Chassis 1 Slot 2

Log into the FortiController in chassis 1 slot 2.

Enter these commands to set the host name to ch1-slot2, to configure the mgmt interface, and to duplicate the HA configuration of the FortiController in slot 1, except that override is not enabled and the device priority is set to a lower value (for example, 10).

All other configuration settings are synchronized from the primary FortiController when the cluster forms.

config system global
  set hostname ch1-slot2
end
config system interface
  edit mgmt 
  set ip 172.20.120.152/24
end
config system ha
  set mode active-passive
  set groupid 15
  set priority 10
  set chassis-redundancy enable
  set chassis-id 1
  set hbdev b1 b2
  set session-sync-port f4
end

4. Configuring the FortiController in Chassis 2 Slot 1

Log into the FortiController in chassis 2 slot 1.

Enter these commands to set the host name to ch2-slot1, to configure the mgmt interface, and to duplicate the HA configuration of the FortiController in chassis 1 slot 1, except that override is not enabled, the device priority is set to a lower value (for example, 10), and the Chassis ID is set to 2.

All other configuration settings are synchronized from the primary FortiController when the cluster forms.

config system global
  set hostname ch2-slot1
end
config system interface
  edit mgmt 
  set ip 172.20.120.251/24
end
config system ha
  set mode active-passive
  set groupid 15
  set priority 10
  set chassis-redundancy enable
  set chassis-id 2
  set hbdev b1 b2
  set session-sync-port f4
end

5. Configuring the FortiController in Chassis 2 Slot 2

Log into the FortiController in chassis 2 slot 2.

Enter these commands to set the host name to ch2-slot2, to configure the mgmt interface, and to duplicate the HA configuration of the FortiController in chassis 1 slot 1, except that override is not enabled, the device priority is set to a lower value (for example, 10), and the Chassis ID is set to 2.

All other configuration settings are synchronized from the primary FortiController when the cluster forms.

config system global
  set hostname ch2-slot2
end
config system interface
  edit mgmt 
  set ip 172.20.120.252/24
end
config system ha
  set mode active-passive
  set groupid 15
  set priority 10
  set chassis-redundancy enable
  set chassis-id 2
  set hbdev b1 b2
  set session-sync-port f4
end

6. Configuring the cluster

After a short time the FortiControllers restart in HA mode and form an active-passive SLBC. All of the FortiControllers must have the same HA configuration, and at least one heartbeat link (the B1 or B2 interfaces) must be connected. If the FortiControllers are unable to form a cluster, check that they all have the same HA configuration and that the heartbeat interfaces are connected.
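
As a quick check, you can connect to each FortiController console, compare the configured HA settings, and then view the negotiated cluster state. This is a minimal sketch: show system ha is assumed to display the configured HA values in the standard FortiOS way, and diagnose system ha status is the same command used in the results sections below.

show system ha
diagnose system ha status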

With the configuration described in the previous steps, the FortiController in chassis 1 slot 1 should become the primary unit and you can log into the cluster using the management IP address that you assigned to this FortiController.

The other FortiControllers become backup FortiControllers. You cannot log into or manage the backup FortiControllers until you configure the cluster External Management IP and add workers to the cluster. Once you do this you can use the External Management IP address and a special port number to manage the backup FortiControllers. This is described below. (You can also connect to any backup FortiController CLI using their console port.)

You can confirm that the cluster has been formed by viewing the FortiController HA configuration. The display should show both FortiControllers in the cluster.

You can also go to Load Balance > Status to see the status of the primary FortiController (slot icon colored green).

Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list.

The Config page shows the slots in which the cluster expects to find workers. If the workers have not been configured for SLBC operation their status will be Down.

Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure all of the devices in the cluster.

You can also enter this command to add slots 3, 4, and 5 to the cluster.

config load-balance setting
  config slots
    edit 3
    next
    edit 4
    next
    edit 5
    end
  end

You can also enter this command to set the External Management IP and configure management access.

config load-balance setting
  set base-mgmt-external-ip 172.20.120.100 255.255.255.0
  set base-mgmt-allowaccess https ssh ping
end

Enable base management traffic between FortiControllers.

config load-balance setting
  config base-mgmt-interfaces
    edit b1
    next
    edit b2
    end
  end

Enable base control traffic between FortiControllers.

config load-balance setting
  config base-ctrl-interfaces
    edit b1
    next
    edit b2
    end
  end

7. Adding the workers to the cluster

Reset each worker to factory default settings.

execute factoryreset

If the workers will run FortiOS Carrier, apply the FortiOS Carrier license instead; applying the license also resets the worker to factory default settings.

Give the mgmt1 or mgmt2 interface of each worker an IP address and connect these interfaces to your network. This step is optional but useful because when the workers are added to the cluster, these IP addresses are not synchronized, so you can connect to and manage each worker separately.

config system interface
  edit mgmt1
  set ip 172.20.120.120/24
end

Optionally give each worker a different hostname. The hostname is also not synchronized and allows you to identify each worker.

config system global
  set hostname worker-chassis-1-slot-3
end

Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation and FortiClient licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time because they are synchronized to all of the workers.

Log into the CLI of each worker and enter this command to set the worker to operate in FortiController mode. The worker restarts and joins the cluster.

config system elbc
  set mode forticontroller
end

8. Managing the cluster

After the workers have been added to the cluster you can use the External Management IP to manage the primary worker. This includes access to the primary worker GUI or CLI, SNMP queries to the primary worker, and using FortiManager to manage the primary worker. SNMP traps and log messages are sent from the primary worker with the External Management IP as their source address, and connections to FortiGuard for updates, web filtering lookups, and so on also originate from the External Management IP.

You can use the external management IP followed by a special port number to manage individual devices in the cluster. The special port number identifies the protocol and the chassis and slot number of the device you want to connect to. In fact this is the only way to manage the backup FortiControllers. The special port number begins with the standard port number for the protocol you are using and is followed by two digits that identify the chassis number and slot number. The port number is determined using the following formula:

    service_port x 100 + (chassis_id – 1) x 20 + slot_id

service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, 22 for SSH, 23 for Telnet, 161 for SNMP). chassis_id is the Chassis ID part of the FortiController HA configuration and can be 1 or 2. slot_id is the number of the chassis slot.
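
The calculation is easy to script. The following Python sketch is not part of the original recipe; it simply implements the formula above so you can compute the special port for any chassis and slot.

# Compute the SLBC special management port number.
def slbc_mgmt_port(service_port, chassis_id, slot_id):
    # service_port x 100 + (chassis_id - 1) x 20 + slot_id
    return service_port * 100 + (chassis_id - 1) * 20 + slot_id

print(slbc_mgmt_port(443, 1, 2))  # 44302: HTTPS, chassis 1 slot 2
print(slbc_mgmt_port(80, 2, 4))   # 8024: HTTP, chassis 2 slot 4
print(slbc_mgmt_port(22, 1, 5))   # 2205: SSH, chassis 1 slot 5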

Some examples:

  • HTTPS, chassis 1 slot 2: 443 x 100 + (1 – 1) x 20 + 2 = 44300 + 0 + 2 = 44302, browse to: https://172.20.120.100:44302

  • HTTP, chassis 2, slot 4: 80 x 100 + (2 – 1) x 20 + 4 = 8000 + 20 + 4 = 8024, browse to: http://172.20.120.100:8024

  • HTTPS, chassis 1, slot 10: 443 x 100 + (1 – 1) x 20 + 10 = 44300 + 0 + 10 = 44310, browse to: https://172.20.120.100:44310

  • HTTPS, chassis 2, slot 10: 443 x 100 + (2 – 1) x 20 + 10 = 44300 + 20 + 10 = 44330, browse to: https://172.20.120.100:44330

  • SNMP query port, chassis 1, slot 4: 161 x 100 + (1 – 1) x 20 + 4 = 16100 + 0 + 4 = 16104

  • Telnet to connect to the CLI of the worker in chassis 2 slot 4: telnet 172.20.120.100 2324

  • To use SSH to connect to the CLI the worker in chassis 1 slot 5: ssh admin@172.20.120.100 -p2205

You can also manage the primary FortiController using the IP address of its mgmt interface, set up when you first configured the primary FortiController. You can also manage the workers by connecting directly to their mgmt1 or mgmt2 interfaces if you set them up. However, the only way to manage a backup FortiController is to use its special port number (or a serial connection to the console port).

To manage a FortiController using SNMP you need to load the FORTINET-CORE-MIB.mib file into your SNMP manager. You can get this MIB file from the Fortinet support site, in the same location as the current FortiController firmware (select the FortiSwitch-ATCA product).

On the primary FortiController GUI go to Load Balance > Status. As the workers in chassis 1 restart they should appear in their appropriate slots.

The primary FortiController should be the FortiController in chassis 1 slot 1. The primary FortiController status display includes a Config Master link that you can use to connect to the primary worker.

Log into a backup FortiController GUI (for example by browsing to https://172.20.120.100:44321 to log into the FortiController in chassis 2 slot 1) and go to Load Balance > Status. If the workers in chassis 2 are configured correctly they should appear in their appropriate slots.

The backup FortiController Status page shows the status of the workers in chassis 2 and does not include the Config Master link.

9. Results – Configuring the workers

Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By default all FortiController front panel interfaces are in the worker root VDOM. You can keep them in the root VDOM or create additional VDOMs and move interfaces into them.

For example, if you connect the Internet to FortiController front panel interface 2 (fctrl/f2 on the worker GUI and CLI) and the internal network to FortiController front panel interface 6 (fctrl/f6), you can access the root VDOM and add a policy to allow users on the internal network to access the Internet, as sketched below.
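
Here is a minimal sketch of such a policy from the worker CLI. The interface names follow the fctrl/f2 and fctrl/f6 example above; the all addresses, ALL service, and NAT setting are illustrative placeholders to adapt to your own network.

config firewall policy
  edit 1
    set srcintf fctrl/f6
    set dstintf fctrl/f2
    set srcaddr all
    set dstaddr all
    set action accept
    set schedule always
    set service ALL
    set nat enable
  next
end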

10. Results – Primary FortiController cluster status

Log into the primary FortiController CLI and enter this command to view the system status of the primary FortiController.

For example, you can use SSH to log into the primary FortiController CLI using the external management IP:

ssh admin@172.20.120.100 -p2201
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000029
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch1-slot1
Current HA mode: a-p, master
System time: Sun Sep 14 08:16:25 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)

Enter this command to view the load balance status of the primary FortiController and its workers. The command output shows the workers in slots 3, 4, and 5, and status information about each one.    

get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
   Working: 3 [ 3 Active 0 Standby]
   Ready:   0 [ 0 Active 0 Standby]
   Dead:    0 [ 0 Active 0 Standby]
  Total:    3 [ 3 Active 0 Standby]
Slot 3: Status:Working   Function:Active
   Link:     Base: Up        Fabric: Up
   Heartbeat: Management: Good  Data: Good
   Status Message:"Running"
Slot 4: Status:Working   Function:Active
   Link: Base: Up            Fabric: Up
   Heartbeat: Management: Good  Data: Good
   Status Message:"Running"
Slot 5: Status:Working   Function:Active
   Link: Base: Up            Fabric: Up
   Heartbeat: Management: Good  Data: Good
   Status Message:"Running"

Enter this command from the primary FortiController to show the HA status of the FortiControllers. The command output shows a lot of information about the cluster, including the host names and chassis and slot locations of the FortiControllers, the number of sessions each FortiController is processing (in this case 0 for each FortiController), the number of failed workers (0 of 3 for each FortiController), the number of FortiController front panel interfaces that are connected (2 for each FortiController), and so on. The final two lines of output also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not (status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat interfaces are recommended.

diagnose system ha status  
mode: a-p
minimize chassis failover: 1
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.121, uptime=4416.18, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 best=yes
            local_interface=        b2 best=no
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.123, uptime=1181.62, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 4739.97   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.124, uptime=335.79, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 4739.93   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.122, uptime=4044.46, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 4740.03   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead

11. Results – Chassis 1 Slot 2 FortiController status

Log into the chassis 1 slot 2 FortiController CLI and enter this command to view the status of this backup FortiController.

To use SSH:

ssh admin@172.20.120.100 -p2202
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3914000006
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch1-slot2
Current HA mode: a-p, backup
System time: Sun Sep 14 12:44:58 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)

Enter this command to view the status of this backup FortiController and its workers.

get load-balance status       
  ELBC Master Blade: slot-3
  Confsync Master Blade: slot-3
  Blades:
     Working:  3 [  3 Active  0 Standby]
     Ready:    0 [  0 Active  0 Standby]
     Dead:     0 [  0 Active  0 Standby]
    Total:     3 [  3 Active  0 Standby]
     Slot  3: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"
     Slot  4: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"
     Slot  5: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"

Enter this command from the FortiController in chassis 1 slot 2 to show the HA status of the FortiControllers. Notice that the FortiController in chassis 1 slot 2 is shown first.

diagnose system ha status  
mode: a-p
minimize chassis failover: 1
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.122, uptime=4292.69, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 best=yes
            local_interface=        b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.121, uptime=4664.49, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 4958.88   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.123, uptime=1429.99, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 4958.88   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.124, uptime=584.20, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 4958.88   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead

12. Results – Chassis 2 Slot 1 FortiController status

Log into the chassis 2 slot 1 FortiController CLI and enter this command to view the status of this backup FortiController.

To use SSH:

ssh admin@172.20.120.100 -p2221
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000051
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch2-slot1
Current HA mode: a-p, backup
System time: Sun Sep 14 12:53:09 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)

Enter this command to view the status of this backup FortiController and its workers.

get load-balance status       
  ELBC Master Blade: slot-3
  Confsync Master Blade: N/A
  Blades:
     Working:  3 [  3 Active  0 Standby]
     Ready:    0 [  0 Active  0 Standby]
     Dead:     0 [  0 Active  0 Standby]
    Total:     3 [  3 Active  0 Standby]
     Slot  3: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"
     Slot  4: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"
     Slot  5: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"

Enter this command from the FortiController in chassis 2 slot 1 to show the HA status of the FortiControllers. Notice that the FortiController in chassis 2 slot 1 is shown first.

diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.123, uptime=1858.71, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 best=yes
            local_interface=        b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.121, uptime=5093.30, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 2074.15   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.124, uptime=1013.01, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 2074.15   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.122, uptime=4721.60, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 2074.17   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead

13. Results – Chassis 2 Slot 2 FortiController status

Log into the chassis 2 slot 2 FortiController CLI and enter this command to view the status of this backup FortiController.

To use SSH:

ssh admin@172.20.120.100 -p2222
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3913000168
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch2-slot2
Current HA mode: a-p, backup
System time: Sun Sep 14 12:56:45 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)

Enter this command to view the status of the backup FortiController and its workers.

get load-balance status       
  ELBC Master Blade: slot-3
  Confsync Master Blade: N/A
  Blades:
     Working:  3 [  3 Active  0 Standby]
     Ready:    0 [  0 Active  0 Standby]
     Dead:     0 [  0 Active  0 Standby]
    Total:     3 [  3 Active  0 Standby]
     Slot  3: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"
     Slot  4: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"
     Slot  5: Status:Working   Function:Active 
       Link:      Base: Up          Fabric: Up  
       Heartbeat: Management: Good   Data: Good  
       Status Message:"Running"

Enter this command from the FortiController in chassis 2 slot 2 to show the HA status of the FortiControllers. Notice that the FortiController in chassis 2 slot 2 is shown first.

diagnose system ha status  
mode: a-p
minimize chassis failover: 1
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.124, uptime=1276.77, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 best=yes
            local_interface=        b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.121, uptime=5356.98, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 1363.89   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.123, uptime=2122.58, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 1363.97   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.122, uptime=4985.27, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0,  session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0
 force-state(0:none)    hbdevs: local_interface=        b1 last_hb_time= 1363.89   status=alive
            local_interface=        b2 last_hb_time=    0.00   status=dead