Running your own Cloud Foundry on your IaaS, Part 2

Tags: network function, cloudstack

(Continuing from Part 1)


Step 3. Configure the new VM created from the template

After the single-node Cloud Foundry is installed, we can use vmc to test whether each component starts normally. Once that checks out, we can use the template function of the IaaS to turn the VM holding the full Cloud Foundry installation into a template, which will be used for cluster creation. In this step you can use your favorite IaaS, such as CloudStack, OpenStack, or Amazon EC2. Here I use CloudStack as an example.

1. Create a snapshot of the root volume of the template VM. (Or shut it down and create a template directly from the VM.)

2. Create a template from the snapshot.

3. Use this template to create a new VM. Here I use 2 GB of memory and a 20 GB disk.
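If you drive CloudStack from the command line, the three steps above can be sketched with its cloudmonkey CLI. This is only a sketch: every id below is a placeholder you have to look up in your own zone, and the template name is made up for illustration.

```
# Sketch with CloudStack's cloudmonkey CLI; all ids are placeholders.
create snapshot volumeid=<root-volume-id>
create template snapshotid=<snapshot-id> name=cf-single-node \
    displaytext="single-node Cloud Foundry" ostypeid=<os-type-id>
deploy virtualmachine serviceofferingid=<2GB-offering-id> \
    templateid=<cf-template-id> zoneid=<zone-id>
```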


However, the CF in the newly created VM cannot be started directly, because its IP address has changed. We need to reconfigure it.

Here I wrote a script that changes the IP address in the VM's CF configuration to the new VM's IP address, and restarts the PostgreSQL service, whose configuration is also tied to the IP address.

echo -e "\033[32m================== Reconfiguring the CloudFoundry now ===================\n \033[0m"

localip=`/sbin/ifconfig -a | grep inet | grep -v 127.0.0.1 | grep -v inet6 | awk '{print $2}' | tr -d "addr:"`

grep 172.17.4.221 -rl /root/cloudfoundry/.deployments/devbox/config
if [ $? -ne 0 ]; then
    echo -e "Nothing need to be done here \n"
else
    sed -i "s/172.17.4.221/${localip}/g" `grep 172.17.4.221 -rl /root/cloudfoundry/.deployments/devbox/config`
    echo -e "\033[33m\nThe IP address of this CloudFoundry node has been set to ${localip} \033[0m\n"
fi

grep 172.17.4.221 -rl /etc/postgresql
if [ $? -ne 0 ]; then
    echo -e "Nothing need to be done here \n"
else
    sed -i "s/172.17.4.221/${localip}/g" `grep 172.17.4.221 -rl /etc/postgresql`
    echo -e "\033[33m\nThe IP address of the postgresql node has been set to ${localip} \033[0m\n"
fi

echo -e "\033[34mRestarting PostgreSQL ...\n\033[0m"
/etc/init.d/postgresql-8.4 stop
/etc/init.d/postgresql-8.4 start
/etc/init.d/postgresql stop
/etc/init.d/postgresql start

echo -e "\033[32m\nReconfiguration succeeded!\n\033[0m"
echo -e "\033[32m\nYou can use export CLOUD_FOUNDRY_EXCLUDED_COMPONENT=\"comp1|comp2|...\" to choose services\n\033[0m"

172.17.4.221 is the IP address of the template VM. This script is very simple, so it has some limitations:

1. The grep pipeline that captures the local machine's IP address may fail on VMs with multiple NICs (or with VMware Player installed).

2. The cloudfoundry path is hard-coded.

Feel free to write a better version yourself.
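As one possible direction, here is a sketch of a slightly more robust variant (not the author's original script): the template IP and the config root become arguments instead of hard-coded values, and the local IP is derived from the default route, which copes better with multiple NICs than grepping `ifconfig` output.

```shell
#!/bin/sh
# Sketch: replace a template IP with this machine's IP in every file
# under a config directory. old_ip/config_dir are caller-supplied.
replace_ip() {
    old_ip=$1
    config_dir=$2

    # Primary local IP from the routing table; fall back to hostname -I.
    new_ip=$(ip route get 8.8.8.8 2>/dev/null |
        awk '/src/ {for (i = 1; i <= NF; i++) if ($i == "src") {print $(i + 1); exit}}')
    [ -n "$new_ip" ] || new_ip=$(hostname -I 2>/dev/null | awk '{print $1}')

    files=$(grep -rl "$old_ip" "$config_dir" 2>/dev/null)
    if [ -z "$files" ]; then
        echo "Nothing to be done in $config_dir"
    else
        echo "$files" | while read -r f; do
            sed -i "s/$old_ip/$new_ip/g" "$f"
        done
        echo "Replaced $old_ip with $new_ip under $config_dir"
    fi
}

# Usage, with the template IP from this article:
#   replace_ip 172.17.4.221 /root/cloudfoundry/.deployments/devbox/config
#   replace_ip 172.17.4.221 /etc/postgresql
```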


Step 4. Configure and connect the VMs together

Once the IP address is fixed up, a new full-function CF node can work. The Cloud Foundry design is very concise and the coupling between modules is very low; it is built for clusters. So our next job is very simple: just start the required components on these VMs and connect them through NATS!

This is our simplest multi-node deployment layout:


Node0: load balancer (nginx)
Node1: CC, UAA, router0
Node2: DEA0, mysql_node0
Node3: DEA1, mysql_node1
Node4: NATS, HM
Node5: router1, mysql_gateway

This is effectively a multi-router, multi-DEA, multi-MySQL deployment.

All right, clone these nodes one by one, and then do the following simple work:

1. Log in to each VM, e.g. node1.

2. Find the NATS line in ./devbox/config/cloud_controller.yml, which looks like nats://nats:nats@172.17.4.219:4222

3. Change that IP address to node4's.

4. Do the same for the other nodes, and start the components each node needs (.../vcap_dev start xxx ...)

Special handling:

1. HM and CC need to share the database, so if your HM is on a separate node, change this IP address to your CC's IP address:

# This database is shared with the cloud controller.
database_environment:
  production:
    database: cloud_controller
    host: 172.17.13.86

In addition, change the CC external_url to api.yourdomain.com. Note that all URLs like *.vcap.me in every CF component's configuration file must be changed as well, so you need to check the config of each component.
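To avoid missing a file, a hedged sketch that rewrites every vcap.me reference under a config tree to your own domain (the directory and domain are placeholders; in this article's devbox layout the config root is /root/cloudfoundry/.deployments/devbox/config):

```shell
#!/bin/sh
# Sketch: point every `vcap.me` reference under a config tree at a new
# domain. config_dir/new_domain are caller-supplied placeholders.
rewrite_domain() {
    config_dir=$1
    new_domain=$2
    grep -rl 'vcap\.me' "$config_dir" 2>/dev/null | while read -r f; do
        sed -i "s/vcap\.me/$new_domain/g" "$f"
    done
}

# Example:
#   rewrite_domain /root/cloudfoundry/.deployments/devbox/config yourdomain.com
```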

(The *.yourdomain.com domain will eventually be bound to the LB, which is explained below.)

2. When there are multiple nodes of one service, don't forget to give each one a number (index):

index: 1
pid: /var/vcap/sys/run/mysql_node.pid
node_id: mysql_node_1
# CloudFoundry needs this index to distinguish the mysql nodes.

3. Start the NATS service independently on the NATS node:

/etc/init.d/nats-server start


4. Multiple routers: here our policy is to put one nginx in front of the routers to distribute traffic. The nginx node is independent, and its configuration file looks like this:


upstream cf_routers {
    server ip_of_router_0;
    server ip_of_router_1;
}

server {
    listen 80;
    # If you do not have a domain, try to use your hosts file.
    server_name *.yourdomain.com;
    server_name_in_redirect off;
    location / {
        access_log /root/cloudfoundry/.deployments/devbox/log/nginx_access.log main;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_connect_timeout 10;
        proxy_send_timeout 30;
        proxy_read_timeout 30;

        proxy_pass http://cf_routers;
    }
}

Do not forget to restart nginx after configuring: /etc/init.d/nginx restart

Finally, bind *.yourdomain.com to this LB in the network function of your IaaS layer, and then use your cluster environment as the target domain.
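If your IaaS gives you no wildcard DNS, keep in mind that a hosts file cannot match *.yourdomain.com; each hostname needs its own explicit line. A sketch for a client machine, with 172.17.4.230 standing in for the LB's address:

```
# /etc/hosts on the client (172.17.4.230 is a placeholder LB address)
172.17.4.230  api.yourdomain.com
172.17.4.230  myapp.yourdomain.com
```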

Step 5. Other things to do

By now, the cluster itself is complete. However, there are still a few follow-up things to do.


First, the CC and HM here are single-node. What if you want multiple nodes? In fact, these nodes need to share the following two directories:

droplets: /var/vcap/shared/droplets

resources: /var/vcap/shared/resources

Therefore, we need to set up an NFS server and have the CC nodes mount these two directories from it over their local paths. NFS is just the native choice here; you can replace it with another file system that supports FUSE, and CF supports this kind of modification.
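As a sketch of the plain-NFS variant (the server address 172.17.4.225 is a placeholder, and the export options are just common defaults, not mandated by CF):

```
# On the NFS server: export the two shared directories via /etc/exports,
# then reload the export table with `exportfs -ra`.
/var/vcap/shared/droplets   *(rw,sync,no_subtree_check)
/var/vcap/shared/resources  *(rw,sync,no_subtree_check)

# On each extra CC node: mount them over the local paths
# (172.17.4.225 is a placeholder for the NFS server's IP).
#   mount -t nfs 172.17.4.225:/var/vcap/shared/droplets  /var/vcap/shared/droplets
#   mount -t nfs 172.17.4.225:/var/vcap/shared/resources /var/vcap/shared/resources
```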

Of course, the multiple CCs all map to the same external_url, i.e. api.yourdomain.com, so do not forget to modify each CC's configuration file accordingly.

At this point, a request to your CF target follows this path:

vmc target api.yourdomain.com -> LB selects a router -> the router selects a cloud controller


Second, a cross-node database needs to be shared among the CC and HM nodes (its configuration can be seen in the CC & HM configuration files); connect CC & HM to the database's master node in those configuration files, as shown earlier.

By the way, our work inadvertently imitates BOSH. In our lab, an HTTP server is created and started inside the template VM to receive task requests from the client, which is similar to what the BOSH agent does. Compared with BOSH, though, our deployment method is only semi-automatic. On the other hand, the IaaS in our lab is not just CloudStack, so we did not need the full set of BOSH CPIs. You can refer to the official Cloud Foundry materials to learn about BOSH.
