This section continues with using PowerShell to manage the basic functions of IAM, primarily the creation and configuration of users, groups, roles, and policies.
Create a group:
New-IAMGroup -GroupName "PowerUsers"
Create a new user:
New-IAMUser -UserName "MyNewUser"
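A hedged sketch of the remaining basics, using the group and user just created (the policy documents here are minimal examples, not production policies):
# Put the new user into the group
Add-IAMUserToGroup -GroupName "PowerUsers" -UserName "MyNewUser"

# Attach an inline policy to the group (S3 read-only, as an example)
$policy = @'
{ "Version": "2012-10-17",
  "Statement": [ { "Effect": "Allow", "Action": [ "s3:Get*", "s3:List*" ], "Resource": "*" } ] }
'@
Write-IAMGroupPolicy -GroupName "PowerUsers" -PolicyName "S3ReadOnly" -PolicyDocument $policy

# Create a role that EC2 instances are allowed to assume
$trust = @'
{ "Version": "2012-10-17",
  "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
'@
New-IAMRole -RoleName "EC2-S3" -AssumeRolePolicyDocument $trust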
The s3cmd command is a very powerful tool for working with AWS S3: it can not only download and upload files, but also create directories, among other features.
s3cmd fits a very wide range of scenarios. For example, you can combine s3cmd with cron to back up local log files to S3 on a regular schedule, or, when log files have a retention period of 365 days, use s3cmd to delete the expired directory from S3, and so on.
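For instance, a daily backup plus cleanup could look like the following (bucket name, paths, and the expired prefix are assumptions):
# one-off upload of a single file
s3cmd put /var/log/app/app.log s3://my-backup-bucket/logs/
# cron entry: sync the log directory to S3 at 02:00 every day (% must be escaped in crontab)
# 0 2 * * * s3cmd sync /var/log/app/ s3://my-backup-bucket/logs/$(date +\%Y\%m\%d)/
# delete a directory whose retention period has expired
s3cmd del --recursive s3://my-backup-bucket/logs/20150101/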
RDB: MySQL; NoSQL: MongoDB
Cache: Memcached, Redis
Content publishing: CDN, DNS
Other: Lucene (full-text search tool)
3.2. Architectural Considerations
Performance
High availability
Scalability:
Support rapid growth in customers, business, traffic, and data
Growth is difficult to plan for, so capacity must be able to grow without limits
Performance must not degrade when scaling out
Seamless: simply a smooth addition of resources
Efficient: keep the per-unit cost of use low
If you open an AWS account and use Amazon's web services, you have probably already paid bills by credit card. Recently I found the current AWS billing getting stranger and stranger, so it is better to close the account first so that it does not quietly keep charging me.
I have tasted both the sweet and the bitter with AWS. Let's talk about the sweet part first.
This is the third part of the series on building a highly available AWS environment with PowerShell; let's look at how the second half of the work is done. The remaining tasks are listed below, with a PowerShell sketch of the network steps after the list.
Create an EC2-S3 role and assign it to EC2 instances so that they automatically have access to S3 content after they are created.
Create a VPC Network
Create two subnets in the VPC, located in different AZs
Create an Internet gateway
Configure the routing table
Create and configure the EC2 security group
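A hedged PowerShell sketch of the network steps above, using the AWS Tools for PowerShell (CIDR blocks, availability zones, and the security-group rule are example values; credentials and -Region are assumed to be configured already, and the EC2-S3 role itself is an IAM task like the one sketched earlier):
# VPC
$vpc = New-EC2Vpc -CidrBlock "10.0.0.0/16"

# Two subnets in different availability zones (zone names are examples)
$subnetA = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.1.0/24" -AvailabilityZone "us-east-1a"
$subnetB = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.2.0/24" -AvailabilityZone "us-east-1b"

# Internet gateway attached to the VPC
$igw = New-EC2InternetGateway
Add-EC2InternetGateway -InternetGatewayId $igw.InternetGatewayId -VpcId $vpc.VpcId

# Route table with a default route to the internet gateway, associated with both subnets
$rt = New-EC2RouteTable -VpcId $vpc.VpcId
New-EC2Route -RouteTableId $rt.RouteTableId -DestinationCidrBlock "0.0.0.0/0" -GatewayId $igw.InternetGatewayId
Register-EC2RouteTable -RouteTableId $rt.RouteTableId -SubnetId $subnetA.SubnetId
Register-EC2RouteTable -RouteTableId $rt.RouteTableId -SubnetId $subnetB.SubnetId

# Security group allowing inbound HTTP (older module versions expose IpRanges as a string list;
# newer ones use Ipv4Ranges with IpRange objects)
$sgId = New-EC2SecurityGroup -GroupName "WebSG" -Description "Allow HTTP" -VpcId $vpc.VpcId
$perm = New-Object Amazon.EC2.Model.IpPermission
$perm.IpProtocol = "tcp"
$perm.FromPort = 80
$perm.ToPort = 80
$perm.IpRanges.Add("0.0.0.0/0")
Grant-EC2SecurityGroupIngress -GroupId $sgId -IpPermission $perm
Associating the route table with both subnets is what makes both of them public, which is the point of spreading them across two AZs.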
AWS OpsWorks is an application management service. With it you define your application as a stack made up of a collection of layers. Each layer specifies the packages that need to be installed and configured, and OpsWorks also deploys any AWS resources defined for that layer. Depending on load or a predefined schedule, OpsWorks can also scale your application as needed.
If you plan to
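For reference, a hedged sketch of creating a stack with the AWS CLI; the stack name, region, and the service-role and instance-profile ARNs are placeholders you must already have:
aws opsworks create-stack \
    --name "MyAppStack" \
    --stack-region us-east-1 \
    --service-role-arn arn:aws:iam::123456789012:role/aws-opsworks-service-role \
    --default-instance-profile-arn arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role
Layers, apps, and instances are then added to the stack (create-layer, create-app, create-instance), and time-based or load-based scaling is configured per layer.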
In ES, dynamic mapping indexes fields using automatic detection (for example, date detection is on by default and numeric detection is off by default), so documents can be indexed automatically; when a field needs a specific type, such as ip, you define it in a mapping when the index is created. In Logstash, the settings for the default index are template-based. First we need to specify a default mapping file; the contents of the file are as follows:
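The original listing is cut off here, so the following is only a minimal sketch of such a template (the logstash-* index pattern and the field names are assumptions, not the author's file):
{
  "template": "logstash-*",
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "_default_": {
      "properties": {
        "@timestamp": { "type": "date" },
        "clientip":   { "type": "ip" }
      }
    }
  }
}
In the Logstash elasticsearch output, such a file is referenced through the template option, usually together with template_overwrite => true so that it replaces the built-in default.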
1. Configure log4j.properties:
log4j.rootLogger=INFO,debug,logstash
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.Port=4560
log4j.appender.logstash.RemoteHost=10.0.0.5
log4j.appender.logstash.ReconnectionDelay=60000
log4j.appender.logstash.LocationInfo=true
2. Modify the Logstash input component (favblog-log4j.conf) to output the log to Elasticsearch:
input {
  log4j {
    host => "10.0.0.5"
    mode => "server"
    type => "log4j-json"
    port => 4560
  }
}
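The input listing above stops before the output section. A minimal sketch of the missing output, assuming Logstash 2.x or later (older versions use host/port instead of hosts) and a hypothetical index name:
output {
  elasticsearch {
    hosts => ["10.0.0.5:9200"]
    index => "favblog-log4j-%{+YYYY.MM.dd}"
  }
}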
Benefits of unified, real-time log collection:
1. Quickly locate the problem machine in the cluster.
2. No need to download entire log files (they are often large and take a long time to download).
3. Logs can be analyzed statistically:
a. find the most frequently occurring exceptions and tune for them;
b. count crawler IPs;
c. analyze user behavior, do cluster analysis, and so on.
Based on the above requirements, I adopted the ELK (Elasticsearch + Logstash + Kibana) stack.
The core idea of the logstash-forwarder source is the following roles (modules):
Prospector: finds the files matching paths/globs, starts harvesters, and hands each file to a harvester.
Harvester: reads the file and submits the corresponding events to the spooler.
Spooler: acts as a buffer pool; when it reaches its size limit or its timer fires, it flushes the events in the pool to the publisher.
Publisher: connects over the network (the connection is authenticated with SSL) and transfers the buffered events to the remote Logstash server.
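For reference, a minimal logstash-forwarder configuration that exercises these roles might look like the following (server address, certificate path, and log paths are assumptions):
{
  "network": {
    "servers": [ "10.0.0.5:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    { "paths": [ "/var/log/*.log" ], "fields": { "type": "syslog" } }
  ]
}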
Benefits: the project log is written to Logstash and then sent on to Elasticsearch, which makes it easy to search and view the logs and to do report analysis. Logstash is a data collection tool with many input channels, such as files, TCP, UDP, and so on. If you collect log files, the files have to be stored on the server and a Logstash service started there, which is not easy to deploy quickly; the TCP/UDP approach, by contrast, is relatively easy to deploy.
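A minimal sketch of such a TCP input (the port and codec are assumptions; stdout is used here only to verify that events arrive):
input {
  tcp {
    port => 4567
    codec => json_lines
  }
}
output {
  stdout { codec => rubydebug }
}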
Here I am demonstrating the operation under Windows. First download logstash-5.6.1 directly from the official website.
1. You need to create the following two files, jdbc.conf and myes.sql:
input {
  stdin {}
  jdbc {
    jdbc_driver_library => "D:\jdbcconfig\sqljdbc4-4.0.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://127.0.0.1:1433;databaseName=abtest"
    jdbc_user => "sa"
    jdbc_password => "123456"
    # schedule =>
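The listing is cut off at the schedule option. A hedged sketch of how such a pipeline is typically finished: a cron-style schedule, the query kept in myes.sql via statement_filepath, and an Elasticsearch output (the host and index name are assumptions):
    schedule => "* * * * *"
    statement_filepath => "D:\jdbcconfig\myes.sql"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "abtest"
  }
}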
Come to think of it, it was two years ago that I used AWS for the first time, and these two years have seen the rapid development of cloud computing and big data technologies. During this period my free-tier instance has been running for nearly a year and will soon enter the billing cycle. Although I used an Aliyun product for a while in between (a lot of money), and now I contribute $5 to DigitalOcean every month, only
AWS VPC, Subnet, CIDR
What is CIDR?
CIDR is short for Classless Inter-Domain Routing. It is a method for creating additional addresses on the Internet: address blocks are provided to service providers (ISPs), which in turn assign them to their customers. CIDR aggregates routes so that a single IP prefix can represent thousands of IP addresses served by a major backbone provider, thus reducing the size of Internet routing tables.
By default you log on to AWS EC2 with the ec2-user account, which has no permissions on many folders. How to execute commands as the root account is a problem. The solution is as follows:
1. Log in to the EC2 server by the method provided on the official website (Windows users are recommended to use a PuTTY connection); Host is the server's public DNS, port: 22.
2. Create the root password by entering the following command: sudo passwd root
3.
The plan for this LAN calls for 500 hosts, so 500 IP addresses are required. In classful terms an address block that large is a Class B address, whose format is as follows: 192.168.0.0/255.255.0.0 (11000000.10101000.00000000.00000000). A Class B block has 256*256 = 65536 host addresses, but only 500 are required, which wastes IP addresses. Next we do route aggregation: since only 500 host addresses are needed, the last 9 bits are used as the host part. Using the last 9 bits as the host part means that 2^9 = 512 addresses are available, which covers the 500 hosts, so the mask becomes 255.255.254.0, i.e. a /23.
Objective: upload files from a Linux server to S3 via the AWS command line.
Configuration: open your AWS console, then connect to your Linux server and follow the steps below:
# install pip
yum -y install python-pip
# install awscli
pip install awscli
# initialize the configuration
aws configure
# This step asks you to enter the "Access Key ID", "Secret Access Key", "Default region name", and "Default output format"; the first two are generated when you create an access key in the IAM console.
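Once configured, an upload can be as simple as the following (bucket name and paths are hypothetical):
# upload a single file
aws s3 cp /var/log/myapp.log s3://my-bucket/logs/
# or keep a whole directory in sync
aws s3 sync /var/log/myapp/ s3://my-bucket/logs/myapp/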
In fact, creating the corresponding policy and alarm for the Auto Scaling group took me a lot of effort, because for whatever reason the creation cmdlet is Write-ASScalingPolicy, while the corresponding read cmdlet is Get-ASPolicy and the delete cmdlet is Remove-ASPolicy, which does not follow a consistent naming scheme at all.
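Putting those three cmdlets together (the group and policy names are examples; the returned policy ARN is what a CloudWatch alarm created with Write-CWMetricAlarm would reference):
# Create a scale-out policy and keep its ARN for the alarm action
$policyArn = Write-ASScalingPolicy -AutoScalingGroupName "myASG" -PolicyName "ScaleOut" -AdjustmentType "ChangeInCapacity" -ScalingAdjustment 1
# Read it back
Get-ASPolicy -AutoScalingGroupName "myASG"
# Delete it
Remove-ASPolicy -AutoScalingGroupName "myASG" -PolicyName "ScaleOut" -Force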
Finally, let's see how to manage the AWS DNS service with PowerShell. Getting started with Route 53 is simple: you can register a new domain name on AWS, or register one on another site and then migrate it over. Beans already has the domain beanxyz.com on GoDaddy, and moving its management to Route 53 is very simple: create a new hosted zone for beanxyz.com in Route 53, and it will automatically generate a set of NS records (name servers) that you then configure at the registrar.
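A minimal sketch with the Route 53 cmdlets (the caller reference just needs to be a unique string; the domain is the one mentioned in the text):
New-R53HostedZone -Name "beanxyz.com" -CallerReference ([guid]::NewGuid().ToString())
# Hosted zone names come back with a trailing dot
Get-R53HostedZones | Where-Object { $_.Name -eq "beanxyz.com." }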
A Linux VPS without root privileges is very inconvenient to manage, and password login is also more convenient. My AWS VPS runs Ubuntu 13.10. First sign in with the AWS certificate-verified account, then:
1. Change the root password: sudo passwd root
2. sudo chmod 777 /etc/ssh/sshd_config (change the permissions back after you are done).
3. vi /etc/ssh/sshd_config; the PermitRootLogin line should read PermitRootLogin yes, and PasswordAuthentication should likewise be set to yes.
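Putting the steps together (a sketch; "service ssh restart" assumes Ubuntu's service name, and chmod 644 restores the usual permissions on sshd_config):
sudo passwd root
sudo chmod 777 /etc/ssh/sshd_config
# edit /etc/ssh/sshd_config so that it contains:
#   PermitRootLogin yes
#   PasswordAuthentication yes
sudo chmod 644 /etc/ssh/sshd_config
sudo service ssh restart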