First, the Java Foundation
1. Collection Framework
A) What are the differences among HashMap, Hashtable, and ConcurrentHashMap?
1) HashMap: allows one null key and multiple null values; the default capacity is 16, the load factor is 0.75f, and the capacity doubles on each resize; it is unsynchronized and not thread-safe.
2) Hashtable: allows neither null keys nor null values; the default initial capacity is 11, the load factor is 0.75f; it is a synchronized, thread-safe map (only one thread can access a Hashtable at a time).
3) ConcurrentHashMap: a thread-safe map with finer-grained locking (a lock covers only part of the map, so one thread locks only the key-value pairs it touches and different threads can access different key-value pairs of the same map at the same time).
Underlying implementations of HashMap, LinkedList, and ArrayList
Hand-write LinkedList, ArrayList, Queue, and Stack
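As a starting point for the hand-written exercises above, here is a minimal generic stack backed by a singly linked list; the class and method names are my own illustration, not from any library. A hand-written LinkedList or Queue follows the same node-chaining idea.

```java
// A minimal hand-written generic stack backed by a singly linked list.
class MyStack<T> {
    private static class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private Node<T> top;   // head of the list is the top of the stack
    private int size;

    public void push(T value) { top = new Node<>(value, top); size++; }

    public T pop() {
        if (top == null) throw new java.util.NoSuchElementException("stack is empty");
        T value = top.value;
        top = top.next;
        size--;
        return value;
    }

    public T peek() {
        if (top == null) throw new java.util.NoSuchElementException("stack is empty");
        return top.value;
    }

    public boolean isEmpty() { return top == null; }
    public int size() { return size; }
}
```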
2. Design Patterns
1) Singleton pattern: only one instance of the class exists in the whole process.
Privatize the constructor, keep a static variable of the class, and expose that unique instance through a static accessor method.
Eager ("hungry") initialization: no multithreading safety problem, but the instance is created even if never used, which can waste memory.
Lazy initialization: relatively memory-saving, but has a thread-safety problem unless synchronized.
Beans in the Spring framework are singletons by default.
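The two variants above can be sketched as follows; the class names are illustrative. A volatile field plus double-checked locking is one common way to fix the lazy variant's thread-safety problem.

```java
// Eager ("hungry") singleton: the instance is created at class-load time.
class EagerSingleton {
    private static final EagerSingleton INSTANCE = new EagerSingleton();
    private EagerSingleton() {}
    public static EagerSingleton getInstance() { return INSTANCE; }
}

// Lazy singleton with double-checked locking to stay thread-safe.
class LazySingleton {
    private static volatile LazySingleton instance;
    private LazySingleton() {}
    public static LazySingleton getInstance() {
        if (instance == null) {                      // first check, no lock
            synchronized (LazySingleton.class) {
                if (instance == null) {              // second check, under the lock
                    instance = new LazySingleton();
                }
            }
        }
        return instance;
    }
}
```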
2) Decorator pattern: subclasses share a common parent class or interface; one subclass object is passed as a constructor parameter to another subclass, which alters or enhances its methods.
3) Proxy pattern
A static proxy is a hand-written proxy class fixed at compile time;
a dynamic proxy generates the proxy object at run time.
4) Factory pattern
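A minimal sketch of a JDK dynamic proxy using java.lang.reflect.Proxy: the handler wraps any implementation of an interface and enhances the result of its method call. The Fruit/Apple names are illustrative, not from any framework.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;

// Illustrative interface and implementation.
interface Fruit { String name(); }

class Apple implements Fruit {
    public String name() { return "apple"; }
}

// The dynamic-proxy handler: intercepts every interface call at run time.
class LoggingHandler implements InvocationHandler {
    private final Object target;
    LoggingHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // enhancement before/after the real call goes here
        Object result = method.invoke(target, args);
        return "proxied:" + result;
    }
}
```

The proxy object itself is produced at run time with `Proxy.newProxyInstance`, so no proxy class has to be written by hand.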
3. JDK Environment Configuration
JAVA_HOME points to the JDK installation directory, so other software can locate the JDK;
CLASSPATH is the class search path; a dot (.) adds the current directory;
PATH includes the JDK's bin directory, where the java, javac, and other commands live.
1. Generics provide a compile-time type-safety guarantee: only objects of the correct type can be put into the collection, avoiding a ClassCastException at run time.
2. Bounded and unbounded wildcards in generics:
<? extends T> sets an upper bound, guaranteeing the type is T or a subclass of T; <? super T> sets a lower bound, guaranteeing the type is T or a superclass of T.
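The two bounds can be demonstrated with the classic producer/consumer rule (PECS: producer extends, consumer super); the helper names below are illustrative.

```java
import java.util.List;

class Bounds {
    // ? extends Number: the list is a producer; we can read elements as Number.
    static double sum(List<? extends Number> nums) {
        double total = 0;
        for (Number n : nums) total += n.doubleValue();
        return total;
    }

    // ? super Integer: the list is a consumer; we can safely put Integers in.
    static void fill(List<? super Integer> out, int count) {
        for (int i = 0; i < count; i++) out.add(i);
    }
}
```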
5. Which tag in which file is edited to change the Tomcat port? (the Connector tag in conf/server.xml)
6. Multithreading + deadlock + the producer-consumer model
7. Redis basic types
8. HTTP, TCP, and other protocols
9. Bubble sort
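A sketch of bubble sort (item 9) with the common early-exit optimization; the class name is illustrative.

```java
class BubbleSort {
    static void sort(int[] a) {
        // After each outer pass the largest remaining element "bubbles" to the end.
        for (int end = a.length - 1; end > 0; end--) {
            boolean swapped = false;
            for (int i = 0; i < end; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break; // no swaps means the array is already sorted
        }
    }
}
```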
New features of JDK 1.8 and 1.9
Starting with JDK 1.8, an interface may define concrete methods, but they must use the default modifier;
static methods are also allowed in an interface;
a lambda expression is, in effect, an implementation of a single abstract method.
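A small sketch tying the three points together: a functional interface whose single abstract method a lambda supplies, plus a default method and a static method. The Calculator name is illustrative.

```java
// Calculator is an illustrative functional interface: one abstract method,
// which a lambda expression effectively implements.
@FunctionalInterface
interface Calculator {
    int apply(int a, int b);

    // JDK 1.8: interfaces may carry concrete methods via the default modifier ...
    default int twice(int a) { return apply(a, a); }

    // ... and static methods are also allowed in an interface.
    static Calculator plus() { return (a, b) -> a + b; }
}
```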
Debug shortcuts (Eclipse): F5 steps into a method; F6 steps over to the next line; F7 steps out of the current method and continues; F8 resumes execution to the next breakpoint.
Static compilation: the type is determined at compile time and bound to the object.
Dynamic compilation: the type is determined and bound at run time. Dynamic compilation maximizes Java's flexibility, embodies polymorphism, and can reduce coupling between classes.
Reflection builds on this runtime model: while the program is running, the reflection mechanism can obtain a class's bytecode and, for any class or object, discover all of its properties and methods and invoke any of them.
Example: the factory pattern (a Fruit interface with many implementation classes such as Apple). Without reflection the factory class grows very large; with reflection, objects are created simply by passing in the class name.
Applications of reflection in SSM:
1) resultMap / resultType mapping in MyBatis;
2) creation of beans from the Spring configuration file;
3) Spring MVC interceptors.
Common methods: Class.forName("java.util.List"); newInstance(); getMethod();
getConstructor(); getDeclaredField(String fieldName)
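The calls listed above can be combined into a tiny reflective factory; creating a java.util.ArrayList by name shows the idea without inventing any class of our own. The helper names are illustrative.

```java
import java.lang.reflect.Method;

class ReflectionDemo {
    // Create an instance purely from a string class name, the way a factory
    // can avoid hard-coding every concrete class.
    static Object create(String className) {
        try {
            Class<?> clazz = Class.forName(className);
            // Constructor.newInstance replaces the deprecated Class.newInstance.
            return clazz.getConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    // Invoke a no-argument method on the target by name.
    static Object call(Object target, String methodName) {
        try {
            Method m = target.getClass().getMethod(methodName);
            return m.invoke(target);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```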
Second, the JVM
Third, the database
1. SQL statement execution order: FROM -> WHERE -> GROUP BY -> HAVING -> SELECT -> ORDER BY
2. The four properties of database transactions (ACID) and their meanings
Atomicity: all operations in a transaction either complete or none do;
Consistency: the database's integrity constraints hold both before the transaction begins and after it ends;
Isolation: concurrent transactions do not interfere with each other's view of the same data;
Durability: once a transaction completes, the changes it made to the database are stored permanently.
3. SQL optimization:
To optimize a query:
A) Avoid full-table scans: first consider indexing the columns involved in WHERE and ORDER BY. But do not create as many indexes as possible: although indexes improve the efficiency of the corresponding SELECTs, they reduce the efficiency of INSERT and UPDATE, because the index may be rebuilt on an INSERT or UPDATE.
B) In the WHERE clause, try to avoid:
comparisons against NULL (preferably do not allow NULL in the database),
the != or <> operators,
fuzzy LIKE queries,
expressions or function calls on a column.
Any of these causes the engine to abandon the index and perform a full table scan.
C) Try to avoid joining query conditions with OR in the WHERE clause: if one column has an index and another does not, the engine abandons the index and performs a full table scan; UNION ALL can replace OR.
D) IN and NOT IN also lead to full table scans; for consecutive values, use BETWEEN instead of IN; in many cases EXISTS can replace IN.
E) When querying, list specific columns instead of using *.
To optimize updates and storage:
A) If only one or two columns change, do not UPDATE all columns, otherwise frequent calls cause significant performance cost along with a large volume of logs.
B) Use VARCHAR/NVARCHAR instead of CHAR/NCHAR where possible: first, variable-length columns take less storage space; second, searches within a smaller column are noticeably more efficient.
4. Database indexes
Statement to create an index: CREATE [UNIQUE] INDEX index_name ON table_name (column1, column2)
An index is a data structure the database management system uses to speed up queries and updates on database tables. Indexes are typically implemented with B-trees and their variants, such as B+ trees.
There are four common types of index:
Unique index: does not allow any two rows to have the same index value;
Primary key index: a table often has a column or combination of columns whose value uniquely identifies each row;
creating the primary key automatically creates the primary key index;
Clustered index: the logical order of key values in the index determines the physical order of the corresponding rows in the table;
Nonclustered index: the logical order of the index differs from the physical storage order of the rows on disk.
5. Paging queries:
MySQL: SELECT * FROM student LIMIT (pageNo - 1) * pageSize, pageSize;
Oracle (ROWNUM-based):
SELECT * FROM
(SELECT ROWNUM rn, s.* FROM student s WHERE ROWNUM <= pageNo * pageSize)
WHERE rn > (pageNo - 1) * pageSize
6. SQL stored procedures
A stored procedure packages a common or very complex piece of work: it is written in SQL ahead of time and stored under a given name, and it can execute different SQL statements depending on conditions. Afterwards, calling that named stored procedure performs the whole task with a single EXEC.
CREATE PROC query_book AS SELECT * FROM book GO
-- call the stored procedure: EXEC query_book
1. Differences between interceptors and filters
An interceptor is based on the Java reflection mechanism; a filter is based on function callbacks.
An interceptor does not depend on the servlet container; a filter does.
An interceptor works only on action requests, while a filter applies to almost all requests.
Within an action's life cycle, an interceptor can be invoked multiple times, whereas a filter is initialized only once, when the container starts.
An interceptor can access objects in the action context; a filter cannot.
Importantly, an interceptor can obtain beans from the IoC container while a filter cannot, which matters when injecting a service into an interceptor to invoke business logic.
2. Spring Boot is a framework from the Pivotal team designed to simplify the initial setup and development of new Spring applications. It favors convention over explicit configuration, freeing developers from boilerplate configuration.
1. RabbitMQ's five common working modes (six in total):
Simple mode: (omitted)
Work-queue mode: producers publish messages to a queue; multiple consumers compete for them, and whichever consumer grabs a message processes it. Used for flash sales, grabbing red envelopes, and so on.
Publish/subscribe mode: the producer sends messages to an exchange, and the exchange delivers them to every queue subscribed to it; each queue again has its own consumers (similar to broadcasting).
Routing mode: the producer publishes a message with a specified routing key (e.g. key:error); the exchange delivers it to the queues whose binding key matches, and a message with no matching binding key is not delivered.
Topic mode: like routing mode, but the binding key may contain wildcard patterns (* and #).
2. Two important ports in RabbitMQ
15672: the RabbitMQ management console; 5672: the port programs use to connect to RabbitMQ
3. ACK mechanism (message acknowledgement):
With no-ack (every time the consumer receives data, the RabbitMQ server immediately marks the message as delivered and removes it from the queue, regardless of whether processing finished), if the consumer errors or exits abnormally while processing, the unfinished data is lost.
To ensure data is not lost, RabbitMQ supports message acknowledgements: so that data is not merely received but correctly processed, the consumer sends an ack only after it has finished processing. If a consumer exits unexpectedly, the messages it was handling can be redelivered to another consumer, so no data is lost in that case. The ack tells RabbitMQ the data has been received and fully processed, and RabbitMQ can then safely delete it.
4. Several other message queues: MSMQ, ActiveMQ, ZeroMQ
5. Queue persistence: without persistence, RabbitMQ loses messages when the machine goes down; with persistence, data is written to disk and messages are recovered after a restart. Non-persistent messages are kept in memory; persistent messages are saved to disk.
6. AMQP (Advanced Message Queuing Protocol): Spring supports AMQP, currently implemented only for RabbitMQ; RabbitMQ is an open-source AMQP implementation.
7. Priority practices for RabbitMQ
Among today's mature, well-known message-queue products, RabbitMQ is relatively simple to use and rich enough in features for most situations. One annoyance, however, is that it does not support priorities out of the box. (Newer RabbitMQ versions, 3.5 and later, do add native priority queues via the x-max-priority queue argument.)
For example, in an email-sending task, some privileged users want their mail sent more promptly, or at least ahead of ordinary users. By default RabbitMQ cannot arrange this: tasks thrown at RabbitMQ are handled first-in, first-out. But with a little flexibility we can support priorities: create multiple queues and set up the appropriate routing rules for the consumers.
We can define a consumer that specializes in the high-priority queue and handles low-priority queue data only when idle. This is like a bank's VIP counter: ordinary customers queue at the ticket machine; a VIP does not take a ticket ahead of them, but still goes straight through the faster VIP channel.
8. Important concepts in RabbitMQ
Broker: the message-queue server entity itself.
Exchange: the message exchange; it specifies by what rules a message is routed, and to which queue.
Queue: the message-queue carrier; every message is put into one or more queues.
Binding: binds an exchange to a queue according to routing rules.
Routing key: the routing keyword; the exchange delivers messages based on it.
Vhost: virtual host; one broker can host multiple vhosts, separating users and permissions.
Producer: the program that delivers messages.
Consumer: the program that receives messages.
Channel: the message channel; each client connection can establish multiple channels, and each channel represents a session task. The triple (exchange, queue, routing key) determines a unique route from an exchange to a queue.
9. RabbitMQ versus ActiveMQ:
A) ActiveMQ is written in Java, RabbitMQ in Erlang (a language built for high concurrency and high availability); in theory RabbitMQ outperforms ActiveMQ and is the first choice for non-Java systems;
B) RabbitMQ: based on the AMQP protocol.
The Advanced Message Queuing Protocol makes full interoperability possible between client applications and any messaging middleware server that conforms to the specification.
ActiveMQ: based on the STOMP protocol (Simple/Streaming Text Oriented Messaging Protocol).
C) Both support message persistence.
D) RabbitMQ supports higher concurrency (again thanks to Erlang's inherent high concurrency and availability).
1. Forward index: load the article first, then locate the positions of keywords within it;
Inverted index (Lucene/Solr): query the keyword index first, then return the articles that contain the keyword. Mainstream search tools generally use an inverted index.
2. Lucene: an open-source library for full-text indexing and search, based on tokenization and term matching. For example, "Java backend engineer" can be tokenized into "Java", "backend", "engineer", etc., so each term can be matched at search time. One tokenizer is the IK Chinese analyzer, which offers both fine-grained and smart segmentation, the latter based on heuristic algorithms.
Solr: a standalone enterprise search application server (written in Java 5) built on Lucene's full-text search. Solr provides a richer query language than Lucene, is configurable and extensible, and optimizes indexing and search performance. A full-text indexing service can be stood up with configuration alone, which effectively reduces the load that frequent queries would otherwise put on the database.
1. Nginx is a high-performance HTTP and reverse-proxy server, and also an email (IMAP/POP3) proxy server.
2. Nginx has five load-balancing configuration methods (I mainly use three):
round robin (the default: requests are distributed to the backends in turn),
weight (e.g. weight=5: the larger the weight, the larger the share of requests),
ip_hash (each request is assigned by the hash of the client IP, binding a client to one backend server, which solves the session problem).
3. Define the IPs and states of the backend servers in an upstream block.
The state of each server can be set to:
1. down: the server temporarily does not participate in the load.
2. weight: defaults to 1; the larger the weight, the larger the share of the load.
3. max_fails: the number of failed requests allowed, default 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: how long the server is paused after max_fails failures.
5. backup: receives requests only when all non-backup machines are down or busy. A spare tire.
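An illustrative upstream block combining these states; the server names and addresses are made up, not from any real deployment:

```nginx
upstream backend {
    server 192.168.0.11:8080 weight=2;                     # more weight, more load
    server 192.168.0.12:8080 max_fails=2 fail_timeout=30s; # pause 30s after 2 failures
    server 192.168.0.13:8080 down;                         # temporarily out of rotation
    server 192.168.0.14:8080 backup;                       # used only when the rest are down/busy
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```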
4. Why is Nginx's performance so high?
Because of its asynchronous, non-blocking event-handling mechanism.
5. Nginx versus Apache
Nginx is more lightweight: it uses less memory and fewer resources than Apache and starts in seconds;
in Nginx one worker handles many requests, whereas Apache dedicates a thread to each request;
Nginx handles requests asynchronously and without blocking, while Apache is blocking, so Nginx holds up better under high concurrency;
Nginx configuration is concise where Apache's is complex, and Nginx serves static files well;
Apache, for its part, has fewer bugs, is ultra-stable, has more modules, and a more powerful rewrite;
for web services that need raw performance, use Nginx; if stability matters more than performance, use Apache.
6. What do the master and worker processes on the Nginx server do?
Master process: reads and evaluates the configuration and maintains the worker processes.
Worker processes: handle requests.
Seven, Amoeba and database master-slave replication
1. Database master-slave replication
1. When the master updates data, it writes the change to its binary log file in real time.
2. The slave's IO thread monitors the master's binary log in real time; when the log changes, the thread reads the modified content.
3. The IO thread writes what it read into the slave's relay log.
4. The slave's SQL thread reads the relay log in real time and applies the update operations to its database.
Enable the binary log in /etc/my.cnf: log-bin=mysql-bin
Attach the slave to the master: CHANGE MASTER TO MASTER_HOST=...
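A hedged sketch of the two configuration steps; the server-id, host, credentials, and log coordinates below are placeholders (in practice the log file name and position come from SHOW MASTER STATUS on the master):

```ini
# master: /etc/my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin
```

```sql
-- slave: point it at the master, then start replication
CHANGE MASTER TO
  MASTER_HOST='192.168.0.10',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=154;
START SLAVE;
```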
2. Amoeba is a proxy that uses MySQL as the underlying data store and exposes the MySQL protocol interface to applications. It answers application requests centrally and forwards SQL to a specific database according to rules the user sets in advance, enabling load balancing, read/write separation, and high availability.
3. Amoeba offers three load-balancing strategies: round robin, weighted, and hash.
4. dbServer.xml: stores the configuration of the database servers that Amoeba proxies;
the dbServer tag configures database properties, and poolConfig configures the connection pool.
amoeba.xml: configures the basic parameters of the Amoeba service, such as host address and port, and the read/write separation of the databases.
Eight, Redis
1. Redis is a high-performance, memory-based key-value database.
2. What are the benefits of using Redis?
(1) Fast: it can handle more than 100,000 reads and writes per second and is the fastest known key-value DB.
(2) Rich data types: it supports String, List, Set, ZSet (sorted set), Hash, and other structures.
(3) Transactions: operations within a transaction are atomic.
(4) Rich features: usable for caching and messaging; an expiration time can be set per key, after which the key is deleted automatically.
3. Redis is single-process and single-threaded; it uses queueing to turn concurrent access into serial access, eliminating the overhead of traditional database concurrency control.
5. How the master is selected in Redis master-slave replication
6. Comparison with memcached
(1) memcached values are all simple strings; Redis supports richer data types.
(2) Redis is much faster than memcached.
(3) Redis supports persisting data to disk; memcached does not.
(4) A Redis value can be up to 512MB, while memcached is limited to 1MB.
7. Common Redis performance issues and solutions:
(1) The master should not do persistence work such as RDB memory snapshots or AOF log files.
(2) If the data is important, have one slave enable AOF backup with the policy set to sync once per second.
(3) For replication speed and connection stability, master and slaves are best kept within the same LAN.
(4) Avoid adding slaves to a master that is already under heavy load.
(5) Do not use a graph structure for replication; a singly linked chain is more stable, i.e.: Master <- Slave1 <- Slave2.
8. MySQL has 20 million rows but Redis holds only 200 thousand: how do we ensure the data in Redis is all hot data?
When the Redis in-memory dataset grows to a configured size, an eviction policy runs. Redis offers six eviction policies:
(1) volatile-lru: evict the least recently used keys from the set with an expiration time (server.db[i].expires)
(2) volatile-ttl: evict the keys closest to expiring from the set with an expiration time (server.db[i].expires)
(3) volatile-random: evict random keys from the set with an expiration time (server.db[i].expires)
(4) allkeys-lru: evict the least recently used keys from the whole keyspace (server.db[i].dict)
(5) allkeys-random: evict random keys from the whole keyspace (server.db[i].dict)
(6) no-eviction: refuse to evict data
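For the hot-data question above, a common answer is to cap memory and evict least-recently-used keys across the whole keyspace; an illustrative redis.conf fragment (the size is a placeholder):

```conf
maxmemory 2gb                  # limit the dataset size
maxmemory-policy allkeys-lru   # evict least-recently-used keys first, keeping hot data
```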
Nine, commonly used Linux commands:
ls -- list the files and directories under the current path
ll -- list the files and directories under the current path with details
pwd -- show the current path
ping <ip> -- test whether the network connection is reachable
vim <file> -- edit a file
i -- enter insert (edit) mode
Esc -- leave the current mode
:wq -- save and quit
:q! -- force quit without saving (use with care)
cp <source> <copy> -- copy a file
mv a.txt b.txt -- rename a.txt to b.txt
mv a.txt aa/ -- move a.txt into the aa folder
clear -- clear the screen
service iptables stop -- stop the firewall (service iptables start to start it)
rm a.txt -- delete a single file
rm -rf aa -- force-delete aa recursively
tar -xvf a.tar -- extract a.tar