Java Key Knowledge
Multithreading (thread states, thread concurrency, the difference between synchronized and Lock and their underlying principles, commonly used locks with their usage scenarios and principles,
how volatile and ThreadLocal solve problems, CAS implementation in Java,
thread pool principles and implementation, blocking queues and thread-safe queues,
inter-thread communication: synchronized + wait/notify/notifyAll, Lock + Condition (multiple condition wait queues),
the effects, usage, and scenarios of CountDownLatch, CyclicBarrier, and Semaphore)
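For example, a minimal sketch of CountDownLatch, one of the coordination tools listed above: the main thread waits until all worker threads have finished (the class name and the worker count are illustrative).

```java
import java.util.concurrent.CountDownLatch;

public class CountDownLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " done");
                latch.countDown();          // signal that this worker has finished
            }).start();
        }
        latch.await();                      // block until the count reaches zero
        System.out.println("all workers finished");
    }
}
```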
JVM memory management and garbage collection (memory model, GC strategies and algorithms, generational collection, GC types, the scope and trigger conditions of full GC and minor GC)
JVM memory tuning (the 6 memory-tuning parameters; understand what each one controls, since they come up frequently in real projects)
Design patterns (be familiar with the common patterns and able to draw class diagrams; most commonly asked: proxy, the two factory patterns, strategy, singleton, observer, adapter, composite, and decorator)
Java collections framework (understand the framework diagram and the relationships and differences between HashMap, ArrayList, HashSet, etc.; HashMap's storage mechanism is asked in almost every interview)
HashMap: principle, underlying data structure, rehash process, hash collision handling; HashMap thread-safety problems and why they arise; ConcurrentHashMap: data structure, underlying principle, and whether put and get are thread safe
Java exception handling mechanism (classification of exceptions, common exceptions, use of try/catch/finally)
JVM operating mechanism (understand how the JVM works, the class loading mechanism, and the initialization order of classes)
Java NIO's three main concepts: Channel, Buffer, and Selector; why do they improve performance? Bonus: familiarity with Netty
Linux basics (written interview tests have some Linux requirements; it is recommended to set up a Linux virtual machine and practice the commands you use frequently)
Framework
Spring
Spring IoC principle, bean creation and life cycle (factory pattern + reflection + singleton), design patterns used by Spring
Spring AOP principles and applications (JDK dynamic proxy vs. CGLIB proxy, their usage scenarios, and the essential difference between the two kinds of proxy)
How does Spring handle high concurrency? How is performance guaranteed under high concurrency?
Singleton pattern + ThreadLocal
The singleton pattern avoids repeated object creation and destruction, which improves performance; ThreadLocal is used to guarantee thread safety.
Spring beans are singletons, and ThreadLocal is used to isolate the parameters of different threads. ThreadLocal guarantees thread safety because, in effect, the key into a ThreadLocal's storage is the current thread instance, so each thread sees only its own value.
In singleton mode, Spring puts the per-thread parameter values that could cause thread-safety problems into a ThreadLocal: although there is only one bean instance, the data belonging to different threads is isolated.
Because the number of beans created and destroyed at run time is greatly reduced, this approach consumes fewer memory resources in most scenarios, and the higher the concurrency, the more obvious the advantage.
Special attention:
The controllers in Spring MVC are NOT thread safe!
Spring MVC intercepts at the method level, which gives finer granularity, and Spring controllers are singletons by default: every request is handled by the same controller instance.
Spring MVC and Servlets are thread safe only at the method level; if an instance variable exists in a singleton controller or servlet, it is not thread safe. Struts2, by contrast, creates a new action instance per request and is indeed thread safe.
Pros: fewer objects created and destroyed, because a new controller does not have to be created for every request
Cons: the controller is a singleton, so mutable instance variables inside the controller are not thread safe
Solutions:
1. Use ThreadLocal inside the controller: wrap the unsafe instance variables in ThreadLocal so that each value lives in the current thread's own storage, isolating different requests from each other (see the sketch after this list)
2. Declare the controller with scope="prototype" so that a new controller instance is created for each request
3. Do not use instance variables in the controller at all
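A minimal sketch of solution 1, assuming a Spring MVC controller; the controller and field names are illustrative. A plain instance field would be shared across requests, so the value is wrapped in a ThreadLocal instead.

```java
import org.springframework.web.bind.annotation.*;

@RestController
public class OrderController {
    // A plain instance field here would be shared by all request threads;
    // wrapping it in ThreadLocal isolates the value per thread.
    private final ThreadLocal<String> currentUser = new ThreadLocal<>();

    @GetMapping("/order")
    public String order(@RequestParam String user) {
        currentUser.set(user);              // visible only to the current request thread
        try {
            return "order placed by " + currentUser.get();
        } finally {
            currentUser.remove();           // avoid leaking values in thread-pooled containers
        }
    }
}
```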
What are the uses and principles of Spring transaction management? What are the propagation behaviors of a transaction?
Declarative transaction management: add the @Transactional annotation on a service class or on a service method.
How does @Transactional work?
When Spring parses the bean definitions at startup, it looks for the classes and methods that carry the annotation, generates proxies for them, and injects the transaction configuration according to the parameters of @Transactional. The transaction is then handled inside the proxy (commit on normal return, roll back on exception). At the database layer, the actual commit and rollback are implemented through the binlog and the redo log.
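A minimal sketch of the declarative style described above, with @Transactional on a service method; the service and the AccountDao dependency are hypothetical names used only for illustration.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransferService {

    private final AccountDao accountDao;    // hypothetical DAO

    public TransferService(AccountDao accountDao) {
        this.accountDao = accountDao;
    }

    // Spring wraps this method in a proxy: commit on normal return, roll back on exception.
    @Transactional(rollbackFor = Exception.class)
    public void transfer(long fromId, long toId, long amount) {
        accountDao.debit(fromId, amount);
        accountDao.credit(toId, amount);
    }
}
```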
How does Spring resolve circular dependencies between objects? (Only setter-based circular dependencies between singleton-scoped beans are supported; constructor-based and prototype-scoped circular dependencies are not.)
Principle:
When bean A is created, an instance of A is first constructed with the no-argument constructor; its properties are still empty, but the object reference exists and is exposed early.
Then, while A's setter for property B is being populated, bean B is created: B is likewise constructed with the no-argument constructor, and its reference is exposed.
Next, B's setter runs and looks A up in the pool of early-exposed references (A is already there, so a reference to the object exists); B's dependency is therefore satisfied and B finishes initializing, after which A's initialization also completes and the circular dependency is resolved.
Summary: create the object reference first, then assign the properties through setters, building the objects layer by layer.
When bean A initializes, it initializes its dependency B; A's own reference is produced by the default no-argument constructor before any setter method is invoked.
When B is created and also depends on C, B's reference is likewise produced by the no-argument constructor first.
When C is created and it references A, the early-exposed reference to A is found in the object pool, the setter is called to inject A, and C's creation completes.
After C is created, B injects C through its setter and B's creation completes.
After B is created, A injects B through its setter and A's creation completes.
In this way the setter-based circular dependency is fully resolved.
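A minimal illustration of the supported case, assuming two singleton beans that depend on each other through setter injection (names are illustrative). Spring can resolve this by exposing the half-initialized reference early; wiring the same pair through constructors would fail with a BeanCurrentlyInCreationException.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
class A {
    private B b;

    @Autowired
    public void setB(B b) { this.b = b; }   // setter injection: resolvable

}

@Component
class B {
    private A a;

    @Autowired
    public void setA(A a) { this.a = a; }   // receives the early-exposed reference to A
}
```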
Database
Differences between InnoDB and MyISAM, and how to choose
1. InnoDB does not support FULLTEXT indexes (prior to MySQL 5.6).
2. InnoDB does not store the exact row count of a table: when you execute SELECT COUNT(*) FROM table, InnoDB scans the whole table to count the rows, while MyISAM simply reads the stored count. Note that when the COUNT() statement includes a WHERE condition, both engines behave the same.
3. For an AUTO_INCREMENT column, InnoDB requires an index containing only that column, whereas in a MyISAM table it can be part of a composite index together with other columns.
4. DELETE FROM table: InnoDB does not re-create the table; it deletes the rows one at a time.
5. LOAD TABLE FROM MASTER does not work for InnoDB. The workaround is to change the InnoDB table to MyISAM first, import the data, and then change it back to InnoDB, but this does not apply to tables that use InnoDB-specific features such as foreign keys.
In addition, row locking in InnoDB is not absolute: if MySQL cannot determine the range to scan when executing a SQL statement, InnoDB locks the whole table as well, e.g. UPDATE table SET num = 1 WHERE name LIKE "%aaa%".
No table type is a silver bullet; only by choosing the table type appropriate for the business can MySQL's performance advantages be maximized.
Meaning of pessimistic and optimistic locking (pessimistic lock: an actual lock, allowing only one thread at a time to manipulate the same record;
optimistic lock: a conflict-detection mechanism, typically implemented with a version number or a timestamp, with less impact on performance; see the sketch below)
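A minimal sketch of version-number optimistic locking over plain JDBC; the table and column names (account, balance, version) are assumptions made only for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticLockDao {
    // The update succeeds only if no one else changed the row since we read expectedVersion.
    public boolean updateBalance(Connection conn, long id, long newBalance, int expectedVersion)
            throws SQLException {
        String sql = "UPDATE account SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, newBalance);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            return ps.executeUpdate() == 1;  // 0 rows means a concurrent update won; the caller retries
        }
    }
}
```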
Index usage and indexing principles (underlying index implementation: B+ tree)
Query optimization
1. Use EXPLAIN on the SQL to check execution efficiency and locate the performance bottleneck of the statement being optimized
2. Always drive a large result set with a small one
3. Complete sorting within the index whenever possible
4. Select only the columns you need, never SELECT *
5. Use the most effective filter conditions
6. Prefer table joins to subqueries
7. When only one row of data is needed, use LIMIT 1
8. Index the columns you search on
9. Never ORDER BY RAND(); avoid SELECT *
10. Use NOT NULL whenever possible
11. Enable the query cache and write queries so they can hit it
e.g.
SELECT username FROM user WHERE add_time >= NOW()
Note:
1. A statement like this cannot use the query cache.
2. SQL functions such as NOW() and RAND() prevent the query cache from being used, because their return values are variable. The fix is to use a value computed in application code instead of the MySQL function, so that the cache can be used.
3. To fix it, replace NOW() with a fixed yyyy-MM-dd date value so that the statement text no longer changes (see the sketch below).
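A small sketch of the fix in point 3, assuming the query is built in application code (class and table names are illustrative): the cutoff date is computed once in Java, so the SQL text stays constant for the whole day instead of containing NOW().

```java
import java.time.LocalDate;

public class QueryBuilder {
    // Embed a fixed yyyy-MM-dd value computed in application code instead of calling NOW(),
    // so the statement text does not change on every execution. The value comes from
    // LocalDate, not user input, so concatenating it here is safe.
    public static String usersAddedToday() {
        String today = LocalDate.now().toString();   // e.g. "2024-05-01"
        return "SELECT username FROM user WHERE add_time >= '" + today + "'";
    }
}
```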
The 5 Redis data types and their usage scenarios
Redis persistence mechanisms
The two underlying implementations of the hash type in Redis (ziplist: saves memory; hashtable: faster lookups)
Using Redis as a distributed message queue: performance and points to watch
Data structures and algorithms
The common sorting algorithms go without saying: understand their principles and be able to write the code, and also know their time and space complexity.
Queue and stack: understand their access patterns and be able to apply them in concrete scenarios.
Binary tree: traversals, depth of a tree, printing by level, balanced binary tree, mirroring (reversing) a tree, etc.
Linked list: reversal, merging two sorted lists, detecting whether a list has a cycle, finding the k-th node from the end, etc.
String: the KMP algorithm and dynamic programming (this is a focus: understand dynamic programming; common problems include the longest palindromic substring and the longest common substring).
Massive data processing: many big companies ask about processing massive amounts of data, so master the common techniques such as Bit-map, divide and conquer, and hash partitioning; reading related articles will deepen your understanding.
Common algorithms
Bubble sort
Quick sort
Insertion sort
Shell sort
Merge sort
Heap sort
Bucket sort
Dynamic programming
Longest common substring
Longest palindromic substring
Top k largest values of an array
Maximum sum of a contiguous subarray
Left rotation of a string
String matching: the KMP algorithm
Binary search
Linked list
Reverse a singly linked list (see the sketch after this list)
Merge two sorted singly linked lists
Determine whether two singly linked lists intersect
Find the node where two lists intersect
Find the k-th node from the end of a singly linked list
Sort a singly linked list
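A minimal sketch of the first item above: iterative reversal of a singly linked list (the ListNode type is defined locally for the example).

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class LinkedListUtils {
    // Reverse the list by re-pointing each node at its predecessor.
    static ListNode reverse(ListNode head) {
        ListNode prev = null;
        ListNode curr = head;
        while (curr != null) {
            ListNode next = curr.next;   // remember the rest of the list
            curr.next = prev;            // reverse the current pointer
            prev = curr;
            curr = next;
        }
        return prev;                     // new head of the reversed list
    }
}
```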
Stacks and queues
Design a stack with a min() function
Implement a stack with two queues
Implement a queue with two stacks (see the sketch after this list)
Implement a stack and a queue with one array
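A minimal sketch of the "implement a queue with two stacks" item above, using ArrayDeque as the stack type.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class TwoStackQueue<T> {
    private final Deque<T> in = new ArrayDeque<>();   // receives newly enqueued elements
    private final Deque<T> out = new ArrayDeque<>();  // serves dequeues in FIFO order

    void enqueue(T value) {
        in.push(value);
    }

    T dequeue() {
        if (out.isEmpty()) {
            while (!in.isEmpty()) {
                out.push(in.pop());      // reverse the order once; amortized O(1) per element
            }
        }
        if (out.isEmpty()) {
            throw new IllegalStateException("queue is empty");
        }
        return out.pop();
    }
}
```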
Tree
Pre-order, in-order, and post-order traversal
Find the depth of a binary tree
Traverse a binary tree level by level (see the sketch after this list)
Determine whether a binary tree is a complete binary tree
Determine whether a binary tree is mirror-symmetric
Determine whether two trees are equal
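A minimal sketch of the "traverse a binary tree level by level" item above, using a FIFO queue; the TreeNode type is defined locally for the example.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class TreeUtils {
    // Breadth-first traversal: visit nodes level by level using a queue.
    static List<Integer> levelOrder(TreeNode root) {
        List<Integer> result = new ArrayList<>();
        if (root == null) return result;
        Queue<TreeNode> queue = new ArrayDeque<>();
        queue.offer(root);
        while (!queue.isEmpty()) {
            TreeNode node = queue.poll();
            result.add(node.val);
            if (node.left != null) queue.offer(node.left);
            if (node.right != null) queue.offer(node.right);
        }
        return result;
    }
}
```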
The 6 Principles of Design Patterns
1. Single responsibility principle (SRP)
Definition: A class should have only one reason to change.
The definition alone is hard to grasp; in plain terms, it says a class should not take on too many responsibilities. If a class takes on too many responsibilities, those responsibilities become coupled to one another, and a change to one responsibility may weaken or break the class's ability to fulfill the others. That coupling leads to fragile designs that break when changes occur.
For example, in Android development you often see the bean definitions, the network handling, and even the list adapter all written inside the Activity. When asked why, people rarely have a reason other than habit, even though splitting these into separate classes would make the code easier to find. An overly bloated Activity is clearly a bad thing: if modifying the bean, the network handling, or the adapter all require changing the same Activity, there are too many reasons for that class to change, and version maintenance becomes a headache. It also plainly violates the definition "a class should have only one reason to change".
Of course, if you want to argue, this principle can spark plenty of controversy, but remember that you write code not only for yourself but for others.
2. Open-closed principle (OCP)
Definition: Classes, modules, functions, and so on should be open for extension but closed for modification.
The open-closed principle has two aspects: open for extension, and closed for modification. Requirements are certain to change during development, but having to rework existing classes for every new requirement is obviously a headache. So during design we should keep the code relatively stable in the face of changing requirements, and satisfy new requirements by extending with new code as far as possible rather than by modifying existing code.
Suppose we implement a list feature that initially only supports querying, and a few days later the product adds a delete function. The common approach is to write one method and pass in different values to make it perform different functions, but then every new feature forces us to modify that method again. Solving this with the open-closed principle means adding an abstract operation class and making add, delete, and query subclasses of it; if another feature is added later, we do not need to modify the existing classes at all, we only need to add another subclass that implements the abstract method (see the sketch below).
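A minimal sketch of the approach described above, with illustrative names: an abstract operation type whose query, add, and delete variants are subclasses, so a new feature means a new subclass rather than editing an existing method.

```java
// The abstraction is closed for modification ...
abstract class ListOperation {
    abstract void execute();
}

// ... but open for extension: each feature is a new subclass.
class QueryOperation extends ListOperation {
    @Override void execute() { System.out.println("query the list"); }
}

class AddOperation extends ListOperation {
    @Override void execute() { System.out.println("add an item"); }
}

class DeleteOperation extends ListOperation {
    @Override void execute() { System.out.println("delete an item"); }
}

class ListClient {
    void run(ListOperation op) { op.execute(); }   // callers depend only on the abstraction
}
```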
3. Liskov substitution principle (LSP)
Definition: Wherever a reference to a base class (parent class) is used, an object of its subclass must be usable transparently.
The Liskov substitution principle tells us that replacing a base-class object with one of its subclass objects will not cause the program to produce any errors or exceptions. The reverse is not true: if a piece of software uses a subclass object, it cannot necessarily use the base-class object in its place.
The Liskov substitution principle is one of the important ways to realize the open-closed principle: because a subclass object can be used wherever a base-class object is used, variables should be defined with the base type in the program as far as possible, the subclass type determined at run time, and the subclass object substituted for the parent object.
There are a few issues to be aware of when applying the Liskov substitution principle:
All methods of the subclass must be declared in the parent class, or the subclass must implement all methods declared in the parent class. Since declarations are usually written against the parent type to keep the system extensible, a method that exists only in the subclass, with no corresponding declaration in the parent class, cannot be called through a variable declared with the parent type.
When applying the Liskov substitution principle, try to design the parent as an abstract class or an interface, have the subclass inherit the parent class or implement the parent interface and implement the methods declared there, and substitute a subclass instance for the parent instance at run time. We can then extend the system's functionality conveniently without modifying the existing subclass code: adding new functionality only requires adding a new subclass. The Liskov substitution principle is one of the concrete means of achieving this.
In Java, the compiler checks during compilation whether a program conforms to the Liskov substitution principle; this is an implementation-independent, purely syntactic check, and it is limited.
4. Dependency inversion principle (DIP)
Definition: High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.
In Java, an abstraction is an interface or an abstract class, neither of which can be instantiated directly; a detail is an implementation class, one that implements the interface or extends the abstract class and can be instantiated with the new keyword. The high-level module is the caller, and the low-level module is the concrete implementation class.
In Java, the dependency inversion principle means that dependencies between modules go through abstractions: implementation classes do not depend on each other directly but relate through interfaces or abstract classes. If classes depend directly on details, they are directly coupled, and when one is modified the dependent code has to be modified too, which limits extensibility. (See the sketch below.)
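A minimal sketch of dependency inversion with illustrative names: the high-level NotificationService depends only on the MessageSender abstraction, and the concrete EmailSender detail is injected from outside.

```java
// Abstraction that both layers depend on.
interface MessageSender {
    void send(String to, String body);
}

// Low-level detail implementing the abstraction.
class EmailSender implements MessageSender {
    @Override
    public void send(String to, String body) {
        System.out.println("email to " + to + ": " + body);
    }
}

// High-level module: depends on MessageSender, never on EmailSender directly.
class NotificationService {
    private final MessageSender sender;

    NotificationService(MessageSender sender) { this.sender = sender; }

    void notifyUser(String user, String message) {
        sender.send(user, message);
    }
}
```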
5. Law of Demeter (LoD)
Definition: A software entity should interact with other entities as little as possible.
Also known as the principle of least knowledge. If a system conforms to the Law of Demeter, then when one module changes, the impact on the other modules is minimized and extension is relatively easy. It is a restriction on communication between software entities: the Law of Demeter limits both the width and the depth of that communication, which reduces the coupling of the system and keeps the coupling between classes loose.
The Law of Demeter requires us to minimize interaction between objects when designing a system. If two objects do not need to communicate directly, they should not interact directly; if one object needs to invoke a method of another, the call can be forwarded through a third party. In short, reduce the coupling between objects by introducing a reasonable third party.
When applying the Law of Demeter to system design, pay attention to the following points. In dividing up classes, try to create loosely coupled classes: the lower the coupling between classes, the better for reuse, and a change to one loosely coupled class will not greatly affect its associated classes. In class structure design, each class should keep the access rights of its member variables and member functions as low as possible. In class design, a type should be made immutable whenever possible. And in references to other classes, an object should keep the number of other objects it references to a minimum.
6. Interface segregation principle (ISP)
Definition: A class's dependency on another class should rest on the smallest possible interface.
Create focused, single-purpose interfaces rather than huge, bloated ones; keep interfaces as fine-grained as possible, with as few methods as possible. In other words, instead of building one very large interface for all the classes that depend on it, create dedicated interfaces for each class.
When using the interface segregation principle to constrain interfaces, note the following points:
Keep interfaces small, but within limits. Refining interfaces improves programming flexibility, but making them too small produces too many interfaces and complicates the design, so be moderate.
Customize the service for the class that depends on the interface: expose only the methods the calling class needs and hide the ones it does not. Minimal dependencies can be established only by focusing on providing customized services for each module.
Increase cohesion and reduce external interaction, so that an interface does the most with the fewest methods.
The implementation basis of the concurrent package (java.util.concurrent)
Because CAS in Java has both volatile-read and volatile-write memory semantics, Java threads can communicate in the following four ways:
- Thread A writes a volatile variable, then thread B reads that volatile variable.
- Thread A writes a volatile variable, then thread B updates that volatile variable with CAS.
- Thread A updates a volatile variable with CAS, then thread B updates that volatile variable with CAS.
- Thread A updates a volatile variable with CAS, then thread B reads that volatile variable.
CAS in Java uses the efficient machine-level atomic instructions available on modern processors, which perform read-modify-write operations on memory atomically; this is the key to achieving synchronization on a multiprocessor (essentially, a computer that supports atomic read-modify-write instructions is an asynchronous machine equivalent to a sequential Turing machine, so every modern multiprocessor supports some atomic instruction that performs an atomic read-modify-write on memory). At the same time, volatile reads/writes and CAS can implement communication between threads. Together, these features form the cornerstone of the whole concurrent package. If we analyze the source code of the concurrent package carefully, we find a generalized implementation pattern:
First, declare the shared state variable as volatile;
then, use CAS's atomic conditional update to implement synchronization between threads;
and at the same time, use volatile reads/writes combined with the volatile memory semantics of CAS to implement communication between threads.
AQS, the non-blocking data structures, and the atomic variable classes (the classes in the java.util.concurrent.atomic package) are the basic classes in the concurrent package; they are all implemented with this pattern, and the higher-level classes in the package are in turn built on these basic classes.
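A minimal sketch of the pattern just described, in the style of the java.util.concurrent.atomic classes: a volatile value plus a CAS retry loop, shown here with AtomicInteger, which wraps exactly that mechanism (the class name is illustrative).

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasCounter {
    // AtomicInteger keeps its value in a volatile field and updates it with CAS.
    private final AtomicInteger value = new AtomicInteger(0);

    int increment() {
        while (true) {
            int current = value.get();                    // volatile read
            int next = current + 1;
            if (value.compareAndSet(current, next)) {     // CAS: succeeds only if unchanged
                return next;
            }
            // another thread won the race; re-read and retry
        }
    }
}
```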
Closing note: comments and discussion are welcome; follow for continuous updates!
A summary of advanced Java interview questions from top domestic first-tier internet companies.