Spring-Blog: Personal Blog (1) - MyBatis Read/Write Splitting

Overview:

  After a period of quiet (read: lazy) time in 2018, I decided to find something to do. This time, I plan to develop a personal blog and consolidate my own skills during the development process. This series of posts will only present the technical points that are worth discussing, rather than recording every step of development like a diary.

In terms of technology stack, Spring Boot 2.0 will be used as the underlying framework, mainly to make it easier to move to Spring Cloud later for learning and development. Spring Boot 2.0 is based on Spring 5, so you can also preview some of the new Spring 5 features in advance. Other technologies used will be introduced in the relevant posts.

GitHub address: https://github.com/jaycekon/Spring-Blog

The following describes the directory structure:

  • Spring-Blog (parent project)
  • Spring-Blog-common (utility module)
  • Spring-Blog-business (repository module)
  • Spring-Blog-api (web module)
  • Spring-Blog-webflux (web module based on Spring Boot 2.0)

 

To help you better follow the content of this post, the demo code is stored in a separate Spring Boot project:

Github address: https://github.com/jaycekon/SpringBoot

1. DataSource

Before proceeding, we need to set up the environment. For details on integrating MyBatis with Spring Boot, see the earlier post on that topic; we will not go into detail here. First, let's look at our directory structure:

 

With Spring Boot, once we have configured our database connection information in application.properties, Spring Boot automatically creates the DataSource for us. However, if we need to perform read/write splitting, we must know how to configure our own data sources.

First, let's take a look at the information in the configuration file:

```properties
spring.datasource.url=jdbc:mysql://localhost:3306/charles_blog2
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.driver-class-name=com.mysql.jdbc.Driver

# Alias scan package
mybatis.type-aliases-package=com.jaycekon.demo.model
# Mapper.xml scan directory
mybatis.mapper-locations=classpath:mybatis-mappers/*.xml

# tk.mybatis mapper helper
mapper.mappers=com.jaycekon.demo.MyMapper
mapper.not-empty=false
mapper.identity=MYSQL
```

  

1.1 DataSourceBuilder

First, let's take a look at using DataSourceBuilder to build the DataSource:

```java
@Configuration
@MapperScan("com.jaycekon.demo.mapper")
@EnableTransactionManagement
public class SpringJDBCDataSource {

    /**
     * Use Spring JDBC's DataSourceBuilder to quickly create a DataSource.
     * Parameter format:
     * spring.datasource.master.jdbcurl=jdbc:mysql://localhost:3306/charles_blog
     * spring.datasource.master.username=root
     * spring.datasource.master.password=root
     * spring.datasource.master.driver-class-name=com.mysql.jdbc.Driver
     *
     * @return DataSource
     */
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }
}
```

From the code, we can see that building a DataSource with DataSourceBuilder is very simple, but note the following:

  • DataSourceBuilder can only automatically recognize configuration names such as jdbcurl, username, password, and driver-class-name, so we need to add the @ConfigurationProperties annotation to map our custom prefix.
  • The database connection URL property must be named jdbcUrl.
  • The resulting connection pool is com.zaxxer.hikari.HikariDataSource (the Spring Boot 2.0 default).

When executing the unit test, we can see the process of creating and closing the DataSource in the log.

 

1.2 DruidDataSource

In addition to the builder approach above, we can choose to use Druid to create the connection-pool DataSource:

```java
@Configuration
@EnableTransactionManagement
public class DruidDataSourceConfig {

    @Autowired
    private DataSourceProperties properties;

    @Bean
    public DataSource dataSource() throws Exception {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl(properties.getUrl());
        dataSource.setDriverClassName(properties.getDriverClassName());
        dataSource.setUsername(properties.getUsername());
        dataSource.setPassword(properties.getPassword());
        dataSource.setInitialSize(5);
        dataSource.setMinIdle(5);
        dataSource.setMaxActive(100);
        dataSource.setMaxWait(60000);
        dataSource.setTimeBetweenEvictionRunsMillis(60000);
        dataSource.setMinEvictableIdleTimeMillis(300000);
        dataSource.setValidationQuery("SELECT 'x'");
        dataSource.setTestWhileIdle(true);
        dataSource.setTestOnBorrow(false);
        dataSource.setTestOnReturn(false);
        dataSource.setPoolPreparedStatements(true);
        dataSource.setMaxPoolPreparedStatementPerConnectionSize(20);
        dataSource.setFilters("stat,wall");
        return dataSource;
    }
}
```

Using DruidDataSource as the database connection pool may look more cumbersome, but from another perspective it is also more controllable. We can use DataSourceProperties to read the configuration from application.properties:

```properties
spring.datasource.url=jdbc:mysql://localhost:3306/charles_blog2
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
```

Note that DataSourceProperties reads configuration keys with the prefix spring.datasource. We can check the DataSourceProperties source code to confirm:

```java
@ConfigurationProperties(prefix = "spring.datasource")
public class DataSourceProperties
        implements BeanClassLoaderAware, EnvironmentAware, InitializingBean
```

We can see that the prefix is fixed to spring.datasource by default in the source code.

 

In addition to using DataSourceProperties to read the configuration, we can also use the generic Environment class:

```java
@Autowired
private Environment env;

// ...
env.getProperty("spring.datasource.write");
```

 

  

 

2. Configure multiple data sources

To configure multiple data sources, follow these steps:

2.1 DatabaseType: data source names

An enumerated type is used to distinguish the read data source from the write data source:

```java
public enum DatabaseType {
    master("write"), slave("read");

    private String name;

    DatabaseType(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "DatabaseType{" + "name='" + name + '\'' + '}';
    }
}
```

 

 

2.2 DatabaseContextHolder

This class records the data source type used by the current thread, storing the value in a ThreadLocal:

```java
public class DatabaseContextHolder {

    private static final ThreadLocal<DatabaseType> contextHolder = new ThreadLocal<>();

    public static void setDatabaseType(DatabaseType type) {
        contextHolder.set(type);
    }

    public static DatabaseType getDatabaseType() {
        return contextHolder.get();
    }
}
```
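The point of the ThreadLocal is that each request thread carries its own data source type without interfering with other threads. A quick Spring-free sketch (the class name `ThreadLocalDemo` and the use of plain strings instead of the DatabaseType enum are illustrative) shows this isolation:

```java
// Demonstrates the per-thread isolation that DatabaseContextHolder relies on.
// Plain strings stand in for the DatabaseType enum here.
public class ThreadLocalDemo {

    private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();

    static void setDatabaseType(String type) {
        contextHolder.set(type);
    }

    static String getDatabaseType() {
        return contextHolder.get();
    }

    public static void main(String[] args) throws InterruptedException {
        setDatabaseType("master"); // the main thread binds "master"

        StringBuilder seenByOther = new StringBuilder();
        Thread other = new Thread(() -> {
            // a fresh thread has no binding until it sets its own
            seenByOther.append(getDatabaseType()); // appends "null"
            setDatabaseType("slave");
            seenByOther.append(",").append(getDatabaseType());
        });
        other.start();
        other.join();

        System.out.println("other thread saw: " + seenByOther);        // null,slave
        System.out.println("main thread sees: " + getDatabaseType()); // master
    }
}
```

Because the aspect (section 3.3) sets the type at the start of each mapper call on the same thread that executes the query, this thread binding is exactly what routes each statement to the right pool.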

 

2.3 DynamicDataSource

This class extends AbstractRoutingDataSource, which manages our data sources, and overrides its determineCurrentLookupKey() method.

The following describes how this class manages multiple data sources.

```java
public class DynamicDataSource extends AbstractRoutingDataSource {

    @Nullable
    @Override
    protected Object determineCurrentLookupKey() {
        DatabaseType type = DatabaseContextHolder.getDatabaseType();
        logger.info("====================dataSource ==========" + type);
        return type;
    }
}
```

 

 

2.4 DataSourceConfig

The last step is to configure our data sources and register them in the DynamicDataSource:

```java
@Configuration
@MapperScan("com.jaycekon.demo.mapper")
@EnableTransactionManagement
public class DataSourceConfig {

    @Autowired
    private DataSourceProperties properties;

    @Autowired
    private Environment env;

    /**
     * Use Spring JDBC's DataSourceBuilder to quickly create the master DataSource.
     * Parameter format:
     * spring.datasource.master.jdbcurl=jdbc:mysql://localhost:3306/charles_blog
     * spring.datasource.master.username=root
     * spring.datasource.master.password=root
     * spring.datasource.master.driver-class-name=com.mysql.jdbc.Driver
     *
     * @return DataSource
     */
    @Bean(name = "masterDataSource")
    @Qualifier("masterDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource masterDataSource() {
        return DataSourceBuilder.create().build();
    }

    /**
     * Create a DruidDataSource manually, reading the configuration
     * through DataSourceProperties.
     * Parameter format:
     * spring.datasource.url=jdbc:mysql://localhost:3306/charles_blog
     * spring.datasource.username=root
     * spring.datasource.password=root
     * spring.datasource.driver-class-name=com.mysql.jdbc.Driver
     *
     * @return DataSource
     * @throws SQLException
     */
    @Bean(name = "slaveDataSource")
    @Qualifier("slaveDataSource")
    public DataSource slaveDataSource() throws SQLException {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl(properties.getUrl());
        dataSource.setDriverClassName(properties.getDriverClassName());
        dataSource.setUsername(properties.getUsername());
        dataSource.setPassword(properties.getPassword());
        dataSource.setInitialSize(5);
        dataSource.setMinIdle(5);
        dataSource.setMaxActive(100);
        dataSource.setMaxWait(60000);
        dataSource.setTimeBetweenEvictionRunsMillis(60000);
        dataSource.setMinEvictableIdleTimeMillis(300000);
        dataSource.setValidationQuery("SELECT 'x'");
        dataSource.setTestWhileIdle(true);
        dataSource.setTestOnBorrow(false);
        dataSource.setTestOnReturn(false);
        dataSource.setPoolPreparedStatements(true);
        dataSource.setMaxPoolPreparedStatementPerConnectionSize(20);
        dataSource.setFilters("stat,wall");
        return dataSource;
    }

    /**
     * Build the multi-data-source connection pool.
     * The master pool uses HikariDataSource; the slave pool uses DruidDataSource.
     *
     * @param master
     * @param slave
     * @return
     */
    @Bean
    @Primary
    public DynamicDataSource dataSource(@Qualifier("masterDataSource") DataSource master,
                                        @Qualifier("slaveDataSource") DataSource slave) {
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put(DatabaseType.master, master);
        targetDataSources.put(DatabaseType.slave, slave);

        DynamicDataSource dataSource = new DynamicDataSource();
        // setTargetDataSources is inherited from AbstractRoutingDataSource
        dataSource.setTargetDataSources(targetDataSources);
        // slave is used as the default data source
        dataSource.setDefaultTargetDataSource(slave);
        return dataSource;
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(@Qualifier("masterDataSource") DataSource master,
                                               @Qualifier("slaveDataSource") DataSource slave) throws Exception {
        SqlSessionFactoryBean fb = new SqlSessionFactoryBean();
        fb.setDataSource(this.dataSource(master, slave));
        fb.setTypeAliasesPackage(env.getProperty("mybatis.type-aliases-package"));
        fb.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(env.getProperty("mybatis.mapper-locations")));
        return fb.getObject();
    }
}
```

 

The above code block is relatively long. Let's parse it:

  • masterDataSource and slaveDataSource create the two data sources, using HikariDataSource and DruidDataSource respectively.
  • The dataSource method puts both data sources into a DynamicDataSource for unified management.
  • The sqlSessionFactory method builds the MyBatis session factory on top of the DynamicDataSource, so all data sources are managed in one place.

 

2.5 UserMapperTest

Next, let's take a brief look at the DataSource creation process:

First, we can see that our two data sources are built, using HikariDataSource and DruidDataSource respectively. Both are then put into targetDataSources, and slave is set as the default data source (defaultTargetDataSource).

      

Then, obtain the data source:

The decision is made in the determineTargetDataSource() method of AbstractRoutingDataSource, which calls our determineCurrentLookupKey() in DynamicDataSource to decide which data source to use. If no data source type has been set for the current thread, the default data source is used, which is the DruidDataSource we just configured.
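The lookup-with-fallback behavior can be sketched in plain Java without Spring. The class and method names below (MiniRoutingDataSource, lookup, addTarget, setDefaultTarget) are illustrative, not Spring's real API; only the logic mirrors determineTargetDataSource():

```java
import java.util.HashMap;
import java.util.Map;

// A minimal, Spring-free sketch of AbstractRoutingDataSource's routing logic:
// resolve a per-thread key, look it up in the registered targets, and fall
// back to the default data source when no key (or an unknown key) is set.
public class MiniRoutingDataSource {

    private final Map<String, String> targetDataSources = new HashMap<>();
    private String defaultTargetDataSource;

    // Mirrors DatabaseContextHolder: the lookup key is stored per thread.
    private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();

    public static void setDatabaseType(String type) {
        contextHolder.set(type);
    }

    public void addTarget(String key, String pool) {
        targetDataSources.put(key, pool);
    }

    public void setDefaultTarget(String pool) {
        defaultTargetDataSource = pool;
    }

    // Equivalent of determineTargetDataSource(): thread-bound key if present,
    // otherwise the default data source.
    public String lookup() {
        String pool = targetDataSources.get(contextHolder.get());
        return pool != null ? pool : defaultTargetDataSource;
    }

    public static void main(String[] args) {
        MiniRoutingDataSource ds = new MiniRoutingDataSource();
        ds.addTarget("master", "HikariDataSource");
        ds.addTarget("slave", "DruidDataSource");
        ds.setDefaultTarget("DruidDataSource");

        System.out.println(ds.lookup()); // no key set -> default (DruidDataSource)
        setDatabaseType("master");
        System.out.println(ds.lookup()); // -> HikariDataSource
    }
}
```

This is why the unit test below hits the Druid pool: nothing has set a type on the test thread yet, so the default target wins.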

 

In the final run result, we can see that the default data source we set is the one being used.

 

 

3. Read/write splitting

After all that groundwork, we finally arrive at the read/write splitting module. First, we need to add some configuration:

```properties
spring.datasource.read=get,select,count,list,query
spring.datasource.write=add,create,update,delete,remove,insert
```

These two properties list the method-name prefixes that should go to the read data source and the write data source, respectively.
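The routing-by-method-name idea can be sketched without Spring: split each property value into a prefix list, then match a mapper method name against the lists. The class name MethodTypeRouter and its methods are illustrative stand-ins for the DynamicDataSource/aspect pair shown below:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the prefix-based routing rule: method names starting with a
// "read" keyword go to the slave, "write" keywords go to the master.
public class MethodTypeRouter {

    private final Map<String, List<String>> methodTypeMap = new HashMap<>();

    // Same shape as DynamicDataSource.setMethodType: split the comma-separated
    // property value into a list of prefixes for the given data source type.
    public void setMethodType(String type, String csv) {
        methodTypeMap.put(type, Arrays.asList(csv.split(",")));
    }

    // Returns the data source type whose prefix list matches the method name,
    // or null when no rule matches.
    public String route(String methodName) {
        for (Map.Entry<String, List<String>> e : methodTypeMap.entrySet()) {
            for (String prefix : e.getValue()) {
                if (methodName.startsWith(prefix)) {
                    return e.getKey();
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        MethodTypeRouter router = new MethodTypeRouter();
        router.setMethodType("slave", "get,select,count,list,query");
        router.setMethodType("master", "add,create,update,delete,remove,insert");

        System.out.println(router.route("selectUserById")); // slave
        System.out.println(router.route("insertUser"));     // master
    }
}
```

Note that this convention only works if mapper method names follow the configured prefixes consistently; a method named, say, findUser would match neither list and fall through to the default data source.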

 

3.1 Modify DynamicDataSource
```java
public class DynamicDataSource extends AbstractRoutingDataSource {

    static final Map<DatabaseType, List<String>> METHOD_TYPE_MAP = new HashMap<>();

    @Nullable
    @Override
    protected Object determineCurrentLookupKey() {
        DatabaseType type = DatabaseContextHolder.getDatabaseType();
        logger.info("====================dataSource ==========" + type);
        return type;
    }

    void setMethodType(DatabaseType type, String content) {
        List<String> list = Arrays.asList(content.split(","));
        METHOD_TYPE_MAP.put(type, list);
    }
}
```

Here we add a Map to record the method-name prefixes associated with each data source type.

 

 

3.2 Modify DataSourceConfig

In DataSourceConfig, when we create the DynamicDataSource bean, we now also set the prefix information:

```java
@Bean
@Primary
public DynamicDataSource dataSource(@Qualifier("masterDataSource") DataSource master,
                                    @Qualifier("slaveDataSource") DataSource slave) {
    Map<Object, Object> targetDataSources = new HashMap<>();
    targetDataSources.put(DatabaseType.master, master);
    targetDataSources.put(DatabaseType.slave, slave);

    DynamicDataSource dataSource = new DynamicDataSource();
    // setTargetDataSources is inherited from AbstractRoutingDataSource
    dataSource.setTargetDataSources(targetDataSources);
    // slave is used as the default data source
    dataSource.setDefaultTargetDataSource(slave);

    String read = env.getProperty("spring.datasource.read");
    dataSource.setMethodType(DatabaseType.slave, read);
    String write = env.getProperty("spring.datasource.write");
    dataSource.setMethodType(DatabaseType.master, write);

    return dataSource;
}
```

 

3.3 DataSourceAspect

After configuring the read/write method prefixes, we need an aspect that intercepts mapper calls and sets the data source before each Mapper method executes.

The key operation is DatabaseContextHolder.setDatabaseType(type); combined with the data source lookup described above, this is what selects the read or write data source.

```java
@Aspect
@Component
@EnableAspectJAutoProxy(proxyTargetClass = true)
public class DataSourceAspect {

    private static Logger logger = LoggerFactory.getLogger(DataSourceAspect.class);

    @Pointcut("execution(* com.jaycekon.demo.mapper.*.*(..))")
    public void aspect() {
    }

    @Before("aspect()")
    public void before(JoinPoint point) {
        String className = point.getTarget().getClass().getName();
        String method = point.getSignature().getName();
        String args = StringUtils.join(point.getArgs(), ",");
        logger.info("className:{}, method:{}, args:{}", className, method, args);
        try {
            for (DatabaseType type : DatabaseType.values()) {
                List<String> values = DynamicDataSource.METHOD_TYPE_MAP.get(type);
                for (String key : values) {
                    if (method.startsWith(key)) {
                        logger.info(">>> method {} matches prefix {} <<<", method, key);
                        DatabaseContextHolder.setDatabaseType(type);
                        DatabaseType types = DatabaseContextHolder.getDatabaseType();
                        logger.info(">>> method {} uses data source {} <<<", method, types);
                    }
                }
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }
}
```

    

   

3.4 UserMapperTest

When a mapper method is invoked, execution first enters the aspect, which sets the data source type according to the method name.

 

Then the determineTargetDataSource() method obtains the corresponding data source:

Running result:

 

 

 

 

4. Final words

I hope you found this article helpful. If so, please give the project a Star or Fork on GitHub.

Spring-Blog project GitHub address: https://github.com/jaycekon/Spring-Blog

Demo project GitHub address: https://github.com/jaycekon/SpringBoot
