We briefly introduced ActiveMQ message persistence in the second article; today we will analyze ActiveMQ's persistence process and its pluggable persistence stores in detail. In a production environment, guaranteeing message reliability inevitably means dealing with message persistence, so let's tackle it together.
1. Introduction to the persistence options
We also briefly mentioned the pluggable message stores that ActiveMQ provides. To recap, the main options are: the AMQ message store, a file-based store that was the default in early ActiveMQ versions; the KahaDB message store, which improves capacity and recoverability and is the default store today; the JDBC message store, which persists messages through JDBC into a relational database; and the memory message store, which keeps messages in memory. Since memory is not persistent (and if all you need is an in-memory queue you should consider a more suitable product, such as ZeroMQ), the memory store is outside the scope of this discussion.
These storage options do not differ in the logical model of the message store, only in performance and in how data is physically stored. The persistence behaviour does, however, differ between the two messaging models, point-to-point and publish/subscribe:
For a point-to-point (queue) message, the message is removed from the broker as soon as a consumer finishes consuming it. For a publish/subscribe (topic) message, even after all subscribers have consumed it, the broker does not necessarily delete the now-unneeded message immediately; it keeps the delivery history and clears unneeded messages asynchronously. For each subscriber, the broker records the offset of the last message that subscriber has consumed, so that messages are not redelivered next time. Because messages are consumed in order (first in, first out), recording only the position of the last consumed message is enough.
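To make that subscriber-tracking behaviour concrete, here is a minimal plain-JMS sketch (the broker URL matches the earlier examples; the client ID, subscription name and topic name are made up for illustration). It registers a durable topic subscriber, which is what causes the broker to keep messages for this subscriber and remember its last-acknowledged position across restarts:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://127.0.0.1:61616");
        Connection connection = factory.createConnection();
        // A durable subscription is identified by clientID + subscription name,
        // which is how the broker remembers how far this subscriber has consumed.
        connection.setClientID("demo-client");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("demo-topic");

        // Messages published while this subscriber is offline are retained by the
        // broker and delivered when a subscriber with the same name reconnects.
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "demo-subscription");
        Message message = subscriber.receive(5000); // wait up to 5s for a message
        if (message instanceof TextMessage) {
            System.out.println(((TextMessage) message).getText());
        }
        connection.close();
    }
}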
Since AMQ has by now been replaced by KahaDB, and JDBC message storage is still widely used in large companies with high reliability requirements and modest performance requirements, this article focuses on the use of these two persistence options.
2. KahaDB
Before discussing KahaDB, we should still mention its predecessor, AMQ. AMQ is a file-based store characterised by fast writes and easy recovery. Messages are stored in data files with a default size of 32MB; once a file exceeds this size, subsequent messages are written to the next file. When every message in a file has been consumed, the file is marked as deletable, and it is removed during the next cleanup phase.
If you need persistence, add the following configuration to the applicationContext-activemq.xml file from the previous article:
<persistenceAdapter>
    <kahaDB directory="activemq-data" journalMaxFileLength="32mb"/>
</persistenceAdapter>
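For comparison, the same two settings can also be applied in code if you run an embedded broker. The following is only a sketch (the class name and connector address are illustrative, not taken from the example project):

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class EmbeddedKahaDbBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker");
        broker.setPersistent(true);

        // Equivalent of <kahaDB directory="activemq-data" journalMaxFileLength="32mb"/>
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        kahaDb.setDirectory(new File("activemq-data"));
        kahaDb.setJournalMaxFileLength(32 * 1024 * 1024); // value is in bytes
        broker.setPersistenceAdapter(kahaDb);

        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}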
The KahaDB properties are listed in the following table:
Property name | Default value | Description
directory | activemq-data | Directory in which the message data files and logs are stored
indexWriteBatchSize | 1000 | Number of index entries batched together; the index is written to the data file once this many updates are pending
indexCacheSize | 1000 | Number of index pages cached in memory
enableIndexWriteAsync | false | Whether the index is written to the data file asynchronously
journalMaxFileLength | 32mb | Maximum size of a single message data (journal) file
enableJournalDiskSyncs | true | Whether each journal write is synced to disk before the write is acknowledged
cleanupInterval | 30000 | Interval of the cleanup operation, in ms
checkpointInterval | 5000 | Interval at which the index is checkpointed to the data file, in ms
ignoreMissingJournalFiles | false | Whether to ignore missing journal files; if false, startup fails with an exception when a journal file is missing
checkForCorruptJournalFiles | false | If true, check journal files for corruption on startup and attempt to repair them
checksumJournalFiles | false | Generate a checksum for journal files so that corruption can be detected

Properties valid from version 5.4 onwards:
archiveDataLogs | false | If true, fully consumed journal files are moved to directoryArchive instead of being deleted
directoryArchive | null | Directory in which archived journal files are stored
databaseLockedWaitDelay | 10000 | Delay before retrying to acquire the file lock, in ms
maxAsyncJobs | 10000 | Maximum number of asynchronous messages from the same producer queued while waiting to be written
concurrentStoreAndDispatchTopics | false | Whether topic messages are dispatched to consumers concurrently with being stored
concurrentStoreAndDispatchQueues | true | Whether queue messages are dispatched to consumers concurrently with being stored

Properties valid from version 5.6 onwards:
archiveCorruptedIndex | false | Whether corrupted index files are archived
Because KahaDB is the default persistence store in ActiveMQ 5.4+, ActiveMQ will start KahaDB even if you do not configure any KahaDB parameters at all. In that case the KahaDB files live under /data/{broker.Name}/KahaDB beneath your ActiveMQ installation directory, where {broker.Name} is the name of this ActiveMQ service node. If you start the service and send a message, you will see that this directory contains the journal data files (db-<n>.log), the index file (db.data), the index redo file (db.redo) and a lock file.
For a formal production environment, though, it is recommended to set the KahaDB working parameters explicitly in the main configuration file:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker" persistent="true" useShutdownHook="false">
    ...
    <persistenceAdapter>
        <kahaDB directory="activemq-data"
                journalMaxFileLength="32mb"
                concurrentStoreAndDispatchQueues="false"
                concurrentStoreAndDispatchTopics="false"/>
    </persistenceAdapter>
</broker>
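One thing worth stressing: persistence also depends on the producer. Only messages sent with DeliveryMode.PERSISTENT (the JMS default) are written to the store; NON_PERSISTENT messages are kept only in memory and do not survive a broker restart. A minimal producer sketch, using the broker URL and the first-queue destination from the earlier example project:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentProducerSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://127.0.0.1:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("first-queue");
        MessageProducer producer = session.createProducer(queue);

        // PERSISTENT (the JMS default) makes the broker write the message to the
        // KahaDB journal, so it survives a broker restart.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("hello, persistent world"));

        connection.close();
    }
}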
3. Relational database storage
Starting with ActiveMQ 4, ActiveMQ supports persistent storage in a relational database through a JDBC connection. The supported relational databases include the mainstream databases currently on the market.
To persist messages with JDBC, we have to modify the earlier configuration file.
Find this section:
<persistenceAdapter>
    <kahaDB directory="${activemq.base}/data/kahadb"/>
</persistenceAdapter>
and change it to the following:
<persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
</persistenceAdapter>
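(If you run an embedded broker instead of editing XML, the same wiring can be done programmatically; the sketch below is only an illustration, and its data source settings anticipate the MySQL sample that follows.)

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
import org.apache.commons.dbcp.BasicDataSource;

public class EmbeddedJdbcBroker {
    public static void main(String[] args) throws Exception {
        // Data source equivalent to the mysql-ds bean configured below.
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://localhost:3306/activemqdb?relaxAutoCommit=true");
        dataSource.setUsername("root");
        dataSource.setPassword("root");

        // Equivalent of <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
        JDBCPersistenceAdapter jdbcAdapter = new JDBCPersistenceAdapter();
        jdbcAdapter.setDataSource(dataSource);

        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker");
        broker.setPersistent(true);
        broker.setPersistenceAdapter(jdbcAdapter);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}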
After the persistence adapter node above, add the data source configuration as follows:
<!-- MySql DataSource Sample Setup -->
<bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://localhost:3306/activemqdb?relaxAutoCommit=true&amp;useUnicode=true&amp;characterEncoding=utf-8"/>
    <property name="username" value="root"/>
    <property name="password" value="root"/>
    <property name="poolPreparedStatements" value="true"/>
</bean>

<!-- Oracle DataSource Sample Setup -->
<bean id="oracle-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@localhost:1521:activemqdb"/>
    <property name="username" value="root"/>
    <property name="password" value="root"/>
    <property name="poolPreparedStatements" value="true"/>
</bean>

<!-- DB2 DataSource Sample Setup -->
<bean id="db2-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.ibm.db2.jcc.DB2Driver"/>
    <property name="url" value="jdbc:db2://hndb02.bf.ctc.com:50002/activemq"/>
    <property name="username" value="root"/>
    <property name="password" value="root"/>
    <property name="maxActive" value=""/>
    <property name="poolPreparedStatements" value="true"/>
</bean>
Or, in the earlier example project, we can change the applicationContext-activemq.xml configuration as follows:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:amq="http://activemq.apache.org/schema/core"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-4.1.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-4.1.xsd
           http://www.springframework.org/schema/mvc
           http://www.springframework.org/schema/mvc/spring-mvc-4.1.xsd
           http://activemq.apache.org/schema/core
           http://activemq.apache.org/schema/core/activemq-core-5.12.1.xsd">

    <context:component-scan base-package="cn.edu.hust.activemq"/>
    <mvc:annotation-driven/>

    <amq:connectionFactory id="amqConnectionFactory"
                           brokerURL="tcp://127.0.0.1:61616"
                           userName="admin"
                           password="admin"/>

    <!-- Configure the JMS connection factory -->
    <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
        <constructor-arg ref="amqConnectionFactory"/>
        <property name="sessionCacheSize" value=""/>
    </bean>

    <!-- Define the message queue (Queue) -->
    <bean id="demoQueueDestination" class="org.apache.activemq.command.ActiveMQQueue">
        <!-- Set the name of the message queue -->
        <constructor-arg>
            <value>first-queue</value>
        </constructor-arg>
    </bean>

    <!-- Configure the JMS template (Queue), the JMS helper class provided by Spring for sending and receiving messages -->
    <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="defaultDestination" ref="demoQueueDestination"/>
        <property name="receiveTimeout" value="10000"/>
        <!-- true means Topic, false means Queue; the default is false, written out explicitly here -->
        <property name="pubSubDomain" value="false"/>
    </bean>

    <!-- Configure the message queue listener (Queue) -->
    <bean id="queueMessageListener" class="cn.edu.hust.activemq.filter.QueueMessageListener"/>

    <!-- Explicitly inject the message listener container (Queue): configure the connection factory,
         the listening target is demoQueueDestination, and the listener is the one defined above -->
    <bean id="queueListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="destination" ref="demoQueueDestination"/>
        <property name="messageListener" ref="queueMessageListener"/>
    </bean>

    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="localhost"
            dataDirectory="${activemq.data}"
            persistent="true">
        <!--
        <persistenceAdapter>
            <kahaDB directory="${activemq.data}/kahadb"/>
        </persistenceAdapter>
        -->
        <persistenceAdapter>
            <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#mysql-ds">
            </jdbcPersistenceAdapter>
        </persistenceAdapter>
    </broker>

    <bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://127.0.0.1/activemq?relaxAutoCommit=true"/>
        <property name="username" value="root"/>
        <property name="password" value="123456"/>
        <property name="maxActive" value=""/>
        <property name="poolPreparedStatements" value="true"/>
    </bean>
</beans>
When you restart MQ, you will find three new tables in the database: activemq_acks, activemq_lock and activemq_msgs. This means that ActiveMQ persistence is working.
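If you want to verify this from the database side, a quick plain-JDBC sketch (connection settings copied from the mysql-ds bean above; this is just a check, not part of ActiveMQ itself) can count the rows currently stored in activemq_msgs:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckPersistedMessages {
    public static void main(String[] args) throws Exception {
        // Same connection settings as the mysql-ds data source defined above.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://127.0.0.1/activemq?relaxAutoCommit=true", "root", "123456");
             Statement stmt = conn.createStatement();
             // activemq_msgs holds one row per persisted, not-yet-removed message.
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM activemq_msgs")) {
            if (rs.next()) {
                System.out.println("Persisted messages: " + rs.getInt(1));
            }
        }
    }
}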
activemq_acks: stores subscription relationships. For a durable topic, the subscription relationship between subscriber and broker is saved in this table. The main columns are:
container: the message destination
sub_dest: if a static cluster is used, this column holds information about the other brokers in the cluster
client_id: each subscriber must have a unique client ID, used to tell subscribers apart
sub_name: the subscriber name
selector: the selector; a subscriber can choose to consume only messages that match a condition. Conditions can be written against custom properties and support multiple properties combined with AND and OR
last_acked_id: the ID of the last acknowledged (consumed) message
activemq_lock: only useful in a clustered environment. Only one broker at a time can acquire the lock and become the master broker; the others can only wait as backups until the master becomes unavailable, at which point one of them becomes the next master. This table records which broker is the current master broker.
activemq_msgs: stores the messages; both queue and topic messages end up in this table. The main columns are:
id: the database primary key
container: the message destination
msgid_prod: the primary key of the message producer (client)
msg_seq: the sequence number of the message as sent; msgid_prod + msg_seq together make up the JMS MessageID
expiration: the expiration time of the message, in milliseconds since 1970-01-01
msg: the message body as a serialized Java object, stored as binary data
priority: the priority, from 0 to 9; the higher the value, the higher the priority