How to use Scala's language features for elegant dependency injection


What is dependency injection?

Objects are the basic building blocks of the object-oriented world, and dependency injection is what wires objects together. In its simplest form, a dependency is injected into an object through a constructor or a setter method.
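As a minimal illustration (the names below are invented for the example), constructor injection in plain Scala looks like this:

trait UserRepository {
  def find(id: Long): Option[String]
}

class MongoUserRepository extends UserRepository {
  override def find(id: Long): Option[String] = Some(s"user-$id") // stub standing in for a real DB lookup
}

// The dependency arrives through the constructor; the class never creates it itself.
class UserService(repo: UserRepository) {
  def userName(id: Long): String = repo.find(id).getOrElse("unknown")
}

object Wiring {
  // Wiring happens at the call site (or is delegated to a container).
  val service = new UserService(new MongoUserRepository)
}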

If we use an appropriate container, we can go further and pull the dependencies of every component in the system out into a configuration file or configuration code, and have the container inject them when needed. This is what I call elegant dependency injection.

The real benefit of dependency injection is decoupling.

Used together with interfaces, abstract classes, and so on, it pays off in programs that are easy to extend and easy to maintain.

How to do it in Scala

In Scala you can use the traditional Java dependency injection frameworks, such as Guice, but without exception they all require an external framework to work.
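For comparison, a minimal sketch of framework-based injection with Guice might look like the following (Database, MongoBackedDatabase and UserStore are invented names for illustration):

import com.google.inject.{AbstractModule, Guice, Inject}

trait Database { def query(sql: String): Seq[String] }
class MongoBackedDatabase extends Database {
  override def query(sql: String): Seq[String] = Seq.empty // stub
}

// The consumer declares its dependency with an @Inject-annotated constructor.
class UserStore @Inject() (db: Database) {
  def allUsers(): Seq[String] = db.query("select * from users")
}

// The binding lives in a module; the container resolves the object graph at runtime.
class AppModule extends AbstractModule {
  override def configure(): Unit = {
    bind(classOf[Database]).to(classOf[MongoBackedDatabase])
  }
}

object GuiceExample extends App {
  val injector = Guice.createInjector(new AppModule)
  val store = injector.getInstance(classOf[UserStore])
}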

With Scala, however, you can also get elegant dependency injection without any framework at all, by relying on the cake pattern. That is the main topic of this article.
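Before the real example, here is a minimal, self-contained sketch of the cake pattern (all names are invented for illustration): each dependency is declared inside a wrapping trait, consumers declare what they need with a self-type, and the concrete wiring happens in a single class at the end.

// Step 1: wrap each injectable dependency in a component trait.
trait GreeterComponent {
  trait Greeter { def greet(name: String): String }
}

trait EnglishGreeterComponent extends GreeterComponent {
  class EnglishGreeter extends Greeter {
    override def greet(name: String): String = s"Hello, $name"
  }
}

// Step 2: the consumer states, via a self-type, which components must be mixed in.
trait AppComponent { this: GreeterComponent =>
  val greeter: Greeter
  def run(): Unit = println(greeter.greet("cake pattern"))
}

// Step 3: assemble the cake; swapping EnglishGreeterComponent for another
// implementation is the whole "injection".
object App extends AppComponent with EnglishGreeterComponent {
  override val greeter: Greeter = new EnglishGreeter
}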

Let's look at a real example.

Requirement

I recently worked on the requirement "add column-filter support to the compute modules", which has two parts.
The first part is the refactoring of the system code. The second is staying compatible with existing flowcharts: the Workflow.param field of all 24 relevant existing modules has to be updated to the new version of the JSON.
I will use the second part to show how to implement dependency injection with the cake pattern.

Design

Looking at this requirement, the obvious things to do are:
1. Read the JSON in the Workflow.param field of each module from MongoDB.
2. Convert the old JSON from step 1 into the new version of the JSON.
3. Write the new JSON from step 2 back to the Workflow.param field.

If you implement steps 1, 2 and 3 literally, you can get away with a simple program of three files:

one file with the JSON conversion logic for the 24 modules, one file with the MongoDB read/write logic, and one file that loops over the modules and invokes the JSON parsing and updating logic for each.

But have you ever considered:

What if a new MongoDB driver appears later with excellent performance and you want to replace the existing DB access logic?

What if MongoDB is later replaced by MySQL?

What if a new JSON parsing library appears later with excellent performance and you want to replace the existing JSON processing logic?

What if some modules later need additional logic that transforms the JSON into XML?

What if you later want to write logs not only with log4j, but also with a module of your own that outputs logs to HDFS?

...

Admittedly, for the current requirement these extension points are superfluous; they are here purely to demonstrate the extensibility and maintainability that elegant dependency injection buys you.

Still, it pays to think a bit further during design: it trains your ability to design programs, and if the extensions really do come later, they will be much easier to make, and you will be glad you did.

You need to find a balance between ease of extension and maintenance on one hand and implementation effort on the other.

So my design looks like this:
1. Abstract a JSON parsing layer, treating JSON parsing as a component; with 24 modules there will be 24 JSON parsing components.
2. Abstract a data access layer, treating database read/write as a component; currently MongoDB is the only database, so there is only a MongoDB component.
3. Abstract a log output layer, treating log output as a component.
4. Abstract a job execution layer that is assembled from the components above; it can be designed in one of two ways, A or B below.

A. Wire both the JSON component and the DB component into each job, so that every job module processes its own JSON and updates its own fields.

B. Wire only the JSON component into each job, collect the new JSON results of all jobs outside the jobs, and update the database in one batch.

This time I chose scheme B, because with scheme A there are too many database connections: the database reports "too many open files" and then falls over.
5. Abstract a launch (loading) layer consisting of a DB component and a log component, which invokes the JSON transformation of the 24 modules and the batch update of the database.

The first three kinds of components described above are injected into the job layer or the launch layer using the cake pattern.

Coding: interface definitions

Let's first look at the abstract interface of each layer.

Data access layer: DbPluginComponent

trait DbPluginComponent {
  trait DbPlugin[R, U] {
    def queryByNodeTypes(nodeTypes: Seq[NodeType]): Future[Seq[R]]
    def bulkUpdateParam(workflowWithParamSeq: Seq[(R, String)]): Future[U]
  }
}

A trait nested inside another trait may not look familiar, and indeed this style is not commonly used, but it is part of the cake pattern:
the first step is to define the dependency that will be injected.
The dependency defined here is the data access component.

JSON parsing layer: JsonPluginComponent

trait JsonPluginComponent {
  trait JsonPlugin[O] {
    def fromOld(jsonStr: String): O = {
      val json = Try(JSON.parseArray(jsonStr).getJSONObject(0)).getOrElse(JSON.parseObject(jsonStr))
      fromOld(json)
    }
    def fromOld(json: JSONObject): O = throw new UnsupportedOperationException
    def toNew(oldObj: O): String
    def transform(json: String): String = {
      toNew(fromOld(json))
    }
    protected def toSelectedColumnsJson(columns: JSONArray): JSONObject = {
      val json = new JSONObject()
      json.put("isSpecified", true)
      json.put("specifiedColumns", columns)
      json
    }
  }
}

The dependency defined here is the JSON parsing component.
The fromOld methods convert the old JSON into an intermediate structure, the toNew method converts the intermediate structure into the new version of the JSON, and the transform method is the one invoked by callers.

Log output layer: LoggerComponent

trait LoggerComponent {
  trait Logger {
    def info(s: String): Unit
    def warn(s: String): Unit
    def warn(s: String, e: Throwable): Unit
    def error(s: String): Unit
    def error(s: String, e: Throwable): Unit
    def debug(s: String): Unit
  }
}

The dependency defined here is the log output component.

Job execution layer: JobComponent

trait JobComponent[R <: NodeTypeWithParam] {
  this: JsonPluginComponent =>

  val json: JsonPlugin[_]

  def work(workflow: R): (R, String) = {
    val newJson = json.transform(workflow.param)
    (workflow, newJson)
  }
}

A job component is defined here. To tell the program that a JSON parsing component must be injected, the following line is used:

this: JsonPluginComponent =>

This notation is called a self-type annotation, and it guarantees the type safety of the cake pattern:
any class that mixes in the JobComponent trait must also mix in the JsonPluginComponent trait, otherwise it will not compile.
This is the second step of the cake pattern: declaring which type of dependency must be injected.
The following line tells the job that a JSON parsing component is required; which concrete component it is, and where it comes from, is decided by the downstream implementation.

val json: JsonPlugin[_]
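To make the compile-time guarantee concrete, here is a hypothetical sketch; FooJsonComponent, FooJson, FooJob and BrokenJob are invented names, and MongoWorkflowParam is the workflow type used later in this article.

// A JSON component and a job that mixes it in: this satisfies the self-type and compiles.
trait FooJsonComponent extends JsonPluginComponent {
  class FooJson extends JsonPlugin[String] {
    override def fromOld(json: JSONObject): String = json.toJSONString
    override def toNew(oldObj: String): String = oldObj
  }
}

class FooJob extends JobComponent[MongoWorkflowParam] with FooJsonComponent {
  override val json: JsonPlugin[_] = new FooJson
}

// Rejected by the compiler: no JsonPluginComponent is mixed in, so the
// self-type of JobComponent is not satisfied.
// class BrokenJob extends JobComponent[MongoWorkflowParam] {
//   override val json: JsonPlugin[_] = ???
// }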

The work method defines how the task uses the injected component to do its work; implementation classes do not need to care about this method, only about which concrete component to inject.

Launch layer: LaunchComponent

trait LaunchComponent[R <: NodeTypeWithParam, U] {
  this: DbPluginComponent with LoggerComponent =>

  val db: DbPlugin[R, U]
  val logger: Logger

  def launch(): Unit = {
    // nodeType -> fully qualified class name of the job component, read from configuration
    val filterNodesConfMap: Map[NodeType, String] =
      ConfigurationFactory.get.getConfigList("filter.nodes").asScala.map(FilterNode.apply).toMap
    val nodeJobMap = new mutable.HashMap[NodeType, JobComponent[R]]
    val filterNodeTypes = filterNodesConfMap.keys.toSeq
    val filterNodes = Await.result(db.queryByNodeTypes(filterNodeTypes), second) // "second" is a timeout Duration defined elsewhere in the project
    logger.info(s"queriedCount = ${filterNodes.size}")
    val newParamSeq = filterNodes.map { n =>
      // instantiate the job component for this node type reflectively and cache it
      val job = nodeJobMap.getOrElse(n.nodeType, {
        val instance = this.getClass.getClassLoader
          .loadClass(filterNodesConfMap(n.nodeType))
          .newInstance()
          .asInstanceOf[JobComponent[R]]
        nodeJobMap += (n.nodeType -> instance)
        instance
      })
      job.work(n)
    }
    workResultHandler(newParamSeq)
  }

  def workResultHandler(seq: Seq[(R, String)]): Unit
}

A launch component is defined here. It tells the program that the data access component and the log output component must be injected, again using a self-type annotation to guarantee type safety.
The following lines tell the launch component that a data access component and a log output component are required; which concrete components they are, and where they come from, is decided by the downstream implementation.

  val db: DbPlugin[R, U]
  val logger: Logger

The launch method describes how each job component is loaded to complete the task; the required job-layer components are instantiated dynamically based on a configuration file.
The workResultHandler method describes how the results produced by all the job components are handled, and must be implemented downstream.

Interface implementations

Now let's look at how each layer is implemented; for the JSON parsing layer and the job execution layer, only one of the modules is shown to illustrate the interfaces.

Data access layer, MongoDB component: MongoDbComponent

trait MongoDbComponent extends DbPluginComponent {
  class Db extends DbPlugin[MongoWorkflowParam, BulkWriteResult] {
    private val uri = ConfigurationFactory.get.getString("db.mongo.uri")
    private val dbName = ConfigurationFactory.get.getString("db.mongo.dbName")
    private val db = MongoClient(uri).getDatabase(dbName)
    private val collectionName = "workflow"

    import org.mongodb.scala.bson.codecs.Macros._
    private implicit val codecRegistry: CodecRegistry = fromRegistries(
      DEFAULT_CODEC_REGISTRY,
      fromCodecs(new NodeTypeCodec),
      fromProviders(classOf[MongoWorkflowParam]))
    private val collection =
      db.getCollection[MongoWorkflowParam](collectionName).withCodecRegistry(codecRegistry)

    override def queryByNodeTypes(nodeTypes: Seq[NodeType]): Future[Seq[MongoWorkflowParam]] = {
      if (nodeTypes.isEmpty) Future(Seq.empty[MongoWorkflowParam])
      else {
        collection.find(
          and(
            in("nodeType", nodeTypes: _*),
            notEqual("param", ""),
            notEqual("param", "[]"),
            notEqual("param", null),
            notEqual("param", "{}")))
          .projection(include("id", "nodeType", "param"))
          .toFuture()
      }
    }

    override def bulkUpdateParam(workflowWithParamSeq: Seq[(MongoWorkflowParam, String)]): Future[BulkWriteResult] = {
      val writes: Seq[WriteModel[_ <: MongoWorkflowParam]] = workflowWithParamSeq.map {
        case (w, newJson) => UpdateOneModel(equal("_id", w._id), set("param", newJson))
      }
      collection.bulkWrite(writes).toFuture()
    }
  }
}

There is nothing special about downstream components: each one implements the methods of the interface according to its own characteristics. This one implements MongoDB access using the mongo-scala-driver.

Data access layer, dummy test component: DummyDbComponent

trait DummyDbComponent extends DbPluginComponent {
  class Db extends DbPlugin[MongoWorkflowParam, BulkWriteResult] {
    private val uri = ConfigurationFactory.get.getString("db.mongo.uri")
    private val dbName = ConfigurationFactory.get.getString("db.mongo.dbName")
    private val db = MongoClient(uri).getDatabase(dbName)
    private val collectionName = "workflow"

    import org.mongodb.scala.bson.codecs.Macros._
    private implicit val codecRegistry: CodecRegistry = fromRegistries(
      DEFAULT_CODEC_REGISTRY,
      fromCodecs(new NodeTypeCodec),
      fromProviders(classOf[MongoWorkflowParam]))
    private val collection =
      db.getCollection[MongoWorkflowParam](collectionName).withCodecRegistry(codecRegistry)

    override def queryByNodeTypes(nodeTypes: Seq[NodeType]): Future[Seq[MongoWorkflowParam]] = {
      if (nodeTypes.isEmpty) Future(Seq.empty[MongoWorkflowParam])
      else {
        collection.find(
          and(
            in("nodeType", nodeTypes: _*),
            notEqual("param", ""),
            notEqual("param", "[]"),
            notEqual("param", null),
            notEqual("param", "{}")))
          .projection(include("id", "nodeType", "param"))/*.limit(1)*/
          .toFuture()
      }
    }

    // The dummy never touches the database on update; it just returns an unacknowledged result.
    override def bulkUpdateParam(workflowWithParamSeq: Seq[(MongoWorkflowParam, String)]): Future[BulkWriteResult] = {
      Future(BulkWriteResult.unacknowledged())
    }
  }
}

This implementation exists purely for testing during development:
after I had finished the JSON parsing part of the jobs, I wanted to read the old JSON parameters I needed during testing, but I did not want to actually update the database.
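Under that setup, the test wiring might look roughly like the following; TestLauncher is an invented name, and apart from the DB mixin it mirrors the real Launcher shown further down.

// Hypothetical test wiring: only the DB mixin differs from the real Launcher.
class TestLauncher extends LaunchComponent[MongoWorkflowParam, BulkWriteResult]
  with DummyDbComponent with Log4jLoggerComponent {

  override val db: DbPlugin[MongoWorkflowParam, BulkWriteResult] = new Db
  override val logger = new Log4jLogger(this.getClass)

  // Just inspect the converted JSON instead of writing anything back.
  override def workResultHandler(seq: Seq[(MongoWorkflowParam, String)]): Unit =
    seq.foreach { case (_, newJson) => logger.info(newJson) }
}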
I simply swap this dummy component into the launch layer implementation, Launcher, leave the rest of the code unchanged, and use that for testing.

JSON parsing layer, column encryption module component: ColumnEncryptJsonComponent

trait ColumnEncryptJsonComponent extends JsonPluginComponent {
  class ColumnEncryptJson extends JsonPlugin[OldColumnEncryptParam] {
    override def fromOld(json: JSONObject): OldColumnEncryptParam = {
      val retainOldColumn =
        if (json.containsKey("retainOldColumn")) Some(json.getBooleanValue("retainOldColumn")) else None
      val selectedArr =
        if (json.containsKey("selected")) Some(json.getJSONArray("selected")) else None
      OldColumnEncryptParam(selectedArr, retainOldColumn)
    }

    override def toNew(oldObj: OldColumnEncryptParam): String = {
      val rootJson = new JSONArray()
      val newJson = new JSONObject()
      oldObj.selected.map { v =>
        val selectedColumnsJson = toSelectedColumnsJson(v)
        newJson.put("selectedColumns", selectedColumnsJson)
      }
      oldObj.retainOldColumn.map(v => newJson.put("retainOldColumn", v))
      rootJson.add(newJson)
      rootJson.toJSONString
    }
  }

  case class OldColumnEncryptParam(selected: Option[JSONArray], retainOldColumn: Option[Boolean])
}

This is the concrete JSON conversion of the column encryption module; there is nothing special to explain, just read the code.

Job execution layer, column encryption module component: ColumnEncryptJob

class ColumnEncryptJob extends JobComponent[MongoWorkflowParam]
  with ColumnEncryptJsonComponent {
  override val json = new ColumnEncryptJson
}

This is the job-layer implementation for the column encryption module; it injects the JSON parsing component of the column encryption module.

Launch layer implementation component: Launcher

class Launcher extends LaunchComponent[MongoWorkflowParam, BulkWriteResult]
  with MongoDbComponent with Log4jLoggerComponent {

  override val db: DbPlugin[MongoWorkflowParam, BulkWriteResult] = new Db
  override val logger = new Log4jLogger(this.getClass)

  override def workResultHandler(seq: Seq[(MongoWorkflowParam, String)]): Unit = {
    val start = System.currentTimeMillis()
    val future = db.bulkUpdateParam(seq)
    val result = Await.result(future, second)
    val finished = System.currentTimeMillis()
    logger.info(s"Elapsed time = ${finished - start} ms, matchedCount = ${result.getMatchedCount}, " +
      s"modifiedCount = ${result.getModifiedCount}, deletedCount = ${result.getDeletedCount}, " +
      s"insertedCount = ${result.getInsertedCount}")
  }
}

This is the concrete implementation of the launch layer loader. It injects the data access component implemented with the mongo-scala-driver and the log output component implemented with log4j.
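Presumably the whole migration is then kicked off by instantiating the launcher and calling launch(); a hypothetical entry point (not shown in the original article) would be:

object Main extends App {
  new Launcher().launch()
}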
The workResultHandler method implements the logic that batch-updates the new JSON produced by all the job components back to the database.

Extension and maintenance

The above is a complete demonstration of how to use Scala's language features for elegant dependency injection.
Now let's see how this style of dependency injection lets us quickly handle the possible future changes listed at the beginning of the article.

What if a new MongoDB driver appears later with excellent performance and you want to replace the existing DB access logic?
1. Write a new data access layer MongoDB component, say MongoDbV2Component.
2. Inject the newly written MongoDbV2Component into the launch layer implementation, Launcher: the mixin of the original component

   with MongoDbComponent

   is replaced by the mixin of the new component

   with MongoDbV2Component

3. Adjust the data types required by the generics if necessary (they may not need to change).

What if MongoDB is later replaced by MySQL?

The solution is basically the same as for the previous problem, except that the new data access layer component is one written for MySQL.
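A hypothetical skeleton of such a component might look like this; MySqlDbComponent and MySqlWorkflowParam are invented names, and MySqlWorkflowParam is assumed to extend NodeTypeWithParam.

// Sketch only: a MySQL-backed data access component is just another DbPluginComponent.
trait MySqlDbComponent extends DbPluginComponent {
  class Db extends DbPlugin[MySqlWorkflowParam, Int] {
    override def queryByNodeTypes(nodeTypes: Seq[NodeType]): Future[Seq[MySqlWorkflowParam]] = ???
    override def bulkUpdateParam(workflowWithParamSeq: Seq[(MySqlWorkflowParam, String)]): Future[Int] = ???
  }
}

// In the launcher, only the mixin and the type parameters change:
// class Launcher extends LaunchComponent[MySqlWorkflowParam, Int]
//   with MySqlDbComponent with Log4jLoggerComponent { ... }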
What if a new JSON parsing library appears later with excellent performance and you want to replace the existing JSON processing logic?
1. For each module, write a new JSON parsing layer component implemented with the new library. This step saves no work, because the JSON of each module is customized.
2. Replace the original JSON parsing components with the new ones from step 1, injecting them into the corresponding job implementations.
3. Adjust the data types required by the generics if necessary (they may not need to change).

What if some modules later need additional logic that transforms the JSON into XML?
1. Add an XML parsing layer and interface, using nested traits to define the XML parsing component that will be injected.
2. Implement the interface from step 1 with an XML parsing library.
3. Inject the XML parsing implementation components into the jobs that need JSON-to-XML conversion. For example,
   with ColumnEncryptJsonComponent

   is modified to

   with ColumnEncryptJsonComponent with ColumnEncryptXmlComponent
4. Modify the job implementation, overriding the work method with its own task processing logic, i.e. old JSON -> new JSON -> new XML.

What if you later want to write logs not only with log4j, but also with a module of your own that outputs logs to HDFS?
1. Write a new log output layer component that contains the logic for writing logs to HDFS.
2. Modify the launch layer implementation, Launcher, to also inject the component from step 1. For example,
   with Log4jLoggerComponent

   is modified to

   with Log4jLoggerComponent with HdfsLoggerComponent

3. Add log output calls wherever you need logs written to HDFS.
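A hypothetical skeleton of such a component might look like the following; HdfsLoggerComponent, HdfsLogger and the write helper are invented for illustration, not code from the project.

// Sketch only: an HDFS-backed log output component is just another LoggerComponent.
trait HdfsLoggerComponent extends LoggerComponent {
  class HdfsLogger(path: String) extends Logger {
    // Append "[level] message" to a file on HDFS, e.g. via org.apache.hadoop.fs.FileSystem.
    private def write(level: String, s: String): Unit = ???

    override def info(s: String): Unit = write("INFO", s)
    override def warn(s: String): Unit = write("WARN", s)
    override def warn(s: String, e: Throwable): Unit = write("WARN", s"$s ${e.getMessage}")
    override def error(s: String): Unit = write("ERROR", s)
    override def error(s: String, e: Throwable): Unit = write("ERROR", s"$s ${e.getMessage}")
    override def debug(s: String): Unit = write("DEBUG", s)
  }
}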

With the walkthrough above, you should now know how to use the cake pattern to achieve elegant dependency injection in Scala, and appreciate its powerful decoupling and the ease of extension and maintenance it brings.
But that is only a surface-level goal: knowing how to use the cake pattern does not by itself mean you can design good programs.
What we should really learn from this case is how to design our own programs, from something as small as a module or an interface to something as large as a system or a platform.
Based on the process above, let me summarize the main points of my approach to program design.
1. Requirements analysis. This step is crucial; a mistake here may mean a rewrite later.
2. Think about how the design can make the program easy to extend and easy to maintain, and of course also consider performance.
3. Learn layered design; it simplifies complex problems.
4. Abstract at as high a level as possible.
5. Put common logic in one place, so that downstream implementations only provide the customized parts.

The complete code is available on GitHub: https://github.com/deanzz/json-transformer
