uCrop Source Code Analysis -- Flow Walkthrough


First, get a feel for the idea: run the project on a phone, play with it, see what it offers, and only then think about how those features are implemented.
Project GitHub link
First Stage
After playing around for a while you have a general impression, and you can move on to analyzing the specific features. Don't start with the layouts; start from the library's entry point, that is, how we interact with the picture. Gestures! We use gestures to change how the picture is displayed: pan, rotate, zoom. Cutting in from here keeps the whole picture much clearer.
So here is the problem: how should these functions be assigned? How many classes should I create, and what inheritance relationships should they have? This is where an author's fundamentals show, and how the code is structured is worth studying.
1. Handling of gestures
We need a class that can recognize the user's gestures and then invoke the appropriate methods. It is definitely multi-touch.
Question 1: how do we tell when the user's last finger has been released?
Once we have recognized a gesture, some method on some class must do the actual work. For example, once a pan gesture is recognized there must be a corresponding postTranslate(deltaX, deltaY) method.
Which class should hold the actual pan operation? The same question applies to scaling, rotation, cropping, and so on.
Question 2: how to decide what belongs in each class, and how to decouple them?
I did not come up with a perfect answer to this myself; it is something I would have to refine gradually during a project rather than sketching the whole outline up front. After all, I am a beginner.
The layered design
Here the author splits the work across three classes, each with a different role.
GestureCropImageView
Dedicated to recognizing the user's gestures: pan, zoom, rotate. It does not care how the actual work is done, and it does not care about cropping.
TransformImageView
The basic features: loading the image, panning, zooming, rotating, and notifying listeners about what is going on.
CropImageView
The core class. Gesture recognition and the individual transform operations are just feature points; this class acts more like a manager, tying those feature points together to deliver a good user experience and the real function of the library.

Second Stage
The first stage only gives a general impression, a macro-level context; we still have not read a single line of code.
While tossing this library around, I was curious how, when a picture is loaded, the image and the crop frame end up matching each other exactly.
Question 3: how is the cropping control initialized so that it lines up with the crop frame?
The author uses a UCropView as the outer layout. It contains the crop frame, OverlayView, and the cropping control, GestureCropImageView.
Because the library is designed to be flexible, the user can pick a fixed crop ratio (16:9, 4:3, etc.), use the source image's own aspect ratio, or adjust the crop ratio dynamically inside the control. The two controls therefore need to interact, and the cleanest way is a listener.

```java
mGestureCropImageView.setCropBoundsChangeListener(new CropImageView.CropBoundsChangeListener() {
    @Override
    public void onCropBoundsChangedRotate(float cropRatio) {
        if (mOverlayView != null) {
            mOverlayView.setTargetAspectRatio(cropRatio);
            mOverlayView.postInvalidate();
        }
    }
});
```

Let's look at the concrete usage.
In the author's layout, TransformImageView is the top-level parent class (why this architecture was chosen is also worth a moment's thought). Let's see what it does.
Then we jump out into UCropActivity to see how the data is supplied.
In the setImageData() method, the URI of the picture is passed in:

```java
mGestureCropImageView.setImageUri(inputUri, mOutputUri);
```

TransformImageView:

```java
public void setImageUri(@NonNull Uri imageUri, @NonNull Uri outputUri) throws Exception {
    mImageUri = imageUri;
    int maxBitmapSize = getMaxBitmapSize();

    BitmapLoadUtils.decodeBitmapInBackground(getContext(), imageUri, outputUri,
            maxBitmapSize, maxBitmapSize,
            new BitmapLoadCallback() {

                @Override
                public void onBitmapLoaded(@NonNull final Bitmap bitmap) {
                    mBitmapWasLoaded = true;
                    setImageBitmap(bitmap);
                    invalidate();
                }

                @Override
                public void onFailure(@NonNull Exception bitmapWorkerException) {
                    Log.e(TAG, "onFailure: setImageUri", bitmapWorkerException);
                    if (mTransformImageListener != null) {
                        mTransformImageListener.onLoadFailure(bitmapWorkerException);
                    }
                }
            });
}
```

All we need to know for now is that this method decodes the image on a background thread; on success we get the bitmap, set it on the ImageView, and trigger a re-layout and redraw.
As I understand it, this is where the earlier point, laying out the image according to the crop ratio, comes into play.
Based on the code above, the crop ratio has already been set on the cropping control inside UCropView. CropImageView overrides the onImageLaidOut() method, and the important work there is determining the crop rectangle and the image rectangle.
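The decode step caps the bitmap at a maximum size. A standard Android technique for this is computing a power-of-two inSampleSize before decoding; the sketch below is my own illustration of that calculation, not the library's exact code.

```java
// Sketch: compute a power-of-two sample size so a decoded bitmap
// fits roughly within reqWidth x reqHeight (standard Android pattern
// used with BitmapFactory.Options.inSampleSize).
class SampleSizeCalculator {
    static int calculateInSampleSize(int width, int height, int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            int halfHeight = height / 2;
            int halfWidth = width / 2;
            // keep doubling while the half-dimensions still cover the request
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }
}
```

For a 4096x3072 source and a 1024x1024 request this yields 2, i.e. the decoder loads a quarter of the pixels.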

```java
// In "use source ratio" mode, this branch runs
if (mTargetAspectRatio == SOURCE_IMAGE_ASPECT_RATIO) {
    mTargetAspectRatio = drawableWidth / drawableHeight;
}
// ... set up the crop rect
// ... set the initial image position
```

As for the detailed code that sets up the crop rect, we will pull it out and analyze it later; for now we are just combing through the overall flow.
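The crop-rect setup itself is mostly geometry: fit a rectangle of the target aspect ratio inside the view, centered. Here is a hedged sketch of that math; the helper name and return shape are mine, not the library's.

```java
// Sketch: center a crop rect of the given aspect ratio inside a view.
// Returns {left, top, right, bottom}.
class CropRectHelper {
    static float[] computeCropRect(float viewWidth, float viewHeight, float targetAspectRatio) {
        float height = viewWidth / targetAspectRatio;
        if (height <= viewHeight) {
            // crop rect spans the full width, letterboxed vertically
            float top = (viewHeight - height) / 2f;
            return new float[]{0, top, viewWidth, top + height};
        } else {
            // crop rect spans the full height, pillarboxed horizontally
            float width = viewHeight * targetAspectRatio;
            float left = (viewWidth - width) / 2f;
            return new float[]{left, 0, left + width, viewHeight};
        }
    }
}
```

E.g. a 2:1 crop in a 1000x1000 view gives a full-width band centered vertically.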

```java
if (mCropBoundsChangeListener != null) {
    mCropBoundsChangeListener.onCropBoundsChangedRotate(mTargetAspectRatio);
}
```

Once the position of the crop rect is determined, OverlayView is notified to redraw it (another detail issue).
Then we position the picture.

```java
private void setupInitialImagePosition(float drawableWidth, float drawableHeight) {
    float cropRectWidth = mCropRect.width();
    float cropRectHeight = mCropRect.height();

    float widthScale = cropRectWidth / drawableWidth;
    float heightScale = cropRectHeight / drawableHeight;

    mMinScale = Math.max(widthScale, heightScale);
    mMaxScale = mMinScale * mMaxScaleMultiplier;

    float tw = (cropRectWidth - drawableWidth * mMinScale) / 2.0f + mCropRect.left;
    float th = (cropRectHeight - drawableHeight * mMinScale) / 2.0f + mCropRect.top;

    mCurrentImageMatrix.reset();
    mCurrentImageMatrix.postScale(mMinScale, mMinScale);
    mCurrentImageMatrix.postTranslate(tw, th);
}
```

Here drawableWidth and drawableHeight are the original width and height of the picture.
Because the image and the crop frame will certainly not line up at first, we may need to pan and zoom to make them fit (another detail).
At this point the image is positioned correctly and matches the crop frame.
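The fit math above can be checked in isolation. This is a plain-Java version of the same scale-and-center calculation, stripped of the Matrix calls; the helper name and return shape are mine.

```java
// Plain version of the initial-position math: scale so the image
// covers the crop rect, then center it over the rect.
// Returns {scale, translateX, translateY}.
class InitialPosition {
    static float[] compute(float drawableWidth, float drawableHeight,
                           float cropLeft, float cropTop,
                           float cropWidth, float cropHeight) {
        float widthScale = cropWidth / drawableWidth;
        float heightScale = cropHeight / drawableHeight;
        // max => the image covers the crop rect with no gaps
        float minScale = Math.max(widthScale, heightScale);

        // center the scaled image over the crop rect
        float tw = (cropWidth - drawableWidth * minScale) / 2.0f + cropLeft;
        float th = (cropHeight - drawableHeight * minScale) / 2.0f + cropTop;
        return new float[]{minScale, tw, th};
    }
}
```

For a 200x100 image and a 100x100 crop rect at the origin: scale 1.0, and the image is shifted left by 50 so its center sits over the rect.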

Third Stage
With the image properly placed, the rest goes back to our earlier questions: recognizing gestures and calling methods.
As mentioned above, gesture recognition lives in the GestureCropImageView class.
Tip: you do not need to compute anything yourself to recognize gestures. Just use the framework's encapsulated detector classes, forward the events to them, and they will call you back.

```java
@Override
public boolean onTouchEvent(MotionEvent event) {
    if ((event.getAction() & MotionEvent.ACTION_MASK) == MotionEvent.ACTION_DOWN) {
        cancelAllAnimations();
    }

    if (event.getPointerCount() > 1) {
        mMidPntX = (event.getX(0) + event.getX(1)) / 2;
        mMidPntY = (event.getY(0) + event.getY(1)) / 2;
    }

    mGestureDetector.onTouchEvent(event);       // gesture recognition

    if (mIsScaleEnabled) {
        mScaleDetector.onTouchEvent(event);     // zoom recognition
    }

    if (mIsRotateEnabled) {
        mRotateDetector.onTouchEvent(event);    // rotation recognition
    }

    if ((event.getAction() & MotionEvent.ACTION_MASK) == MotionEvent.ACTION_UP) {
        // handle the last finger being released
        setImageToWrapCropBounds();
    }
    return true;
}
```

The callbacks then invoke the various postXxx methods. There is a small detail here; since I had not done much touch handling before, this also answers Question 1: how to tell when the last finger has been released.
With multiple fingers down, only the last finger's release triggers ACTION_UP; each earlier finger's release triggers ACTION_POINTER_UP instead.
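The distinction comes from how MotionEvent encodes actions: the low byte (ACTION_MASK = 0xff) holds the action code, which is 1 for ACTION_UP and 6 for ACTION_POINTER_UP, while higher bits carry the pointer index. The small illustration below hardcodes those framework constant values so it runs outside Android.

```java
// Illustration of MotionEvent action masking, using the framework's
// actual constant values so it runs without the Android SDK.
class ActionMaskDemo {
    static final int ACTION_MASK = 0xff;              // low byte = action code
    static final int ACTION_UP = 1;                   // last finger released
    static final int ACTION_POINTER_UP = 6;           // a non-last finger released
    static final int ACTION_POINTER_INDEX_SHIFT = 8;  // pointer index lives above the mask

    static boolean isLastFingerUp(int action) {
        return (action & ACTION_MASK) == ACTION_UP;
    }

    static boolean isNonLastFingerUp(int action) {
        return (action & ACTION_MASK) == ACTION_POINTER_UP;
    }
}
```

So the `event.getAction() & MotionEvent.ACTION_MASK` check in onTouchEvent is exactly how the code waits for the final release before snapping the image back into the crop bounds.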
All the panning, zooming, and rotating is handed off to the TransformImageView class. Let's take a look, using translation as the example:

```java
public void postTranslate(float deltaX, float deltaY) {
    if (deltaX != 0 && deltaY != 0) {
        mCurrentImageMatrix.postTranslate(deltaX, deltaY);
        setImageMatrix(mCurrentImageMatrix);
    }
}
```

The other methods are implemented through the Matrix as well. This was my first contact with it; it feels like a powerful class, and for now we only need to know how it is used here, leaving deeper study for when it is needed.
Once this matrix is set, it is consulted while the view is drawn: the canvas is transformed according to the matrix so that the desired result is rendered.
That wraps up the flow walkthrough. The next article will continue with the analysis of a specific implementation; comments and exchanges are welcome.
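The "post" prefix just means the new transform is composed on top of the current matrix. The tiny affine sketch below is my own stand-in for android.graphics.Matrix (not the real class), showing how postScale followed by postTranslate combine when mapping a point.

```java
// Minimal 2D affine matrix mimicking the postScale/postTranslate
// behavior of android.graphics.Matrix (illustration only).
class MiniMatrix {
    // row-major 3x3; starts as identity
    final float[] m = {1, 0, 0, 0, 1, 0, 0, 0, 1};

    void postScale(float sx, float sy) {
        // "post" = left-multiply by the scale matrix:
        // scales both the linear part and any existing translation
        for (int c = 0; c < 3; c++) { m[c] *= sx; m[3 + c] *= sy; }
    }

    void postTranslate(float dx, float dy) {
        m[2] += dx;
        m[5] += dy;
    }

    float[] mapPoint(float x, float y) {
        return new float[]{m[0] * x + m[1] * y + m[2],
                           m[3] * x + m[4] * y + m[5]};
    }
}
```

Scaling by 2 and then translating by (10, 0) maps the point (1, 1) to (12, 2): the scale is applied first, then the shift, which is exactly the order setupInitialImagePosition relies on.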

