An Introduction to Algorithms: Slope Optimization


"Example Portal: BZOJ1010" BZOJ1010: [HNOI2008] Toy packing toy

"Test Instructions"
Given n line segments arranged in a row, where segment i has length x[i], we may group several consecutive segments into one combination. Whenever two segments are joined inside a combination, a spacer of length 1 must be inserted between them (a segment on its own needs no spacer). So if we put segments i through j into one combination, its total length is x[i]+x[i+1]+x[i+2]+...+x[j]+(j-i). Given a constant L, a combination of total length s costs (s-L)^2. Find the minimum total cost of dividing all n segments into combinations; a single segment may form a combination by itself.
"Input File"
The first line contains two integers, N and L.
The next N numbers are X[i], the lengths of the segments, given in order of index.
1 <= N <= 50000, 1 <= L, X[i] <= 10^7
"Output File"
An integer that is the minimum value of the total cost.
"Sample Input"
5 4
3 4 2 1 4
"Sample Output"
1
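For example, one grouping that achieves this cost is {3}, {4}, {2, 1}, {4}: the group {2, 1} has total length 2+1+1 = 4, so the total cost is (3-4)^2 + (4-4)^2 + (4-4)^2 + (4-4)^2 = 1, which matches the sample output.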

Algorithm Analysis:

Slope optimization is a technique for speeding up DP, but it can only be applied when the decisions of the DP recurrence are monotonic.

Below, we use this problem to illustrate slope optimization.

F[i] denotes the minimum cost of packing segments 1..i.
F[i] = min{ F[j] + (sum[i] - sum[j] + i - (j+1) - L)^2 }  (j < i)
F[i] = min{ F[j] + (sum[i] + i - sum[j] - j - 1 - L)^2 }  (j < i)
Let S[i] = sum[i] + i and replace L by L + 1.
Then F[i] = min{ F[j] + (S[i] - S[j] - L)^2 }  (j < i)
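Before optimizing, here is a minimal sketch of this recurrence computed directly in O(n^2) (variable names are illustrative, not from the original; it reads the input format described above, and is of course too slow for n = 50000, which is what the slope optimization below fixes):

#include <cstdio>

long long s[51000], f[51000];

int main() {
    int n;
    long long L;
    scanf("%d %lld", &n, &L);
    L += 1;                               // fold the "+1" into the constant: L := L + 1
    for (int i = 1; i <= n; i++) {
        long long x;
        scanf("%lld", &x);
        s[i] = s[i - 1] + x + 1;          // S[i] = sum[i] + i
    }
    for (int i = 1; i <= n; i++) {
        f[i] = -1;                        // not computed yet
        for (int j = 0; j < i; j++) {     // try every split point j
            long long d = s[i] - s[j] - L;
            long long cost = f[j] + d * d;
            if (f[i] < 0 || cost < f[i]) f[i] = cost;
        }
    }
    printf("%lld\n", f[n]);               // minimum total cost for segments 1..n
    return 0;
}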

First, let us prove the monotonicity of the decisions.
Suppose j1 < j2 < i, and that at state i decision j2 is at least as good as decision j1 (we want to see whether j1 can be discarded for good),
that is: F[j2] + (S[i] - S[j2] - L)^2 <= F[j1] + (S[i] - S[j1] - L)^2.
Is j2 then at least as good as j1 for every state t after i? (This is what proving the monotonicity of the decisions means.)
That is, we want: F[j2] + (S[t] - S[j2] - L)^2 <= F[j1] + (S[t] - S[j1] - L)^2.
Since S is increasing, we can write S[t] = S[i] + v with v >= 0.
So the target inequality (1) is: F[j2] + (S[i] - S[j2] - L + v)^2 <= F[j1] + (S[i] - S[j1] - L + v)^2.
We already know inequality (2): F[j2] + (S[i] - S[j2] - L)^2 <= F[j1] + (S[i] - S[j1] - L)^2.
Expanding (1), treating S[i] - S[j2] - L (respectively S[i] - S[j1] - L) and v as single terms, gives:
F[j2] + (S[i] - S[j2] - L)^2 + 2*v*(S[i] - S[j2] - L) + v^2 <= F[j1] + (S[i] - S[j1] - L)^2 + 2*v*(S[i] - S[j1] - L) + v^2

Comparing with inequality (2):
the extra term on the left is 2*v*(S[i] - S[j2] - L) + v^2,
the extra term on the right is 2*v*(S[i] - S[j1] - L) + v^2.
So it suffices to show:
2*v*(S[i] - S[j2] - L) + v^2 <= 2*v*(S[i] - S[j1] - L) + v^2,
i.e. (since v >= 0): S[i] - S[j2] - L <= S[i] - S[j1] - L,
i.e. -S[j2] <= -S[j1],
i.e. S[j1] <= S[j2], which certainly holds since S is increasing. This completes the proof.
Summary: if, for the current i, j2 is at least as good as j1, then for every later state t (i < t) j2 is still at least as good as j1.
So once we prefer j2 over j1 at state i, no later state will ever need j1 again, and j1 can be discarded permanently.
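In code, the condition being checked above is just a direct cost comparison; a minimal sketch (the function name is illustrative; f, s, L are the DP array, the shifted prefix sums, and the shifted constant from the recurrence):

// Returns true if decision j2 is at least as good as decision j1 for state i.
// By the monotonicity argument above, this then holds for every state after i,
// so j1 can be discarded permanently.
bool dominatesFromNowOn(long long f[], long long s[], long long L, int j1, int j2, int i) {
    long long d1 = s[i] - s[j1] - L;
    long long d2 = s[i] - s[j2] - L;
    return f[j2] + d2 * d2 <= f[j1] + d1 * d1;
}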

Next we derive the slope condition.
From F[j2] + (S[i] - S[j2] - L)^2 <= F[j1] + (S[i] - S[j1] - L)^2,
expanding:
F[j2] + (S[i]-L)^2 - 2*(S[i]-L)*S[j2] + S[j2]^2 <= F[j1] + (S[i]-L)^2 - 2*(S[i]-L)*S[j1] + S[j1]^2
i.e. F[j2] - 2*(S[i]-L)*S[j2] + S[j2]^2 <= F[j1] - 2*(S[i]-L)*S[j1] + S[j1]^2
i.e. F[j2] + S[j2]^2 - 2*(S[i]-L)*S[j2] <= F[j1] + S[j1]^2 - 2*(S[i]-L)*S[j1]
i.e. (F[j2] + S[j2]^2) - (F[j1] + S[j1]^2) <= 2*(S[i]-L)*S[j2] - 2*(S[i]-L)*S[j1]
i.e. [(F[j2] + S[j2]^2) - (F[j1] + S[j1]^2)] / (S[j2] - S[j1]) <= 2*(S[i]-L)   (dividing by S[j2] - S[j1] > 0)
For each decision j, construct a point with coordinates:
y = F[j] + S[j]^2
x = S[j]
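A minimal sketch of these coordinates and the resulting slope test in code (the names X, Y, slope, and the check function are illustrative; f and s are the DP array and shifted prefix sums, and L is already the shifted constant, i.e. the original L plus 1):

long long f[51000], s[51000], L;   // L here already includes the +1 shift

// Point for decision j: x = S[j], y = F[j] + S[j]^2.
double X(int j) { return (double)s[j]; }
double Y(int j) { return (double)f[j] + (double)s[j] * s[j]; }

// Slope between the points of decisions j1 < j2.
double slope(int j1, int j2) { return (Y(j2) - Y(j1)) / (X(j2) - X(j1)); }

// Decision j2 (> j1) is at least as good as j1 for state i
// exactly when the slope is at most 2*(S[i] - L).
bool j2AtLeastAsGood(int j1, int j2, int i) {
    return slope(j1, j2) <= 2.0 * (double)(s[i] - L);
}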
We keep the useful decision points in a queue (list). The slopes between adjacent points in the queue are increasing, so the points in the queue form a lower convex hull. For state i, we pop points from the head while the slope between the first two points is <= 2*(S[i]-L); after that, the head of the queue is the optimal decision for i.
When inserting decision i, let the tail of the queue be list[tail] and the element before it be list[tail-1].

Let Slop(p1, p2) denote the slope between the points of decisions p1 and p2.
While Slop(list[tail-1], list[tail]) > Slop(list[tail], i), the point list[tail] (the middle one of list[tail-1], list[tail], i) can never be the optimal decision for any future state, so pop it: tail--.
Once Slop(list[tail-1], list[tail]) < Slop(list[tail], i), the adjacent slopes in the queue are increasing again, so push i: list[++tail] = i.

Then F[n] is the answer.

Reference Code:
#include <cstdio>
#include <cstring>
using namespace std;

long long f[51000], q[51000], s[51000];
int n, l;

// This code folds the constant into the point's y-coordinate:
// X(i) = 2*s[i], Y(i) = f[i] + (s[i]+l)^2, so the head-pop test
// "slope <= 2*(s[i]-l)" from the derivation becomes "Slop <= s[i]".
double X(int i) { return 2.0 * s[i]; }
double Y(int i) { return (double)f[i] + (double)(s[i] + l) * (s[i] + l); }
double Slop(int i, int j) { return (Y(j) - Y(i)) / (X(j) - X(i)); }

int main()
{
    scanf("%d%d", &n, &l);
    l++;                         // l = L + 1
    s[0] = 0;
    for (int i = 1; i <= n; i++)
    {
        int x;
        scanf("%d", &x);
        x++;
        s[i] = s[i - 1] + x;     // s[i] = sum[i] + i
    }
    int head = 1, tail = 1;
    q[1] = 0;                    // decision j = 0 is initially available
    for (int i = 1; i <= n; i++)
    {
        // Pop obsolete decisions from the head; the head is then optimal for i.
        while (head < tail && Slop(q[head], q[head + 1]) <= s[i]) head++;
        int j = q[head];
        f[i] = f[j] + (s[i] - s[j] - l) * (s[i] - s[j] - l);
        // Maintain increasing adjacent slopes (lower convex hull) before pushing i.
        while (head < tail && Slop(q[tail - 1], q[tail]) > Slop(q[tail], i)) tail--;
        q[++tail] = i;
    }
    printf("%lld\n", f[n]);
    return 0;
}
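For reference, compiling the code above with a standard C++ compiler (e.g. g++) and feeding it the sample input (5 4 on the first line, then 3 4 2 1 4) should print 1.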
