This problem, bjfu1099 (Degree Bear vs. Zombies), is also from the 2011 Baidu Star contest.
This problem gave me a lot of trouble; even before I started typing the code I had no confidence it would pass.
My approach is simply to construct, in chronological order, the zombie blood amounts that can be killed, and take the K-th smallest. The construction itself is brute force: at time T, the new values contributed by weapon i are A[i] + T*B[i] added on top of each amount constructed at earlier times. The key to passing, then, is to keep pruning and optimizing this process.
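To make the construction concrete before the optimizations, here is a deliberately brute-force sketch; the function name, return convention, and the snapshot set are my own illustration, not the original code.

#include <set>
using namespace std;

// Unoptimized sketch: at each time t, weapon i contributes a[i] + t*b[i],
// added on top of every value built at earlier times; the answer is the
// K-th smallest nonzero value. The bound t <= k is justified below.
int kth_smallest(int n, int k, const int a[], const int b[]) {
    set<int> vals;
    vals.insert(0);                      // before any kill, the total is 0
    for (int t = 1; t <= k; t++) {
        set<int> snapshot(vals);         // values from earlier times only
        for (int i = 0; i < n; i++) {
            int add = a[i] + t * b[i];   // weapon i's value at time t
            for (set<int>::iterator it = snapshot.begin();
                 it != snapshot.end(); ++it) {
                vals.insert(add + *it);
            }
        }
    }
    set<int>::iterator it = vals.begin();
    for (int j = 0; j < k; j++) ++it;    // skip 0 and the first k-1 values
    return *it;                          // assumes at least k+1 values exist
}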
The first optimization is that T only needs to run from 1 to K, never further; this is easy to see, since producing the K-th smallest value takes at most K kills and hence at most K time steps.
Second, consider how to store the blood amounts already produced. I used a set to hold them all, and on each loop iteration exported the set's contents into an array. Inserting into and querying the set each cost O(log n), and exporting to the array is linear in the number of elements copied. One optimization can be added here: each time, export only the first K+1 elements to the array (index 0 holds the initial 0, so index K is the K-th smallest constructed value).
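A minimal sketch of just this export step, under the same conventions as the full code below (the helper name export_first is mine):

#include <set>
using namespace std;

// Copy at most the first K+1 values of the set into buf. Anything beyond
// index K can never become the K-th smallest, so it is skipped.
int export_first(const set<int>& S, int K, int buf[]) {
    int cnt = 0;
    for (set<int>::const_iterator it = S.begin();
         it != S.end() && cnt <= K; ++it) {
        buf[cnt++] = *it;
    }
    return cnt;  // number of values copied, at most K+1
}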
The third optimization: if at time t every a[i] + b[i]*t is already larger than the current K-th result, the search can end, as in the sketch below.
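The check could look like the following sketch (the helper can_stop is hypothetical; the real code below tracks the same condition with a flag, and I assume b[i] >= 0 so the values only grow with t):

// If every weapon's base value at time t already exceeds the current K-th
// candidate, then every sum it can form does too, now and at all later times.
bool can_stop(int n, int t, const int a[], const int b[], int kth_candidate) {
    for (int i = 0; i < n; i++) {
        if (a[i] + b[i] * t <= kth_candidate) {
            return false;  // weapon i can still produce a useful value
        }
    }
    return true;
}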
After all this, I tested a few data sets and got results quickly, but the submission still timed out. So I constructed a few pathological data sets, ran them, and found the bottleneck was still the set: on large inputs, most of the values generated at later times have already appeared earlier, so checking membership with a hash table instead drops that test from O(log n) to O(1). Since this was the bottleneck, the effect was obvious: sure enough, it passed after I added the hash.
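Isolated from the full program, the deduplication table is just a fixed-size open-addressing hash with linear probing; the array names here mirror the ones used below.

#include <cstdio>

const int MAXH = 1000007;   // prime table size, far larger than K inserts
bool used[MAXH];
int  hval[MAXH];

void hash_insert(int num) {
    int k = num % MAXH;
    while (used[k] && hval[k] != num) k = (k + 1) % MAXH;  // probe linearly
    if (!used[k]) { used[k] = true; hval[k] = num; }
}

bool hash_find(int num) {
    int k = num % MAXH;
    while (used[k] && hval[k] != num) k = (k + 1) % MAXH;
    return used[k];
}

int main() {
    hash_insert(42);
    printf("%d %d\n", hash_find(42), hash_find(7));  // prints: 1 0
    return 0;
}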
The code is as follows:
/*
 * bjfu1099
 * Author: ben
 */
#include <cstdio>
#include <cstring>
#include <set>
using namespace std;

typedef long long LL;
const int MAXN = 110;      // assumed upper bound on N
const int MAXH = 1000007;  // hash table size (prime)

int N, K;
int A[MAXN], B[MAXN];      // A[i] is updated in place to hold a[i] + t*b[i]
int buf[51000];            // sorted snapshot of the smallest values so far
set<int> S;                // all distinct blood values constructed so far
bool Hash[MAXH];           // open-addressing hash table, linear probing
int hval[MAXH];

void hash_insert(int num) {
    int k = num % MAXH;
    while (Hash[k] && hval[k] != num) {
        k = (k + 1) % MAXH;
    }
    if (!Hash[k]) {
        Hash[k] = true;
        hval[k] = num;
    }
}

bool hash_find(int num) {
    int k = num % MAXH;
    while (Hash[k] && hval[k] != num) {
        k = (k + 1) % MAXH;
    }
    return Hash[k];
}

int work() {
    bool hasnew = true;
    S.insert(0);
    hash_insert(0);
    int offset, cnt = 0;
    for (int t = 1; t <= K; t++) {
        if (hasnew) {
            // Optimization 2: re-export the set into buf, keeping only
            // the first K+1 elements (buf[0] is the initial 0).
            cnt = 0;
            set<int>::iterator it = S.begin();
            while (it != S.end() && cnt <= K) {
                buf[cnt++] = *(it++);
            }
            hasnew = false;
        }
        // Current K-th candidate; anything >= offset can be pruned.
        offset = (cnt > K) ? buf[K] : 0x7fffffff;
        bool flag = false;
        for (int i = 0; i < N; i++) {
            A[i] += B[i];              // A[i] now equals a[i] + t*b[i]
            if (A[i] > offset) {
                continue;
            }
            for (int j = 0; j < cnt; j++) {
                int p = A[i] + buf[j];
                if (p < offset) {
                    flag = true;
                    // Most candidates repeat on large inputs; the O(1) hash
                    // lookup replaces the set's O(log n) membership test.
                    if (!hash_find(p)) {
                        hasnew = true;
                        S.insert(p);
                        hash_insert(p);
                    }
                } else {
                    break;             // buf is sorted: larger j only grow
                }
            }
        }
        if (!flag) {
            break;  // Optimization 3: nothing beat offset at time t, stop.
        }
    }
    if (hasnew) {
        // Refresh buf in case the final round inserted new small values.
        cnt = 0;
        for (set<int>::iterator it = S.begin();
             it != S.end() && cnt <= K; ++it) {
            buf[cnt++] = *it;
        }
    }
    return buf[K];
}

int main() {
#ifdef ON_LOCAL_DEBUG
    freopen("data.in", "r", stdin);
#endif
    memset(Hash, false, sizeof(Hash));
    scanf("%d%d", &N, &K);
    for (int i = 0; i < N; i++) {
        scanf("%d%d", &A[i], &B[i]);
    }
    printf("%d\n", work());
    return 0;
}