
Global Round 19 Editorial

 1 year ago
source link: https://codeforces.com/blog/entry/99883

We hope you enjoyed the contest! We recommend reading all the tutorials even if you solved the problems; you may learn something new.

1637A - Sorting Parts
Idea: __JustMe__.

Hint 1
Hint 2
Tutorial
Solution

1637B - MEX and Array
Idea: __JustMe__ and Mangooste.

Hint 1
Hint 2
Tutorial
Solution

1637C - Andrew and Stones
Idea: TeaTime.

Hint 1
Hint 2
Tutorial
Solution

1637D - Yet Another Minimization Problem
Idea: Mangooste.

Hint 1
Hint 2
Hint 3
Tutorial
Solution

1637E - Best Pair
Idea: Mangooste.

Hint 1
Hint 2
Tutorial
Solution

1637F - Towers
Idea: TeaTime.

Hint 1
Hint 2
Tutorial
Solution
Alternative solution

1637G - Birthday
Idea: EvgeniyPonasenkov.

Hint 1
Hint 2
Hint 3
Tutorial
Solution

1637H - Minimize Inversions Number
Idea: Mangooste.

Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Solution

10 months ago, # |

Thanks for the great round and complete editorial :)

10 months ago, # |

Thanks to you for participating!

  • 10 months ago, # ^ |

    He didn't participate :-{

10 months ago, # |

My approach for E — there are O(sqrt(n)) distinct values of cnt. Then fix x, iterate over each distinct count, and try the values y that have those respective counts.

  • 10 months ago, # ^ |

    I don't understand this. In the case where every value has count 1, are there not N distinct values of count? In that case, wouldn't your solution TLE?

    • 10 months ago, # ^ |

      No because there are at most M bad pairs, so it runs in at most O(M).

    • 10 months ago, # ^ |

      If every value had count 1, then there is only one value of count :) i.e. 1

      • 10 months ago, # ^ |

        what's your profile...please tell me

        • 10 months ago, # ^ |

          Which profile?

          • 10 months ago, # ^ |

            your profile picture of this kid...please tell me I've seen him on numerous profiles

      • 10 months ago, # ^ |

        Can you explain the time complexity of problem E?

  • 10 months ago, # ^ |

    So for each x that occurs cnt_x times, you want to find the largest y among the values with count cnt_y <= cnt_x such that (x, y) is not a bad pair?

    • 10 months ago, # ^ |

      Yes, I find the largest y that is not in the banned list, with cnt_y <= cnt_x.

      • 10 months ago, # ^ |

        I see. Thanks a lot. Got it accepted. I think I missed the observation that cnt will only have at most O(sqrt(n)) distinct values.

  • 10 months ago, # ^ |

    Currently, your code gets MLE at test 77 (146344541). Is your approach wrong, or did you just not implement it neatly?

    • 10 months ago, # ^ |

      I think they have added more test cases; that's why.

  • 10 months ago, # ^ |

    Say there are n distinct values in array a, so the only value cnt_x and cnt_y can take is 1. But for every x we will iterate over every other y; then is it not O(n^2)?

    • 8 months ago, # ^ |

      No, because it won't iterate over every other y: the enumeration stops immediately after you hit a pair that's not bad.
      Therefore, for every x, you will iterate over no more than 1 + (the number of bad pairs that contain x) candidates. So that's O(n + m) in summary.

10 months ago, # |

I managed to upsolve D with annealing because my initial submit got unlucky and FST'd: 146160125

10 months ago, # |

For the test case "3 / 2 1 2" (n = 3, a = [2, 1, 2]): why is the answer YES and not NO?

Can't we choose len = 2? The array would become [1, 2, 2], which is sorted, so the answer should be NO.

  • 10 months ago, # ^ |

    But for len = 1, after performing the operation the array will not be sorted, so the answer is YES: we only have to find one len for which the array ends up not sorted.

10 months ago, # |

UPD: Uphacked :)

Btw, there's still some kind of monotonicity: without caring about bad pairs, if we convert the positive side into a stair-shaped sequence as well, and denote by opt_i the optimal j for a given i, then opt_i <= opt_{i+1}, which allows us to use divide and conquer optimization. Unfortunately, I don't know how to extend this solution to when we have bad pairs.

  • 10 months ago, # ^ |

    Did you use a similar approach?

  • 10 months ago, # ^ |

    In such a sequence, the area of the rectangle for a fixed i increases until a certain point and decreases afterward, meaning that we can do a binary search to find the optimal j.

    Unfortunately, this claim is false. Here is a test case that makes your solution fail.

    • 10 months ago, # ^ |

      I wasn't really sure about it and got convinced once it got AC. Thank you for the counter-test.

10 months ago, # |

I understand continuation 1 of D editorial, but continuation 2 seems unnatural to me. Could anyone who solved it like that share their thought process? Maybe it can help.

  • 10 months ago, # ^ |

    We wish to enumerate all possible sums of the array A, as we can calculate the minimal cost for each achievable sum, and we use the fact that when we switch the two elements at an index, the total sum of both arrays is invariant. Therefore the sum of B can be represented as S minus the sum of A, where S is the total sum of both arrays. Therefore, to minimize the cost, it suffices to minimize sum(A)^2 + sum(B)^2.

    Let s = sum(A); then we wish to minimize s^2 + (S - s)^2.

    Now knapsack comes in. We notice that the minimal sum of A occurs when you place the smaller element of each pair in A (note that this does not necessarily minimize the cost). Let's start here. Now, let's ask: which elements can we switch to minimize the cost? (So we are starting at the smallest A can possibly be.)

    We can imagine this as iterating from left to right. At each index i, we can either choose to switch, or continue on. If we switch, we are adding max(a_i, b_i) - min(a_i, b_i) to our minimum sum. We can now recalculate the minimal cost upon performing this switch.

    Now a question that might be asked is: if this is knapsack, what should our starting "sack" (weight) be? Since choosing A or B to contain the minimal sum is arbitrary (I could just as well choose B), it makes no sense to increase the sum of A past S/2, since we are choosing A to start with our minimal sum. Therefore, the sack should only be of size S/2 - s_min, where s_min is the minimal sum of A (our starting sum is s_min). If the sum of A is higher than S/2, then the sum of B must be less than it. We don't need to consider that, because it's the same as choosing B to be the minimal sum and performing the same operations.

    In essence, we are just asking which changes we have to make to ensure minimal cost. This is a DP problem with a take/skip recurrence over (index, remaining sack weight).

    I apologize for the severe volume of parentheses. The first term in the recurrence is just the minimal cost at that state, computed from the weight we have decided to add to our minimal sum so far. Why track the added weight and not just the remaining sack? Because each time we switch, we subtract the added weight from our "sack", and to recompute the cost we need the actual total weight we have accumulated!

    The second term in the recurrence is the 'switch' term. This is where we decide to take the item and switch it from B to A. The third term is the 'continue' term. We don't take this item and continue on in hopes of a better deal.

    After all is said and done, our promised value lies in the state starting at index 0 with the full sack, which allows the sum of A to lie between s_min and S/2.

    Hope this helped!

    • 10 months ago, # ^ |

      This helped a lot thanks!
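
A minimal sketch of the knapsack described above, written bottom-up over reachable sums rather than as a top-down recurrence (variable names are mine, not from the comment; it assumes the usual constraints of 1637D, n <= 100 and values <= 100):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int t;
        cin >> t;
        while (t--) {
            int n;
            cin >> n;
            vector<long long> a(n), b(n);
            for (auto &x : a) cin >> x;
            for (auto &x : b) cin >> x;
            long long S = 0, q = 0, smin = 0;
            vector<int> d(n); // extra weight gained by swapping at index i
            for (int i = 0; i < n; i++) {
                S += a[i] + b[i];
                q += a[i] * a[i] + b[i] * b[i]; // invariant part of the cost
                smin += min(a[i], b[i]);        // start with the smaller element in A
                d[i] = (int)(max(a[i], b[i]) - min(a[i], b[i]));
            }
            // reach[s] == true iff sum(A) == smin + s is achievable; by symmetry
            // we never need sum(A) to exceed S/2 (the "sack" size argument above).
            int cap = (int)(S / 2 - smin);
            vector<char> reach(cap + 1, 0);
            reach[0] = 1;
            for (int i = 0; i < n; i++)
                for (int s = cap; s >= d[i]; s--)
                    reach[s] |= reach[s - d[i]];
            long long best = LLONG_MAX;
            for (int s = 0; s <= cap; s++)
                if (reach[s]) {
                    long long sa = smin + s, sb = S - sa;
                    // sum_{i<j} a_i a_j + sum_{i<j} b_i b_j = (sa^2 + sb^2 - q) / 2
                    best = min(best, (sa * sa + sb * sb - q) / 2);
                }
            cout << best << '\n';
        }
    }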

10 months ago, # |

Can someone hack my solution for D, or estimate the probability that it passes under the problem constraints? 146133303

It is not the intended solution, but a randomized algorithm. The general idea is to (while possible) swap (a_i with b_i) or (a_i with b_i and a_j with b_j) if that improves the score. Do that 5000 times and take the minimum score over all trials.

Wrote it as I did not come up with anything better...

  • 10 months ago, # ^ |

    I probably have a hack for your solution, with a_i = 1, b_i = a_i + (43 if i < 47 else 47) for i in [0, 43+47). But I cannot find the link to hack your solution. Note that 43 and 47 are primes and their sum is < 100.

    • 10 months ago, # ^ |

      Life is hard for Blue.

      • 10 months ago, # ^ |

        Hacked for you, buddy.

    • 10 months ago, # ^ |

      Good job :) You could be a good tester, though.

10 months ago, # |

1637E - Best Pair: we go over all different values — O(n) — check each different cnt — O(sqrt(n)) — and find the first non-bad pair — O(m). Why isn't the complexity O(n * sqrt(n) + m * log(m))?

  • 10 months ago, # ^ |

    The first two loops are just a single loop over all possible x; their number is up to n. The third loop is over cnt_y, and the last one over y adds O(m log m) in total. The third loop is the hardest to estimate. Of course it's capped by sqrt(n), but some wild magic happens.

    It's crucial that cnt_y <= cnt_x, because otherwise here is a counter-test: 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, ..., K, K+1, K+2, K+3, ... (after the stair there is half of the array with cnt = 1). Then, if you loop over cnt_y >= cnt_x, at first cnt_x = 1; the number of such x is N/2, and for each one of them you run the whole loop over cnt and get O(n sqrt(n)) in total. But if you loop over cnt_y <= cnt_x as in the editorial, its iterations are capped by O(cnt_x) (because cnt_y <= cnt_x), and once we enter the loop over y we can say its effect is included in O(m log m). Thus, for a particular x (the first two loops) we run in the worst case O(cnt_x). If you sum cnt_x over all distinct x, you get n. Therefore all four loops in total are O(n + m log m).

    • 10 months ago, # ^ |

      The fastest (python) solutions (e.g. http://codeforces.com/contest/1637/submission/146546953) show clearly that in reality the optimization cnt_y <= cnt_x just cuts the (worst-case) time in half (due to symmetry), as we have "loop cnt_x { loop cnt_y <= cnt_x }"; the complexity is O(n * sqrt(n) + m), as @MAKMED1337 says, and due to the simple ops in the loops this passes for n = 3*10^5, even in Python.

      • 10 months ago, # ^ |

        The submission you're linking does cnt_y <= cnt_x, as in the editorial. If you change the single loop to do the opposite, you'll get TL: 146727837. And if you're a little smarter and cap it with maxfreq, you'll get TL on my test: 146728261. How do I know it's my test? Well, because of this: 146145570.

      • 10 months ago, # ^ |

        In other words, comparing the 2 ways to loop through the sets: (a) two loops of the form loop c: all_cnts { loop x: bucket(c) } traverse exactly N elements, but I don't see at the moment a proof that (b) loop x: all_x { loop c: all_cnts <= cnt[x] } is bounded by N, or by something that scales as O(N).

        In simple random experiments, (b) leads to a number of traversals bounded by ~10*N; perhaps somebody knows the theory behind this.

        [@r57shell's argument is a good argument against the specific counter-test.]

        Here: all_cnts — array of all cnt[x] over all x; all_x — all unique x's; bucket(c) — list of all x's whose count is == c.

        • 10 months ago, # ^ |

          I don't understand what you're saying. In short, a loop over x with cnt_y <= cnt_x inside is proven to have O(n) time complexity (proof in the editorial, proof in my comment, and proof here). If you loop over x and then take cnt_y >= cnt_x, you get O(n sqrt(n)); C++ may pass, but Python definitely won't. A test case where it reaches O(n sqrt(n)) magnitude is explained in my comments above, and I even linked the test generator in the hacked solution.

  • 8 months ago, # ^ |

    Have you understood it? I am also confused about it.

    • 8 months ago, # ^ |

      We take an element x and only go over cnt[y] <= cnt[x]; the number of such counts is <= cnt[x]. So the number of pairs is <= cnt[a1] + cnt[a2] + cnt[a3] + ... (over distinct a[i]) <= n.

      • 8 months ago, # ^ |

        Sorry I missed that the x is distinct.
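
A compact sketch of the loop order this thread is discussing, in the spirit of the editorial (all names are my own, and it assumes at least one valid pair exists):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int t;
        cin >> t;
        while (t--) {
            int n, m;
            cin >> n >> m;
            map<long long, int> cnt;
            for (int i = 0; i < n; i++) { long long v; cin >> v; cnt[v]++; }
            map<int, vector<long long>> occ; // occ[c] = distinct values with count c, increasing
            for (auto [v, c] : cnt) occ[c].push_back(v);
            map<long long, set<long long>> bad; // bad[x] = partners of x in bad pairs
            for (int i = 0; i < m; i++) {
                long long x, y;
                cin >> x >> y;
                bad[x].insert(y);
                bad[y].insert(x);
            }
            long long best = 0;
            for (auto &[cx, xs] : occ)              // every distinct count, ascending
                for (long long x : xs)              // every distinct x: O(n) in total
                    for (auto &[cy, ys] : occ) {
                        if (cy > cx) break;         // only cnt_y <= cnt_x
                        for (int j = (int)ys.size() - 1; j >= 0; j--) {
                            long long y = ys[j];
                            if (y == x) continue;
                            if (bad[x].count(y)) continue; // each skip is one bad pair
                            best = max(best, (long long)(cx + cy) * (x + y));
                            break;                  // largest non-banned y found
                        }
                    }
            cout << best << '\n';
        }
    }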

10 months ago, # |

Is there a way to solve Problem D in ?

10 months ago, # |

Hi, can anyone tell me why my solution for D does not get MLE or TLE, given that I have passed 3 parameters to the recursion? I know I have memoized the solution, but the upper bound on memory in my solution can still be (10^4)*(10^4)*(100). My solution: 146179358

  • 10 months ago, # ^ |

    The sumB state in your dp is kind of redundant, because for a given sumA there will be only a single sumB.

    • 10 months ago, # ^ |

      Thank you, I get it now.

10 months ago, # |

std::bitest should be std::bitset

  • 10 months ago, # ^ |

    and "backpack problem" should be "knapsack problem" (at least, that is what we call it in the US, I think; let me know if it is different)

    • 10 months ago, # ^ |

      Yes, they are the same.

10 months ago, # |

In problem E, I found that if you only iterate over non-empty vectors (you can use an array to find them) and modify your code like this:

    for (int cnt_x = 1; cnt_x <= tot; cnt_x++)
        for (int x : occ[index[cnt_x]])
            for (int cnt_y = 1; cnt_y <= tot; cnt_y++)

its complexity becomes O(n * sqrt(n)), but it still passed every single test.

I tried to hack my code with a strong test (1~700 occur 1~700 times, and about 100000 random numbers occur 1 time), but Codeforces returned "unexpected verdict". I guess the testers wrote the code I showed and got TLE too. Can you help me find out the reason for this unexpected verdict?

  • 10 months ago, # ^ |

    I fixed it yesterday but forgot to tell you about it. So now it works and your hacks are rejudged.

10 months ago, # |

VIDEO EDITORIAL FOR PROBLEM C : VIDEO_LINK
VIDEO EDITORIAL FOR PROBLEM B : VIDEO_LINK

10 months ago, # |

I don't know if this is a fact or something very obvious, but I don't understand why iterating over all x and cnt_y <= cnt_x works in O(n) in Problem E. Shouldn't this be O(n√n)? Also, I don't understand how only iterating over cnt_y <= cnt_x is any better than iterating over all cnt_y.

  • 10 months ago, # ^ |

    because the sum of cnt_x over all distinct x is n, and the pairs are unordered, which means we can simply iterate over cnt_y <= cnt_x for better complexity

    • 10 months ago, # ^ |

      I understand the first part, that the total count is N, but for every value we need to consider cnt_y as well, right? So N times we will consider x, and for every x we will consider √N cnt_y values?

      • 10 months ago, # ^ |

        For each distinct value x, we need to consider every cnt_y <= cnt_x. There are at most cnt_x such cnt_y, so in total the complexity is the sum of cnt_x over all distinct x, which is O(n).

        It doesn't matter how many distinct cnt_y there are, because we only need to consider cnt_y <= cnt_x, and there are at most cnt_x of those.

        • 10 months ago, # ^ |

          Oh damn! Sorry I missed the distinct part. Thanks a lot for clarifying.

          • 8 months ago, # ^ |

            I didn't realize my fault until I read this comment! Thanks a lot.

10 months ago, # |

For problem F, if we enumerate one of the nodes with the biggest height, then the contribution of node i (where i is not the chosen biggest node) is max(0, h[i] - (the maximum h over all nodes except i itself and the nodes in the subtree of the biggest node when the tree is rooted at i)). We first fix the root of the tree; then my solution calculates up[i] and down[i], meaning: if the biggest node is in the subtree of i, what the contribution for father[i] will be, and if the biggest node is not in the subtree of i, what the contribution for i will be. Then we can easily calculate the answer.

  • 10 months ago, # ^ |

    By the way, the discriminating power of this round is not good; maybe the reason is that the thinking difficulty of problems E and F is insufficient.

    • 10 months ago, # ^ |

      To be honest, I can't agree more.

10 months ago, # |

problem E: weak pretests — no number occurs more than a few times (maybe the tests are just random). I wrote my sort comparator wrongly and passed all the pretests, but failed the system tests later :(

I replaced this comparator:

sort(adj[idx].begin(),adj[idx].end(),[&](int x,int y){
    return diff[cnt[x]]!=diff[cnt[y]]?diff[cnt[x]]<diff[cnt[y]]:x>y;
});

with this one:

sort(adj[idx].begin(),adj[idx].end(),[&](int x,int y){
    return cnt[x]!=cnt[y]?cnt[x]<cnt[y]:x>y;
});

and then passed all tests.

10 months ago, # |

Weak main tests for F. Simple memoization passed...

https://codeforces.com/contest/1637/hacks/785940

10 months ago, # |

I have a different solution for D. At first we simplify the cost function.

Notice that the sum of a_i^2 + b_i^2 over all i is invariant under swaps, so we only need to minimize (sum of a)^2 + (sum of b)^2.

Let a' and b' be the final arrays a and b respectively after applying all swaps. Notice that, for any prefix, if we fix its sum in a', then its sum in b' is also fixed, because a'_i + b'_i = a_i + b_i for every i.

Now we can do a dp over prefixes, keyed by the achievable prefix sum of a'.

Let's also store the minimal cost for each such state.

Transitions are simple:

If we do not apply any swap at position i, we extend the state with (a_i, b_i).

If we apply a swap at position i, we extend it with (b_i, a_i).

This dp can be done in O(n * maxSum). Then, the final answer is just the best value over the states of the full array.

Here is my implementation.

  • 10 months ago, # ^ |

    Pretty Good

10 months ago, # |

quite thankful for beautiful problems and fast editorial with hints :)

10 months ago, # |

Can someone please help me understand how we simplify the cost function in problem D?

10 months ago, # |

In problem D, how did you derive the second item of "cost", Σ(a[i]*(s-a[i]))?

  • 10 months ago, # ^ |

    2 * Σ_{i<j} a_i * a_j is the sum of all pair products. For a given a[i], its part of this sum is a[i]*(a[0]+a[1]+..+a[i-1]+a[i+1]+..+a[n-1]) = a[i]*(s-a[i]).
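
In symbols, the expansion being described is:

    2\sum_{i<j} a_i a_j
      = \sum_i \sum_{j \neq i} a_i a_j
      = \sum_i a_i \Big(\sum_j a_j - a_i\Big)
      = \sum_i a_i (s - a_i), \qquad s = \sum_i a_i .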

10 months ago, # |

I was trying to use Go for the first problem and it worked on my local machine, but running the same tests with the Codeforces compiler produced an incorrect initial test case result of YES, YES, YES when it should be YES, YES, NO.

Is there a quirk with using Go that anyone might be able to comment on?

The second of my two submissions is here: https://codeforces.com/contest/1637/submission/146168203

10 months ago, # |

Any greedy approach for problem D?

10 months ago, # |

For B's dp-based approach, are the transitions in the editorial wrong? I got AC with dp[l][r] = max(1 + mex(v[l], v[l+1], ..., v[r]), max over c in [l, r) of (dp[l][c] + dp[c+1][r])).

Can anyone share an O(n^3) dp-based solution for B?

10 months ago, # |

The author's solution for problem C outputs a wrong answer for the test (t = 1, n = 4): 1 5 0 1. The code returns 3, but the shortest way is 1 5 0 1 -> 2 3 1 1 -> 3 1 2 1 -> 3 2 0 2 -> 4 0 0 3, which takes 4 operations. (Update — it's not true.)

  • 10 months ago, # ^ |

    a[i] >= 1 in constraints.

    • 10 months ago, # ^ |

      Yeah, certainly. Thank you. Sorry for this.

10 months ago, # |

In A. Sorting Parts, for 2 1 4 5 3, what will be the output?

  • 10 months ago, # ^ |

    The output must be YES because the array 2 1 4 5 3 isn't sorted, and we can take len = 1 (for example); after the sort operations we will get the array 2 1 3 4 5, which is not sorted.

    • 10 months ago, # ^ |

      We can choose len = 2, and after the operation we get [1, 2, 3, 4, 5] which is sorted. So the answer should be NO, right?

      • 10 months ago, # ^ |

        It said: "Could it be that after performing this operation, the array will not be sorted in non-decreasing order?" It means that if there exists a way to move that makes the array unsorted, the answer is YES.

        • 10 months ago, # ^ |

          Got it, thanks!

        • 10 months ago, # ^ |

          Oh okay, Got it! Thanks!

      • 10 months ago, # ^ |

        If we have AT LEAST one way to choose len such that the array won't be sorted, the answer must be YES. The answer is NO only if there is no len at all for which the array will not be sorted after the operations. Read the description of the problem again.

        • 10 months ago, # ^ |

          Got it, Thanks!

    • 10 months ago, # ^ |

      " if the array may be not sorted in non-decreasing order, output NO " , so does it mean that if there is at least one index position for which after performing the operation if it is not sorted we will print "YES"

10 months ago, # |

For the first problem, consider the array [3, 1, 2, 6, 5, 4]; it is not sorted, but we can choose len = 3 and the array will be sorted after the operation. So the above solution fails for these types of cases, right? Anyone please correct me if I am wrong!

  • 10 months ago, # ^ |

    This was really confusing, but it seems the problem asks us to print "YES" if there is any len which results in an unsorted array. So in this case len = 1, 2, 4, etc. are options where the result is an unsorted array.

    • 10 months ago, # ^ |

      Yeah it was tricky. Got to know, thanks!
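
Summing up this thread: the answer is "NO" exactly when the array is already sorted (a sorted array stays sorted for every len, while for an unsorted array some len keeps it unsorted). A minimal sketch of that check (my code, not the official solution):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int t;
        cin >> t;
        while (t--) {
            int n;
            cin >> n;
            vector<int> a(n);
            for (auto &x : a) cin >> x;
            // Sorted input: both parts are sorted for any len, so it stays sorted.
            cout << (is_sorted(a.begin(), a.end()) ? "NO" : "YES") << '\n';
        }
    }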

10 months ago, # |

Can anybody explain, in problem B's editorial, why they replace every segment of length l > 1 with segments of length 1? I am not getting it.

  • 10 months ago, # ^ |

    Consider the case where there is no 0 in the array, e.g. 1 2. If you take length = 2 and consider one partition [1,2], we get 1 + 0 = 1; but if you divide the segment as [1],[2], we get 2 + 0 + 0 = 2. Hence, we consider the latter.

    • 10 months ago, # ^ |

      ohk thanx

10 months ago, # |

Hi Mangooste!

Thank you for the nice round!

There is a wrong statement in the third-to-last paragraph of Problem H's editorial:

"Note that the number of points to the left and below i equals p_i - 1 - s_i, and the number of points to the left and below i equals (i - 1) - (p_i - 1 - s_i) = i - p_i + s_i. So d_i = (i - p_i + s_i) - (p_i - 1 - s_i) = i - 2*p_i + 2*s_i + 1. So c_i = d_i - 2*s_i = i - 2*p_i + 1."

Here the second quantity should be the number of points to the left and above i.

  • 10 months ago, # ^ |

    Yes, you're right. It will be fixed soon, thank you!

    UPD: fixed.

    • 10 months ago, # ^ |

      Shouldn't the dp solution for B be dp(l, r) = max(1 + mex({a_l, a_{l+1}, ..., a_r}), max over c of (dp(l, c) + dp(c+1, r))) to get the maximum possible cost as required? Confused...

      • 10 months ago, # ^ |

        Fixed. Thanks!

10 months ago, # |

I have an alternate solution for E.

First, let's group all values by their frequency. Let's say that has_ct[f] is a list of all values x such that cnt_x = f, and this list is sorted in descending order.

Fix two particular frequencies; let's call them f1 and f2. What we want now are x and y such that x is in has_ct[f1], y is in has_ct[f2], (x, y) is not bad, and x + y is maximal. We will iterate over all (f1, f2) pairs, and let our final answer be the maximum across all (f1, f2) pairs.

We use the fact that we sorted has_ct[f1] and has_ct[f2] in decreasing order. Draw a 2D grid. For some (i, j), let x = has_ct[f1][i] and y = has_ct[f2][j]. Then, define grid[i][j] = x + y. Because each list is decreasing, note that each horizontal and vertical slice of the grid is also decreasing. Naturally, the maximal value of the grid is attained at the top-left corner, when i = j = 0. But if that (x, y) is bad, then we need to explore the rest of the grid for the next largest value. The shape of the grid tells us that the next largest values are the ones attained by taking one step right, or one step down.

In general, let f(i, j) be "the largest value in the grid that is attainable from (i, j) using only right-down motions". If the corresponding (x, y) is not a bad pair, then f(i, j) = grid[i][j]. If not, then f(i, j) = max(f(i+1, j), f(i, j+1)), which is just like the classic standard grid dp.

We note that this DP is much faster than it looks, because we only explore the grid further if (x, y) is bad. So actually, the combined work of our DP across all (f1, f2) pairs is just O(m).

Finally, note that there are only O(sqrt(n)) different frequency values possible, because the frequencies sum to n. So, iterating over all (f1, f2) pairs does O(n) work.

There are also some miscellaneous log factors scattered about because of how I grouped by frequency, how I identified bad pairs, and the fact that I used a map to handle the DP memoization, but those aren't really too important.

Link to submission: https://codeforces.com/contest/1637/submission/146196644

10 months ago, # |

I made video solutions for A-E in case someone is interested.

10 months ago, # |

For D, is it true that in the optimal final arrays A and B, the sum of elements of A is as close as possible to (sum(A) + sum(B)) / 2?

Kind of similar to "Minimum Subset Difference", but instead of take-or-not-take, we have take a[i] or b[i].

10 months ago, # |

If you are/were getting a WA/RE verdict on any of the problems from this contest, you can get a small counter example for your submission on cfstress.com

Problems added: "A, B, C, D, E, F, G, H".

10 months ago, # |

Hey, can someone please tell me where my code will fail? 146221831

  • 10 months ago, # ^ |

    An example could be the array [1 3 1 1]. Here your code outputs -1, but this is a solvable case:

    [1 3 1 1] --> [2 1 2 1] --> [2 2 0 2] --> [3 0 0 3].

    I changed your code a bit and it passed: 146227037

    • 10 months ago, # ^ |

      I missed that case, thanks bro

10 months ago, # |

Can anybody please explain in problem B how the contribution of zero is i*(n-i+1) ?

  • 10 months ago, # ^ |

    The optimal way to divide a subarray is to divide it into pieces of length 1.
    It was proved in the editorial.
    So each 0 makes a contribution in every subarray of a which contains it.
    Consider a zero at position i.
    All the subarrays which begin in [1, i] and end in [i, n] will contain it.
    So it contributes i * (n - i + 1).

    • 10 months ago, # ^ |

      Yeah ! Thanks a lot :) I got it now
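
Putting the contributions together gives a one-pass formula; a minimal sketch under the claims above (splitting every subarray into length-1 pieces; 1-based positions; my code, not the official solution):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int t;
        cin >> t;
        while (t--) {
            int n;
            cin >> n;
            long long ans = 0;
            for (int i = 1; i <= n; i++) {
                int x;
                cin >> x;
                long long cover = 1LL * i * (n - i + 1); // subarrays containing position i
                ans += cover;                 // "+1 per piece" part of the cost
                if (x == 0) ans += cover;     // a zero piece has mex 1 instead of 0
            }
            cout << ans << '\n';
        }
    }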

10 months ago, # |

In the 'A' question, if the test case is '2 1 3 4', then the answer should be 'NO': we can sort it with len = 3.

10 months ago, # |

Can someone explain the simplification of the cost in Problem D?

  • 10 months ago, # ^ |

    Possible Explanation

10 months ago, # |

Hi, I want to share my solution for D with 1d dp.

Solution
Submission

10 months ago, # |

Thanks a lot for the solutions; they are very good and of great help to me!

10 months ago, # |

Shouldn't the dp solution for B be dp(l, r) = max(1 + mex({a_l, a_{l+1}, ..., a_r}), max over c of (dp(l, c) + dp(c+1, r))) to get the maximum possible cost as required? Confused...

10 months ago, # |

Problem A did not want us to sort the array..I read the question incorrectly lolol!!

10 months ago, # |

Problem D: Can someone please explain to me in more mathematical detail how to get the simplification of the cost, more specifically the following relation?

  • 10 months ago, # ^ |

    Just expand the brackets and you will get it :)

10 months ago, # |

Some solutions for problem E which passed system testing during the contest now give TLE. Will there be another round of system testing after the rating changes?

10 months ago, # |

Did AlphaCode participate in this contest?

10 months ago, # |

What's with test case 79 for E? Older ACs are also getting TLE'd.

10 months ago, # |

Slightly different approach to 1637F - Towers

Hint 1
Hint 2
Hint to Hint 3
Hint 3
Hint to Hint 4
Hint 4
Hint to Hint 5
Hint 5
Hint 6
Solution
  • 10 months ago, # ^ |

    Thanks, this helped me a lot. Nice writing!

10 months ago, # |

In Problem C, do the positions of the elements other than the ends not matter at all? Since the final solution is somehow independent of it.

Can someone also explain a little beyond the editorial? I'm unable to convince myself what is explained above.

  • 10 months ago, # ^ |

    It doesn't matter how the elements are placed (except the first and last element)

    let the array be [1, 2, 3, 1] it is optimal to do the following:

    1. Select (i, j, k) = (1, 2, 3). The array becomes equal to [2, 0, 4, 1].

    2. Select (i, j, k) = (1, 3, 4). The array becomes equal to [3, 0, 2, 2].

    3. Select (i, j, k) = (1, 3, 4). The array becomes equal to [4, 0, 0, 3].

    now let the array be [1, 3, 2, 1] it is optimal to do the following:

    1. Select (i, j, k) = (2, 3, 4). The array becomes equal to [1, 4, 0, 2].

    2. Select (i, j, k) = (1, 2, 4). The array becomes equal to [2, 2, 0, 3].

    3. Select (i, j, k) = (1, 2, 4). The array becomes equal to [3, 0, 0, 4].

    If there is an answer, it's optimal to make odd numbers even first.

    Hope that helps you :)

    • 10 months ago, # ^ |

      Thank you!

10 months ago, # |

I like the approach for problem E! Are there other problems that can be solved with the same technique?

10 months ago, # |

Is it possible to do D in O(n)? If someone has done it can you please describe your approach.

10 months ago, # |

Plagiarism checking hasn't happened yet. When will it be done?

10 months ago, # |

  • 10 months ago, # ^ |

    How Elegia's mind works!

10 months ago, # |

In problem E you can check whether an edge is bad in O(m + n) total: when iterating over x, first mark all of x's bad pairs in a boolean array (and unmark them again before moving to x + 1).

So you need the log factor only for the initial sorting / coordinate compression.
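
A tiny sketch of that marking trick (names are my own):

    #include <bits/stdc++.h>
    using namespace std;

    // For each x, flag its bad partners so "is (x, y) bad?" becomes an O(1)
    // array lookup, then unflag before moving on to x + 1. Every bad pair is
    // flagged and unflagged exactly twice overall, so the extra work is O(n + m).
    void forEachX(int n, const vector<vector<int>>& badList) {
        vector<char> isBad(n + 1, 0);
        for (int x = 1; x <= n; x++) {
            for (int y : badList[x]) isBad[y] = 1;
            // ... the candidate loop for this x goes here, testing isBad[y] ...
            for (int y : badList[x]) isBad[y] = 0;
        }
    }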

10 months ago, # |

The alternative solution for F is very nice. I got up to most of it but couldn’t see that rooting the tree at the max value would deal with all issues regarding how to choose the second endpoint for each path. Hopefully some day I’ll be able to make these types of smart optimizations on my own.

10 months ago, # |

Interesting to come up with heuristics. 146448463 seems to work pretty well, though it's clearly fundamentally wrong. The main idea is to take 150 of the best candidates wrt one criterion and 50 of the best wrt the other.

10 months ago, # |

My solution to G is similar to the editorial but works on sequences with . We split that into an "even" subsequence divisible by and a non-empty "odd" subsequence that's only divisible by . The "even" part is solved recursively, the "odd" part solved by gradually pairing up elements into powers of two and smaller sequences just like in the editorial. When only powers of are left:

  • if all powers smaller than the answer occur only once and there are at most two such powers, we fail but that never happens — easy to check all possible inputs
  • if all powers smaller than the answer occur only once and there are at least three such powers (, , with ), we keep doubling and till we get twice; doubling two values takes two operations
  • once there's some power smaller than the answer occurring at least twice, we get and and change the to the smallest remaining power
  • once the smallest occurs at least twice: we can change or just double two values , which gives us a way to simply change all to ; we repeat while it's necessary

The only significant thing about this is the bound: in all my testing with much greater and it seems to be asymptotic. It also seems A081253 is the sequence of the only values of that tighten the bound if we keep increasing .

10 months ago, # |

Can someone look at my code? It gives a runtime error (array out of bounds) with C++17 and a wrong answer on test 5 with C++20, though it works on my local system (using C++17).

C++17: https://codeforces.com/contest/1637/submission/146466246
C++20: https://codeforces.com/contest/1637/submission/146466291

Edit: There was a problem with the 2d vector; I changed it to a normal 2d array initialised to 0, and now it works.

  • 10 months ago, # ^ |

    In your code:

    dp[i][j] = (dp[i - 1][j - a[i]] || dp[i - 1][j - b[i]]);

    here j - a[i] or j - b[i] might be negative.

    It should give you an RTE whether you use an array or a vector.

    Just put an if statement to avoid this.

    • 10 months ago, # ^ |

      I have started j from min(a(i), b(i))+1, so that won't be a problem. But thank you for going through the code. I have solved the question here.

      • 10 months ago, # ^ |

        "I have started j from min(a(i), b(i))+1"

        Yeah, I noticed that, but it's still wrong.

        To make sure, run this test and print the values of j - a[i] and j - b[i].

        It's weird how you got an AC :)

        • 10 months ago, # ^ |

          Yes, this case works with my AC solution. Is the answer 20402?

10 months ago, # |

I learnt new things in problem B.

10 months ago, # |

Can someone completely work out the array cost formula used in D, step by step?

Thanks
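
For reference, the step-by-step expansion that several comments above rely on (s_A, s_B are the final array sums, S = s_A + s_B):

    \sum_{i<j} (a_i a_j + b_i b_j)
      = \frac{s_A^2 - \sum_i a_i^2}{2} + \frac{s_B^2 - \sum_i b_i^2}{2}
      = \frac{s_A^2 + s_B^2 - \sum_i (a_i^2 + b_i^2)}{2}.

Since a swap at index i keeps both a_i^2 + b_i^2 and S fixed, minimizing the cost is the same as minimizing s_A^2 + (S - s_A)^2 over achievable s_A.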

10 months ago, # |

In problem E, if we fix x and iterate over cnt_y >= cnt_x rather than cnt_y <= cnt_x, the time complexity should seemingly be the same, but making this change gives a TLE on test 79. Mangooste, can you please explain this?

Here are the submission links: cnt_y >= cnt_x and cnt_y <= cnt_x.

  • 10 months ago, # ^ |

    If we fix x and iterate over cnt_y <= cnt_x, then it works in O(n) total, because you need at most cnt_x iterations for a fixed x, and these sum to n over all x. But if we iterate over cnt_y >= cnt_x, then you can need up to O(sqrt(n)) iterations for every x, which is O(n sqrt(n)) in total.

    • 10 months ago, # ^ |

      Isn't the use of the condition cnt_y <= cnt_x just to stop repetition of the same pair and hence reduce the time complexity?

      For example, if cntSet = {1, 3, 7}, then by using this condition we don't need to check both {1,3} and {3,1}; we only need to check one of them.

      So the only difference between cnt_y <= cnt_x and cnt_y >= cnt_x would be that the first one checks {3,1} and the latter checks {1,3}. The time complexity in both cases must be the same, because in both cases no pair is checked twice (except the case of equal elements). Can you please give me a counter-example where checking the 2nd condition costs more steps if cntArr is already sorted?

      Thanks in advance.

      • 10 months ago, # ^ |

        If we have a multiset of all counts like {1, 1, 1, 2}, then if we fix each x and iterate over all cnt_y >= cnt_x, we will consider the pair (1, 1) 3 times and (1, 2) 3 times. But if we iterate over all cnt_y <= cnt_x, then we will consider (1, 1) 3 times and (1, 2) only one time. Hope you'll get it ;)

        • 10 months ago, # ^ |

          First of all, the set of counts cannot have repeated values: for each distinct value of cnt we take only the topmost element, except for the m bad pairs, which we need to check again. Secondly, if we take cntArr = {1, 2, 2, 2}, then the case is totally opposite: if we look for cnt_y <= cnt_x we consider (1,2) 3 times, while with cnt_y >= cnt_x, (2,1) is considered once.

          • 10 months ago, # ^ |

            The main problem of the solution that iterates over all cnt_y >= cnt_x is this: if almost all possible counts from 1 up to about sqrt(n) occur, and there are many other values which occur only once, then this solution will consider all possible counts for every element that occurs once. If there are, for example, O(n) such elements, then it will work in O(n sqrt(n)), while the other one will work much faster.

            • 10 months ago, # ^ |

              Okay got it now. thanks

8 months ago, # |

The time complexity of the solution for F should be O(n log n) due to sorting.

7 months ago, # |

Can you tell me why the time complexity of problem E is not O(n^2 * log m)? For example, a testcase like this:

1
5 9
1 2 3 4 5
1 3
1 4
1 5
2 3
2 4
2 5
3 4
3 5
4 5

Every pair is bad except {1,2} / {2,1}. I ran the code of the tutorial and counted the number of steps; it returned 23 (5^2 - 2). I think it runs like this: for i from 5 down to 3 there is no break, because every pair containing them is bad; for i from 2 down to 1, just 2 pairs are not bad. Where did I go wrong?

7 months ago, # |

Thanks for the great round and complete editorial :)

4 months ago, # |

For D, by Jensen's inequality on x^2, s_A^2 + (S - s_A)^2 is minimized when s_A and S - s_A are as close together as possible. So instead of calculating the cost for every achievable sum in the dp, we can instead iterate through all achievable sums, find the one closest to S/2 (where S is the total sum of both arrays), and calculate the cost only for it. It's probably not faster at all (both seem to take 15ms in C++), but it feels a bit smarter.

Edit: of course, Jensen's is not the only way to arrive at the conclusion: expanding and using properties of quadratics works too.
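
Concretely, completing the square gives the same conclusion as Jensen's:

    s_A^2 + (S - s_A)^2 = 2\Big(s_A - \tfrac{S}{2}\Big)^2 + \tfrac{S^2}{2},

so among the achievable sums, the best s_A is the one closest to S/2.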

2 months ago, # |

Can anyone explain problem C clearly? I cannot understand why we just iterate from 2 to n - 1 and get the result.

117 minutes ago, # |

Petr's accepted solution for E (Best Pair) using pair hashing is now giving TLE on test case 78/79.
I'm curious whether there is any better approach to hashing pairs without getting hacked.

