Codeforces Round 915 (Div. 2) Editorial
source link: https://codeforces.com/blog/entry/123384
A — Constructive Problems
Author: valeriu
B — Begginer's Zelda
Author: valeriu
C — Largest Subsequence
Author: tibinyte
D — Cyclic MEX
Author: tibinyte
E — One-X
Author: tibinyte
F — Field Should Not Be Empty
Author: tibinyte
why is the name of problem C "Adam lying face" when it is "Largest Subsequence"? or am i stupid. upd: fixed, thanks |
Problem names of A, B and C are wrong. upd: Thanks for fixing it :) |
-
In problem C my method has O(n) complexity and yet it is giving TLE on the last test (9). Please look into this: 237829405
4 days ago, # | Problem C did not mention whether the string had to be sorted in ascending or descending order. I got a wrong answer on test case 2 (21st test, dcbbaa) as I considered sorting in descending order too. Please look into this issue. |
4 days ago, # | gray round, that says it all... |
4 days ago, # | Can someone please explain C. Didn't understand this. |
3 days ago, # | In E you can also notice that for a subtree of fixed size the sum of LCAs over all the vertices in this subtree is some linear function of $v$: $f(v) = kv + b$, where $v$ is the root of the subtree. Then it is easy to solve the problem for arbitrary $v$, thus getting a linear function. Since there are at most $O(\log^2 n)$ different subtrees you can easily solve the problem using recursion with memoization. This is my implementation: 237520087 |
-
Used the same idea, except the only candidates for subtree sizes are $\lceil n / 2^k \rceil$ and $\lfloor n / 2^k \rfloor$.
So this actually gives $O(\log n)$ per test case (assuming you merge in constant time).
-
What I've written was $O(\log^2 n)$, and you can't simplify it the way you did. Maybe I should've written it differently in the first place, but nevertheless.
-
oh, I see. but isn't the number of different subtrees actually $O(\log n)$?
-
According to ToniB that is true, but I don't have the proof yet, so I'm not too keen on believing this. Actually, I might've found counterexamples, but I'm not too sure.
UPD. Tried testing different values of $n$, and found that it is, indeed, true, and my counterexamples were wrong.
-
Actually, if $O(\log n)$ really is the bound, then my solution also checks at most $O(\log n)$ different subtrees per test case.
However, I don't really know how to formally prove that this is the case. Can you share your proof?
-
For any $x$ you can see $\lfloor x/2 \rfloor \le \lceil x/2 \rceil \le \lfloor x/2 \rfloor + 1$.
Now you prove by induction. Suppose $s_k \in \{\lfloor n/2^k \rfloor, \lceil n/2^k \rceil\}$, where $s_k$ is a subtree size on the $k$-th layer from the top. Using the inequality above, you get
$\lfloor n/2^{k+1} \rfloor = \lfloor \lfloor n/2^k \rfloor / 2 \rfloor \le \lfloor s_k/2 \rfloor \le \lceil s_k/2 \rceil \le \lceil \lceil n/2^k \rceil / 2 \rceil = \lceil n/2^{k+1} \rceil,$
which simplifies to $\lfloor s_k/2 \rfloor, \lceil s_k/2 \rceil \in \{\lfloor n/2^{k+1} \rfloor, \lceil n/2^{k+1} \rceil\}$, since both are integers squeezed between two values that differ by at most one.
Since these are the only possible sizes of subtrees in the next layer, the claim holds for the next step as well. The base case is trivial.
UPD: or just what ibrm said, looks simpler.
-
-
Can you explain more clearly how to construct this formula: $f(v) = kv + b$? Thank you!
-
Well, firstly, suppose we have some subtree of size $n$, and by induction for trees of size smaller than $n$ we already know their linear functions. Then what is the answer for this subtree (the LCA is some vertex of this subtree)? $\mathrm{LCA}(S) = v$ only for subsets in which there is at least 1 leaf from the left son, at least 1 leaf from the right son, and there are no leaves outside this subtree (or the LCA would be some ancestor of $v$). For a tree of size $n$ the left son has size $\lceil n/2 \rceil$ and the right son has size $\lfloor n/2 \rfloor$. Then there are $(2^{\lceil n/2 \rceil} - 1)(2^{\lfloor n/2 \rfloor} - 1)$ subsets for which the LCA is $v$.
Now we need to calculate the answer for the case when the LCA is equal to some vertex in the left or the right son. Because by induction we already know the formula of the answer for the left and right son, we have $f_L(u) = k_1 u + b_1$ and $f_R(u) = k_2 u + b_2$. Then the answer for the left son is $f_L(2v)$, and $f_R(2v+1)$ for the right son. A composition of linear functions is also a linear function (given $v$, we have $f_L(2v) = 2k_1 v + b_1$ and $f_R(2v+1) = 2k_2 v + k_2 + b_2$).
Combining everything together, the answer for the subtree is $f(v) = \bigl((2^{\lceil n/2 \rceil} - 1)(2^{\lfloor n/2 \rfloor} - 1) + 2k_1 + 2k_2\bigr)\,v + b_1 + b_2 + k_2$.
UPD: the base of this induction is the linear function for a subtree of size 1: $f(v) = v$.
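To make the recursion concrete, here is a minimal self-contained sketch (my own code, not the commenter's submission; `solve` and `one_x` are invented names). `solve(s)` returns the pair $(k, b)$ of $f_s(v) = kv + b$, memoized on the subtree size:

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353;

long long pw(long long b, long long e) {          // modular exponentiation
    long long r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// f_s(v) = k*v + c: sum of LCA ids over all non-empty leaf subsets of a
// size-s subtree whose root has id v. Children of id v are 2v and 2v+1.
pair<long long, long long> solve(long long s,
                                 map<long long, pair<long long, long long>>& memo) {
    if (s == 1) return {1, 0};                    // single leaf: f(v) = v
    if (auto it = memo.find(s); it != memo.end()) return it->second;
    long long a = (s + 1) / 2, b = s / 2;         // sizes of left/right sons
    auto [ka, ca] = solve(a, memo);
    auto [kb, cb] = solve(b, memo);
    // root term: >= 1 leaf from each son; then add f_a(2v) and f_b(2v + 1)
    long long k = ((pw(2, a) - 1) * (pw(2, b) - 1) + 2 * ka + 2 * kb) % MOD;
    long long c = (ca + cb + kb) % MOD;
    return memo[s] = {k, c};
}

long long one_x(long long n) {                    // answer = f_n(1) = k + c
    map<long long, pair<long long, long long>> memo;
    auto [k, c] = solve(n, memo);
    return (k + c) % MOD;
}
```

Quick sanity checks against hand counts: for $n = 2$ the seven-subset enumeration gives $2 + 3 + 1 = 6$, and for $n = 3$ it gives $17$.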
-
3 days ago, # | can someone help me check which test case my code is failing on for problem C :( 237528783 |
3 days ago, # | The author's solution to F is too long; it seems it could have been written more simply. https://codeforces.com/contest/1905/submission/237540164 |
3 days ago, # | Problem B is a graph problem :p, though not related to graph algorithms :) |
We can rather think of B this way: no matter what path we choose, we reduce the total number of leaf nodes by at most 2, and it is obvious that we should only choose paths between 2 leaf nodes. So, if initially the total number of leaf nodes is x, after 1 operation it will be x-2. How many times do you subtract 2 from x? ceil(x/2) times. You can use this site to visualize trees and confirm it yourself: https://csacademy.com/app/graph_editor/ |
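As a quick sketch of this argument (my own illustration, not the editorial's code; `min_operations` and the edge-list interface are made up):

```cpp
#include <bits/stdc++.h>
using namespace std;

// A path removes at most two leaves, and pairing the leaves achieves this,
// so the answer is ceil(leaves / 2). Assumes n >= 2, vertices 1..n.
int min_operations(int n, const vector<pair<int, int>>& edges) {
    vector<int> deg(n + 1, 0);
    for (auto& [u, v] : edges) { deg[u]++; deg[v]++; }
    int leaves = 0;
    for (int i = 1; i <= n; i++)
        if (deg[i] == 1) leaves++;
    return (leaves + 1) / 2;                      // ceil(leaves / 2)
}
```

For a path on 3 vertices this gives 1 operation; for a star on 5 vertices (4 leaves) it gives 2.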
3 days ago, # | can anyone tell me the intuition for D? |
-
I might have had a different intuition than the editorial, but I can try. For simplicity, let $m_i$ denote $\operatorname{mex}(\{p_1, \dots, p_i\})$, that is, the mex of the prefix of $p$ of length $i$.
Let's assume $p_n = 0$. What's the answer? $n$. Because every mex of a prefix will be equal to $0$ except for the prefix containing the entire permutation.
Now, what happens when $p_{n-1} = 0$?
If you try a few cases you will see that the answer in this case is $n + p_n$. If $p_n = k$ then the answer must be $n + k$: $m_i = 0$ for all $i < n - 1$, and because $k$ is the only number excluded from the prefix of length $n - 1$, $m_{n-1} = k$. $m_n = n$ as usual.
We can see that the mex is $0$ until we "hit" the index where $0$ sits. When that happens, the mex becomes equal to the smallest number that is still excluded. Using this information, we can build a solution where we consider the positions where $0$ can be in decreasing order. In other words we consider the cyclic shifts in this order (I assume WLOG that $p_n = 0$): first $0$ is at position $n$, then at position $n - 1$, and so on. The nice thing about doing it this way is that we know that the mex is $0$ for all indices before the index of the number $0$, as explained above. This way we can think of the operation not as a cyclic shift, but as appending numbers to the end of our list.
So our operations really look like this: $[\dots, 0] \to [\dots, 0, p_1] \to [\dots, 0, p_1, p_2] \to \dots$
When appending a new number, we need to consider how it can change our answer. It can be helpful to think about some cases.
If we ever append $0$ to the end, we know that all mexes will be $0$ except the last one, which will be $n$ (assuming our simplified view of 'appending to a list'). If we ever append a $1$ after this, we know we won't change any of the mexes before the index of $0$, but between the index of $0$ and the index of $1$ the mex will be $1$.
If you try this for some different cases on paper, it shouldn't be impossible to see that what we need to do when appending a new number, say $x$, is to find the last prefix whose mex is smaller than $x$. I.e. we must find the largest $j$ in our list so far such that $m_j < x$, because we will not affect any mexes up to $j$. All mexes between $j$ and the index of $x$ will then become equal to our new value $x$, because by construction $x$ is smaller than all mexes in this range, and $x$ itself is now missing from those prefixes. Thus, we update our answer by replacing those mexes with $x$. Our final answer will be the maximum of all of the answers from each 'shift'.
-
How do you find the last element smaller than $x$? Edit: Oh, that's why VectorViking has a monotonic stack. We read the array of mexes from left to right, and at every new value $x$, we pop all elements bigger than $x$ before pushing $x$. Thank you very much!
-
I tried implementing this solution but I don't know where my error is. I get the following verdict: 45478th numbers differ — expected: '31', found: '0' — even though at the end of my code I output something like $\max(\mathit{ans}, \ldots)$, where $\mathit{ans}$ is always $\ge 0$. Here is my code
-
Let's solve the following example: 2 3 6 7 0 1 4 5
We first place 0 at the end: 1 4 5 2 3 6 7 0
What's the cost? It's 8 (= n), because everything before 0 adds +0, and at the end there is always +n (we used all numbers from 0 to n-1).
Now we rotate the sequence to the left.
- 1st iteration: 4 5 2 3 6 7 0 1 (cost is 1 + 8)
- 2nd iteration: 5 2 3 6 7 0 1 4 (cost is 1 + 4 + 8)
- 3rd iteration: 2 3 6 7 0 1 4 5 (cost is 1 + 4 + 5 + 8)
- 4th iteration: 3 6 7 0 1 4 5 2 (cost is 1 + 3 * 2 + 8)
Why is that? Because now 2 is at the end and we can use it for places where we used 4 and 5 (they are bigger). It's enough to maintain a monotonic stack while iterating, so the solution is linear.
You can see my solution at: 237547878. Hope this helps!
-
Why is the cost of the 4th iteration not $1 + 4 + 5 + 2 + 8$?
-
Because the cost is computed as the sum of mex, for each iteration:
- 0th: 0 0 0 0 0 0 0 8
- 1st: 0 0 0 0 0 0 1 8
- 2nd: 0 0 0 0 0 1 4 8
- 3rd: 0 0 0 0 1 4 5 8
- 4th: 0 0 0 1 2 2 2 8
As you can see, when we "rotated" 2 from the start of the sequence to the end, it became the new smallest missing non-negative integer (mex) of those prefixes instead of 4 and 5. The third 2 comes from the rotation itself.
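The pattern in these rows is exactly what the compressed-deque (monotonic) solution maintains. Below is a sketch of that idea (my own code, not anyone's submission; `cyclic_mex_max` is an invented name): the nondecreasing prefix-mex sequence is stored as (value, run-length) pairs; each rotation drops the front value, collapses every value bigger than the moved element x down to x, and appends n for the full prefix.

```cpp
#include <bits/stdc++.h>
using namespace std;

long long cyclic_mex_max(const vector<int>& p) {
    int n = p.size();
    // prefix mexes of the initial arrangement
    vector<int> m(n);
    vector<char> seen(n + 1, 0);
    for (int i = 0, mex = 0; i < n; i++) {
        seen[p[i]] = 1;
        while (seen[mex]) mex++;
        m[i] = mex;
    }
    // compressed deque of (mex value, run length), plus the current sum
    deque<pair<long long, long long>> d;
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        sum += m[i];
        if (!d.empty() && d.back().first == m[i]) d.back().second++;
        else d.emplace_back(m[i], 1);
    }
    long long best = sum;
    for (int i = 0; i + 1 < n; i++) {             // rotate p[i] to the back
        int x = p[i];
        sum -= d.front().first;                   // drop the length-1 prefix
        if (--d.front().second == 0) d.pop_front();
        long long cnt = 0;                        // prefixes losing x: mex -> x
        while (!d.empty() && d.back().first > x) {
            sum -= d.back().first * d.back().second;
            cnt += d.back().second;
            d.pop_back();
        }
        if (cnt) { d.emplace_back(x, cnt); sum += (long long)x * cnt; }
        d.emplace_back(n, 1);                     // full prefix has mex n again
        sum += n;
        best = max(best, sum);
    }
    return best;
}
```

On the example above, 2 3 6 7 0 1 4 5, the maximum is reached by the rotation starting with 0 (prefix mexes 1 2 2 2 3 6 7 8) and equals 31.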
-
3 days ago, # | Can someone please explain D. I'm not able to understand the tutorial. Thanks! |
3 days ago, # | In the problem B solution, I think there should be ceil(K/2) instead of ceil((K+1)/2). |
3 days ago, # | unbalanced round |
Can anyone tell me why I am getting a runtime error on this submission (C)? 237514134 |
-
Typecast d.size()-1 to (int)d.size()-1; now you are getting wrong answer instead.
Here is a proof that the worst-case time of the D solution is O(n), using the accounting method (the second method of amortized analysis). The operations of the algorithm for problem D are: incrementing the frequency of an element, and merging (removing) previously added elements.
So, the naive upper bound on the algorithm is O(n^2). It is correct, but not tight. Note two important things: 1) we have only n operations; 2) we cannot remove the frequency of any element unless it was incremented before. Can't we just make the cost of an increment 2 instead of 1 — one for the increment itself and one as a credit to pay for the future remove operation? Then we can take the cost of a merge operation to be 0 and the cost of an increment to be 2; hence the total cost is at most 2n, which is O(n). |
Can someone please see why my code for problem C fails on the 1043rd number of test case 2? 237554366 |
3 days ago, # | this contest was great |
3 days ago, # | in C I don't know what is |
3 days ago, # | Here is my code for problem C; I can't understand why it doesn't work: 237576156 |
3 days ago, # | i think problem F is easier than D for someone like me :( |
3 days ago, # | can anyone explain the editorial solution of E |
3 days ago, # | The problem D. The second cycle may be from 1 to n . |
3 days ago, # | "Thus, we can easily check if the string is sortable." How? That's literally the point of an editorial. Plus, for the number of operations, you're not explaining why, which kind of defeats the whole purpose of an editorial. |
3 days ago, # | Is there a simpler O(n) solution for F ? |
-
https://codeforces.com/contest/1905/submission/237719024 — this solution is O(N). I've used a hashmap to store the pairs of swaps, so that should be O(N) too.
3 days ago, # | Here comes a solution for problem E. We still focus on the fact that for each depth there are at most 2 different interval lengths, and let's assume they are $l$ and $l+1$, which become $\lfloor l/2 \rfloor, \lceil l/2 \rceil$ and $\lfloor (l+1)/2 \rfloor, \lceil (l+1)/2 \rceil$ a layer deeper. For the case of a single length we do it in a similar way. We can count the number and the sum of ids of both kinds of intervals, e.g. there are $c_1$ intervals of length $l$, and the sum of their ids is $s_1$; the same for $c_2$ and $s_2$ with intervals of length $l+1$. Then, a segment-tree with an $l$-length interval root has exactly $l$ leaves, so a node of length $l \ge 2$ and id $v$ is the LCA of exactly $(2^{\lceil l/2 \rceil} - 1)(2^{\lfloor l/2 \rfloor} - 1)$ leaf subsets (at least one leaf from each child; length-1 nodes are leaves and simply contribute their own ids). At last, the contribution of this depth is $s_1 (2^{\lceil l/2 \rceil} - 1)(2^{\lfloor l/2 \rfloor} - 1) + s_2 (2^{\lceil (l+1)/2 \rceil} - 1)(2^{\lfloor (l+1)/2 \rfloor} - 1)$. The time complexity is $O(\log^2 n)$ per test case. You can see my submission for more details. |
2 days ago, # | I solved D with a treap. |
-
can you please explain how you did it?
-
Just replace your segment tree with Treap)))
-
Thank you! I think it was a nice idea. It's much easier than the author's solution!
-
2 days ago, # | That's enough of the internet for today |
Can someone please help me figure out the error for C in my submission 237655544? It fails on the 2124th test case, test 3. |
2 days ago, # | NO hints. DOWNVOTED for every problem and this blog itself. |
Can anyone tell me the intuition for E? I'm not able to understand the editorial.
42 hours ago, # | Will using a set instead of a deque in D exceed the time limit? I used a set to maintain which numbers present in the current prefix mex array are greater than $p_1$, and with a time complexity of $O(n \log n)$ my code is giving TLE on submission. |
I solved F without a segment tree. Here is my solution:
1) There are only 2 * n possible good swaps, same as in the normal solution: n swaps of type (i, p_i) and n swaps of the other type, which can actually make some other numbers good.
2) We calculate the answer for the first type in O(n) time by precomputing, for every i, a boolean array telling whether all elements before the i-th index are smaller than i. So when we swap a[i] and a[a[i]] we "only" need to check whether they become good now (for p_i = i we just check whether it is good in our boolean array, because if it is, after the swap it will be added to the answer; but we also need to check whether p_i = p_{p_i} — e.g. in the situation 1 2 3 5 4, after swapping 5 and 4 we also put 5 in a good spot — and similarly check whether that increases the answer). Yes, after doing this operation some more indexes might become good, but that is checked in the second case :)
3) Second case: which indexes can become good after swapping some 2 elements? Only those that have 1 inverted pair on their left and on their right. We can find, for every element, whether it can become good, and those 2 positions, in O(n log n). After doing that, we sort those pairs of indexes, and for every pair (i, j) that makes x indexes good we add x to the answer without swaps. Note: in this part we also check whether (i, j) also puts i or j in its good position, doing the same checks as in the previous part. Solution: 237847430 |
Very late to this, but I found another solution for problem F that avoids segment trees or sets: 238072983 If we think of the permutation as a function $p$, then an index $i$ is good if and only if $p_i = i$ and no arc jumps across it (where having the arc $(j, p_j)$ jump over $i$ means $p_j < i < j$ or $j < i < p_j$). Then the only helpful swaps are those that split the bounding range of a cycle into two cycles with disjoint bounding ranges (all other swaps merge two cycles into a bigger cycle, which we do not want). The set of new good indices that will result from splitting the cycle is the set of indices $i$ such that they are between the two new resulting bounding ranges, they satisfy $p_i = i$, and they have no more arcs jumping over them after doing the swap (i.e. they had exactly 2 arcs jumping over them before doing the swap). We can create an array where each index stores the number of arcs jumping over it, by simulating many range-addition updates (this can be done efficiently by first constructing the array of partial differences, then calculating prefix sums). Then we create another array that keeps track of which indices currently satisfy $p_i = i$ and also have exactly 2 arcs jumping over them, which we perform range sum queries on (and which can also be computed efficiently by using prefix sums). Then iterate over each cycle, over each adjacent pair of the cycle's elements in left-to-right order. The only special case is when $p_i = i$ for all $i$, in which case the answer to the original problem is computed separately. |
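The arc-count array mentioned above needs nothing more than a difference array. A small sketch of just that piece (my own illustration; `arcs_over` is a made-up name, indices are 1-based with `p[0]` unused):

```cpp
#include <bits/stdc++.h>
using namespace std;

// cnt[i] = number of arcs (j, p_j) with min(j, p_j) < i < max(j, p_j):
// each arc does a range-add of +1 on its open interior, realized with a
// difference array followed by a prefix sum.
vector<int> arcs_over(const vector<int>& p) {
    int n = (int)p.size() - 1;                    // permutation in p[1..n]
    vector<int> diff(n + 2, 0), cnt(n + 1, 0);
    for (int j = 1; j <= n; j++) {
        int lo = min(j, p[j]), hi = max(j, p[j]);
        if (lo + 1 <= hi - 1) { diff[lo + 1]++; diff[hi]--; }
    }
    for (int i = 1; i <= n; i++) cnt[i] = cnt[i - 1] + diff[i];
    return cnt;
}
```

For p = (3, 2, 1) only index 2 is jumped over, by both the arc (1, 3) and the arc (3, 1), so the counts are 0, 2, 0.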
7 hours ago, # | Hey guys, can someone please help me out here? In problem D, why are we running the last for loop in the code n-1 times and not n times? If we run it n times, we just end up back at the original configuration, so the answer should not change. Just curious to know why. |