Editorial for Hello 2024
source link: https://codeforces.com/blog/entry/124220
1919A - Wallet Exchange
Author: maomao90
1919B - Plus-Minus Split
Author: maomao90
1919C - Grouping Increases
Author: maomao90
1919D - 01 Tree
Author: maomao90
1919E - Counting Prefixes
Author: maomao90
1919F1 - Wine Factory (Easy Version)
Author: maomao90
1919F2 - Wine Factory (Hard Version)
Author: maomao90
1919G - Tree LGM
Author: maomao90
1919H - Tree Diameter
Author: maomao90
Full solution: dario2994
12 hours ago, # | Thanks for fast editorial |
12 hours ago, # | Thank you for the contest! |
12 hours ago, # | C was tough |
-
Yeah definitely, I think it should be rated around 1700. My idea was to find the longest non-increasing subsequence, remove it from the array, and then calculate the cost of the remaining portion of the array; that should be the answer. But I could not implement it.
-
IMO, C was somewhat tricky to implement but the concept isn't that hard. I think D was pretty tricky, conceptually.
-
Longest non-increasing subsequence can fail sometimes, for example 4 3 2 1 8 4 2 1.
If your code picks 4 3 2 2 1, the penalty is 1, but the optimal answer is 0 (4 3 2 1 and 8 4 2 1).
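For small arrays like this one, a brute force over all 2^n ways to split the array into two subsequences makes such claims easy to check. This is an illustrative sketch, not code from the editorial:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force for problem C on small arrays: try all 2^n assignments of each
// element to one of the two subsequences and count the penalty directly.
int bruteMinPenalty(const vector<int>& a) {
    int n = a.size(), best = INT_MAX;
    for (int mask = 0; mask < (1 << n); mask++) {
        long long lastB = LLONG_MAX, lastC = LLONG_MAX; // empty tails
        int pen = 0;
        for (int i = 0; i < n; i++) {
            long long& last = (mask >> i & 1) ? lastB : lastC;
            if (a[i] > last) pen++; // strictly larger than the previous element
            last = a[i];
        }
        best = min(best, pen);
    }
    return best;
}
```

Running it on 4 3 2 1 8 4 2 1 confirms the optimal penalty is 0, while the LIS-removal idea above gives 1.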
-
I first implemented it this way, using the longest subsequence, but it got WA. Then I implemented the greedy, which was accepted.
-
Can you please share your approach?
-
Basically: every item must go into either s or t, and it doesn't matter which one, so keep the last item inserted into each of them in two variables t and s, and do if (s < t) swap(s, t), so that t holds the minimum (which variable doesn't matter; we just want the minimum).
You can agree that the optimal move is for the current element of the array to go after the one with the minimum: if it is at most the minimum, that is the best case; if it is larger than the minimum but at most the other one, you put it after that one instead, which is still free since it is smaller.
Otherwise, put it after whichever of the two you like (it doesn't matter, since you will swap them if the order is wrong); in either case the cost increases by one.
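The greedy described in this reply can be sketched as follows. This is a hedged sketch; the tie-breaking follows the editorial's rule of overwriting the smaller tail when a penalty is unavoidable:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy for problem C: keep the last elements (tails) of the two
// subsequences; x is always the smaller tail after the swap.
long long minPenaltyGreedy(const vector<long long>& a) {
    long long x = LLONG_MAX, y = LLONG_MAX; // tails of two empty subsequences
    long long penalty = 0;
    for (long long v : a) {
        if (x > y) swap(x, y);      // maintain x <= y
        if (v <= x) x = v;          // fits after the smaller tail: free
        else if (v <= y) y = v;     // fits only after the bigger tail: free
        else { x = v; penalty++; }  // bigger than both: pay 1, replace smaller
    }
    return penalty;
}
```

On 4 3 2 1 8 4 2 1 this produces penalty 0, matching the counterexample discussed above.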
-
-
I wrote DP code for part B and it ran out of memory, then I came up with a simpler approach. Figured the same would happen for part C, so I didn't even try using DP and ran out of time trying to think of a better solution. Oops.
Very good competition! 2024 will be a good year, it seems to me because the competition was cool! Thanks maomao90 for the competition. |
12 hours ago, # | thank you for this fast and organized editorial. |
12 hours ago, # | FastEditorialForces! |
12 hours ago, # | russian??? |
12 hours ago, # | F1 statement: "There are n wizard's..." Me: "Well.... okay, there are n Jesuses..." P.s. really cool problems and thank you for super fast editorial! |
12 hours ago, # | proofByAcForces |
12 hours ago, # | I solved F1 with sqrt decomposition. Why no mention about it in the editorial? |
-
For the same reason the segtree solution to C and the DP solution to B weren't mentioned.
-
A DP solution to B will probably exceed the time limit. Could you tell me your approach for it? Maybe I got it wrong.
12 hours ago, # | Did anyone solve problem C with a Cartesian tree? I tried but couldn't get accepted. My idea was to build the max Cartesian tree and count the total number of left children, minus one. I took the tree code from GeeksforGeeks. Note also that if two elements are equal, the one on the right becomes the child. Here's my approach: 240585200. |
12 hours ago, # | Do better. B and C have six-page-long proofs. |
-
Can you provide a proof of why the greedy approach they gave works in the third case, where x < ai < y? I did not understand the following part:
This is because if we consider making the same choices for the remaining elements ai+1 to an in both scenarios, there will be at most one time where the former scenario will add one penalty more than the latter scenario as the former scenario has a smaller last element after inserting ai . After that happens, the back of the arrays in both scenarios will become the same and hence, the former case will never be less optimal.
12 hours ago, # | Does anyone have any material on ReLU segment trees? I solved F1 (and will try F2 as well) using a slightly different segment tree from the first solution, and I don't know whether it is just the second solution in disguise (although I don't think it is). Thanks in advance |
-
My F1 solution uses a ReLU segment tree. I wasn't able to adjust it to solve F2 in time though. I thought the problem was really terrible at first because it's an ugly mathy solution, but after seeing that the intended solution was optimized flow instead, I like the problem much more.
12 hours ago, # | Great contest & fast editorial. I'm glad that 2024 has had such a perfect start. Thank you! |
12 hours ago, # | Thank you for the contest! Best wish for 2024. |
12 hours ago, # | This contest is a perfect example of how to set and prepare rounds. Well done to the author and coordinator! The tasks were pleasurable to solve, balancing math and algorithms. I thought that every problem was on point in terms of difficulty, quality, and their respective subjects (not every task was "constructive math"). Overall, the round seemed thoroughly tested and well-prepared. |
12 hours ago, # | I think this contest could've benefited from weaker samples on A and B. They're very proof-by-AC able. Other than that, best contest I've ever had the joy of participating in. |
12 hours ago, # | I was not sure about C, so tried other approaches and when they all failed, then tried the above greedy approach and to my surprise it worked. Greedy is hard to prove! |
12 hours ago, # | Proof for D? |
12 hours ago, # | Can someone tell me what's wrong with my solution for F1? |
-
Take a look at Ticket 17195 from CF Stress for a counter example.
12 hours ago, # | so fast tutorial |
12 hours ago, # | F1 can also be solved by first considering a D&C solution (maintain the sum of A, the sum of B, and the sum of the answer for each node; when merging (L, R), match L.A with R.B as much as possible), and then putting this DP into a segment tree. |
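One way the merge described in this comment might look. The field names and exact semantics are my guess at the commenter's D&C state, not taken from the editorial:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Each segment stores: unmatched water `a`, unused wizard capacity `b`,
// and the amount already matched inside the segment (`ans`).
struct Node {
    long long a;   // water in this segment not yet processed
    long long b;   // processing capacity in this segment not yet used
    long long ans; // water already processed inside this segment
};

// Merging two adjacent segments: leftover water from the left half can be
// matched against spare capacity on the right half.
Node combine(const Node& L, const Node& R) {
    long long matched = min(L.a, R.b);
    return {L.a + R.a - matched, L.b + R.b - matched, L.ans + R.ans + matched};
}
```

Since `combine` is associative, it can sit at the internal nodes of a segment tree, and each point update then costs O(log n).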
12 hours ago, # | greedy on D was quite unexpected |
12 hours ago, # | C and D is really hard. |
12 hours ago, # | nice contest |
12 hours ago, # | The editorial proof of the greedy algorithm's correctness in problem C is obviously not complete. It is not clear why the optimal splitting of a prefix coincides with the restriction of the optimal splitting of the whole array to this prefix, while this claim is implicitly used. Does anybody understand a complete proof? |
-
I'm not sure what you mean by that. Why is this proof not complete? My understanding is that because it's a subsequence, and every element must be inserted into either array b or c, it is a complete proof.
-
The question is why, if you have the optimal splitting of a prefix into two subsequences, the optimal splitting of the next prefix can be obtained by inserting the new element at the back of one of them. In general, the optimal splitting of the longer prefix may have nothing to do with them.
The proof is written in a way that makes it look like this fact is used, but maybe I am just misunderstanding something.
-
I think the idea is that the split doesn't really matter. The only thing that matters is the final number in each array. So suppose that X and Y are the final two numbers in array A and B respectively, then regardless of any of the prior decisions for splitting the numbers, based solely on the values of X and Y we can determine whether a number should go into array A or B. You might argue that X and Y could have been chosen to be greater which might lead to a more optimal result, but the algorithm maximizes the value of X and Y in the first two cases, and in the third case the editorial lays out an argument for why it is at least just as good putting it in the other array.
Does this sound right?
-
-
-
If you find out, let me know; this same issue is the reason I didn't give this greedy approach a try.
-
I believe the following was implied.
Assume that you have any splitting of a prefix into two subsequences which can be extended to a splitting of the whole array with some penalty. Then the splitting constructed greedily can also be extended to some other splitting of the whole array with the same penalty or less.
This works, and in order to prove it you need to construct that extension, which is more or less done in the editorial.
-
-
I agree with you. The proof is not a standard greedy proof. It only says: when the splitting of the first k elements is fixed, we can obtain the optimal solution (optimal under this condition, not globally optimal) for the splitting of the first k+1 elements. Edit: Actually there is a mistake; when the first k elements are fixed, then out of all optimal splittings of the remaining n-k elements, it is never worse to choose the splitting of the (k+1)-th element in the editorial's way, so it is always chosen that way.
12 hours ago, # | fast editorial, thx |
12 hours ago, # | Hey, can anyone share their intuition for problem D? I didn't understand the idea from the editorial, and I would appreciate it if anyone could share their stack-based idea |
12 hours ago, # | thank you , but I am sb |
12 hours ago, # | HELLO 2024. THANK YOU |
For problem C, I used the greedy approach explained in this editorial, but in reverse: I iterated from N-1 to 0 and tried to add the element x to the array with the larger front element, unless x is smaller than that but larger than the smaller front element of the two arrays. I implemented this but it gives WA; here is my submission: 240582496. Is it a solution error or an implementation error? UPD: My bad, it was >= and not only > :( |
12 hours ago, # | Again ,C was too much for me |
11 hours ago, # | For problem C, I tried to separate the longest decreasing subsequence (by indices) from all elements, then counted the positions with a[i] < a[i+1] among the remaining elements, but it couldn't pass. Can anyone explain why? |
-
-
Thanks, can you tell me: if I start with a wrong approach in a contest, how can I get away from it, like in this problem?
-
One of the options would be to stress-test your solution (see: stress-testing on Linux, stress-testing on Windows).
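A minimal stress-testing loop of the kind suggested above could look like this. `fastSolve` and `bruteSolve` are placeholders for your two implementations, not names from this thread:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Compare a fast solution against a brute force on small random inputs and
// return the first input on which they disagree, if any.
template <class F, class B>
optional<vector<int>> stressTest(F fastSolve, B bruteSolve, int iters = 1000) {
    mt19937 rng(12345);                        // fixed seed: reproducible runs
    for (int it = 0; it < iters; it++) {
        int n = rng() % 6 + 1;                 // tiny inputs expose bugs fastest
        vector<int> a(n);
        for (int& v : a) v = rng() % 10;
        if (fastSolve(a) != bruteSolve(a)) return a; // counterexample found
    }
    return nullopt;                            // no mismatch in `iters` tries
}
```

When a counterexample is found, shrink it by hand and trace both solutions on it.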
-
-
I had a very different solution to D, I think. Note that there must be exactly one leaf node with value . Call this node and consider the path from the root to . Then, for each node along this path, let be its subtree on the side not including (the other edge). Then, if we subtract from the distance for each node , this must also form a valid dfs ordering of distances. This gives the following recursive definition: a height- tree has DFS ordering where , are concatenations of height- trees' DFS orderings. For example, consider the first sample . Then, must be a concatenation of height trees and must also be a concatenation of height trees. Checking if is a concatenation of height trees is the same as checking if is a concatenation of height trees. Recursing gives that this is indeed valid. Similarly, being a concatenation of height trees is the same as checking if is a concatenation of height trees (which it obviously is). So, we can use Divide & Conquer to check if an array corresponds to a concatenation of height trees, by (for example) using a RMQ to find all positions with value and recursing on their subranges. We also check that the root only has one . Overall, the code looks like this:
Since we look at each position as a root exactly once (utilizing the RMQ for this speedup, or some sort of binary search structure), the runtime is (coming from building the RMQ). My code: 240589892 |
11 hours ago, # | "If ai < x, insert ai at the back of the array with the smaller last element. If y < ai, insert ai at the back of the array with the smaller last element. If x < ai <= y, insert ai at the back of the array with the bigger last element." Can someone help with an example here? Which arrays are they referring to? |
11 hours ago, # | The sample of D was too weak.. |
11 hours ago, # | Finally, decent contest! Could've given more examples in D though:( |
11 hours ago, # | I think this approach works for C but I couldn't figure out how to implement it efficiently during the contest:
For example, for this approach you would get [8, 8, 9], then since no number is >= 9, you would start from the next number which is 0, and get [0, 5]. So one array is [5,0,9,9,8] (the elements we saw in this process, but reversed since we reversed the array to do this), and the other one is the remaining elements. Question: is it possible to implement something like this efficiently (even if it is not correct for all cases of C?) I tried to use a set of pairs with {value, index} and finding the lower_bound of the current pair, but couldn't figure out a sorting function to get the exact behavior I wanted. Thanks! |
-
I tried this approach but it's not working: WA on pretest 2.
I couldn't figure out what the issue is.
11 hours ago, # | For F2 I feel you can just imagine a substring of towers as some mechanism that:
Then it is just one segtree and no need for flow... My sol: 240574390 |
11 hours ago, # | Why is the TL of E 1 second? |
11 hours ago, # | Easy A,B,C problems. |
11 hours ago, # | 2024 gave me: Spoiler |
11 hours ago, # | Beautiful contest Enjoyed it Thanks to the creators, coordinators and testers for such a great contest thanks for the fast editorial too |
11 hours ago, # | Finally im green! Thanks for the great contest! |
11 hours ago, # | I had a square root decomposition solution for F1. I made blocks which store three values. |
11 hours ago, # | Loved problem C. Although I couldn't do it myself, it was a good one. The tutorial explanation was good too. |
11 hours ago, # | For problem C, regarding the 3rd scenario: if we insert a[i] at the back of the array with the smaller last element, there will be an additional penalty of 1, but we will have optimized the endpoint of the subsequence. Also, for the part:
Can someone give an example where the backs of the arrays become the same and the former case is never less optimal? |
11 hours ago, # | The "NO" tests on G were pretty weak, looks like two of the three submissions in contest didn't check enough conditions. (It's pretty easy to fix, you can just write the DP to compute the matrix corresponding to your tree and make sure it's equal to the input.) |
10 hours ago, # | thanks for the fast editorial |
10 hours ago, # | Why was I solving D as if each edge is 0 or 1, rather than one of the edges being 0 and the other 1 :'( |
My intuitive greedy solution of C: let's keep the last element of both subsequences as large as possible. So e.g. if one subsequence ends with |
10 hours ago, # | In case someone wants an easy implementation for C: https://codeforces.com/contest/1919/submission/240529072. The idea is to maintain the endpoints of two sets with minimum answer and maximum endpoints. Let the current element be A(i) and the endpoints be a and b (note: a >= b). If b >= A(i), then assign A(i) to b with no need to increase the answer. Else if a >= A(i), then assign A(i) to a with no need to increase the answer. If A(i) is greater than both a and b, then replace the smaller endpoint (i.e. b) with A(i) and increase the answer by 1. Easy to implement, though it might be slightly tricky to prove correctness |
-
If b < A(i) <= a: though assigning A(i) to b increases the penalty by 1, it maintains the maximum endpoints and might lead to a better answer in the future, right?
-
At a later time, the maximum endpoint can decrease the answer by at most 1, since as soon as we use the max endpoint it gets replaced and will no longer be used. Hence we should greedily decrease the answer wherever we can. Since b < A(i) <= a, placing A(i) after a avoids a penalty of 1, which is also the maximum saving that the endpoint a can contribute.
-
10 hours ago, # | In problem C, I used a different approach: calculating the LIS (not the LDS), then halving the LIS length to get the two splits and their respective lengths. Weirdly, it gives WA on TC 2, at around the 3600th test case. Can someone help find a counter case for this approach? Submission: 240610899 |
Intuition in C: We process from to and maintain the final element of each subsequence. When processing :
|
-
Can you explain what this means?
This is because if we consider making the same choices for the remaining elements ai+1 to an in both scenarios, there will be at most one time where the former scenario will add one penalty more than the latter scenario as the former scenario has a smaller last element after inserting ai . After that happens, the back of the arrays in both scenarios will become the same and hence, the former case will never be less optimal.
-
I think people get confused about why prioritizing minimizing the penalty first gives the optimal result.
I believe the reasoning is that, no matter which choice you make, one of the final elements will always be ai, so the two sets of final elements differ by at most one element, and in both cases the future penalties can only differ by 1.
So avoiding a penalty now and taking a penalty later is always at least as good an option.
(Problem C) I still don't understand why the optimal choice (in a greedy algorithm) at each step eventually leads to the optimal answer at the end |
9 hours ago, # | Can someone share their O(n) approach for D? |
Hello friends, I'm new to CP. Here is my solution for the 3rd problem. Was my logic correct? I (obviously) could not solve it.

```cpp
// Brute force for C: try appending a[ind] to either subsequence s or r,
// paying a penalty whenever the appended element exceeds the previous one.
void solve(int ind, vector<int> &a, int n, vector<int> &s, vector<int> &r,
           int ps, int pr, int &mini) {
    if (ind == n) {
        mini = min(ps + pr, mini);
        return;
    }
    int addS = (!s.empty() && a[ind] > s.back()) ? 1 : 0;
    s.push_back(a[ind]);
    solve(ind + 1, a, n, s, r, ps + addS, pr, mini);
    s.pop_back();
    int addR = (!r.empty() && a[ind] > r.back()) ? 1 : 0;
    r.push_back(a[ind]);
    solve(ind + 1, a, n, s, r, ps, pr + addR, mini);
    r.pop_back();
}
```
|
7 hours ago, # | What is ReLU segment tree? There seems to be no resources about it in English side of internet... (I would appreciate resources in any language though) |
6 hours ago, # | nice contest |
6 hours ago, # | Can someone please explain why the greedy approach works for problem C? I did not get the third case, where x < ai < y. What is the proof that adding ai after y to avoid a +1 penalty is always optimal? The part I did not understand: "This is because if we consider making the same choices for the remaining elements ai+1 to an in both scenarios, there will be at most one time where the former scenario will add one penalty more than the latter scenario, as the former scenario has a smaller last element after inserting ai. After that happens, the back of the arrays in both scenarios will become the same and hence, the former case will never be less optimal." |
5 hours ago, # | I had the same solution for D, but AC is quite tough in Python |
4 hours ago, # | a b:^v^ c:@-@? |
3 hours ago, # | Editorial code for D says YES to this test case: N=9, A=[2,1,1,1,1,1,1,1,0]. It seems incorrect to me? |
2 hours ago, # | My solution for E. I think it works in time. |
104 minutes ago, # | Why do all of A-D have no algo :((((((((((((((( |
91 minute(s) ago, # | Proof for D: First note that any most-distant leaf will have a parent edge weight of 1, because otherwise there would exist a more distant node under the other child (the edge-1 child) of the parent. Now we need to prove that for any adjacent array values where one of them is a maximum and the other is less by 1: if there is a binary tree solution for that array (but it does not have these leaves sharing the same parent), we can still convert it into another binary tree conforming to the same array in which the leaves do share the same parent. Using the same principle as before, the maximum leaf will be the edge-1 child. We can always cut its parent edge-1, remove the other edge-0 and merge the nodes connected by it, then go to the leaf adjacent in the array (with distance less by 1) and add two edges under that node: one of them will lead to the moved maximum node (under edge-1), and the other will lead to the less-by-1 node. |
65 minutes ago, # | Great problems D and F1! Finally became International Master. E is also excellent; it's a pity that I struggle with counting. |
42 minutes ago, # | Can anyone explain why the answer for F1 is the maximum suffix sum? I can't figure it out :< |
I have some questions about problem F1. For the conclusion "the remaining amount of water remaining at tower is the maximum suffix sum of", firstly here is my proof: proof. Then here is my question: is there a more universal understanding of the solution, or are there some ideas to take away from the problem? I didn't come up with the solution because I always think about legal situations, and the suffix may seem "contradictory" to the actual situation, which didn't even come to my mind. |