
Codeforces round #717 editorial

source link: https://codeforces.com/blog/entry/89846

1516A - Tit for Tat

The general approach to minimizing an array lexicographically is to make the first element as small as possible, then the second element, and so on. So, greedily, in each operation we'll pick the first non-zero element, subtract $1$ from it, and add $1$ to the very last element. You can make the implementation faster by doing as many operations as possible on the first non-zero element simultaneously, but it's not necessary.
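For concreteness, here is a minimal sketch of this greedy (my own code, not the solution linked below; it assumes the usual input format: the number of test cases, then $n$ and $k$, then the array):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        long long k;
        scanf("%d %lld", &n, &k);
        vector<long long> a(n);
        for (auto &x : a) scanf("%lld", &x);
        // Greedily move as much as possible from the earliest non-zero
        // elements to the last element, using at most k operations in total.
        for (int i = 0; i + 1 < n && k > 0; i++) {
            long long take = min(a[i], k); // do `take` operations on index i at once
            a[i] -= take;
            a[n - 1] += take;
            k -= take;
        }
        for (int i = 0; i < n; i++) printf("%lld ", a[i]);
        printf("\n");
    }
}
```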

Code link: https://pastebin.com/pBsychs2

1516B - AGAGA XOOORRR

So let's try to understand what the final array looks like in terms of the initial array. The best way to see this is to look at the process backwards. Basically, start with the final array, and keep replacing an element with the elements that xor-ed down to it, until you get the initial array. You'll see that the first element turns into a prefix, the second element turns into a subarray that follows this prefix, and so on. Hence, the whole process of moving from the initial to the final array is like we divide the array into pieces and then replace each piece with its xor, and we want these xors to be equal. A nice observation is: we need at most $3$ pieces. That's because if we have $4$ or more pieces, we can take $3$ pieces and merge them into one. Its xor will be the same (since $x \oplus x \oplus x = x$), but the total piece count will decrease by $2$. Now, checking if you can divide the array into $2$ or $3$ pieces with equal xors is a simple task that can be done by bruteforce. You can iterate over the positions where you'll split the array, and then check that the xors are equal using a prefix-xor array or any other method you prefer.

Additional idea: for $2$ pieces, you don't even need bruteforce. It's sufficient to check that the xor of the whole array is $0$. Hint to see this: write the bruteforce.
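Here is a sketch of the whole check (my own code; it combines the $2$-piece shortcut with the $O(n^2)$ bruteforce for $3$ pieces, and assumes the usual multi-test input format):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n);
        for (auto &x : a) scanf("%d", &x);
        // prefix xors: pre[i] = a[0] ^ ... ^ a[i-1]
        vector<int> pre(n + 1, 0);
        for (int i = 0; i < n; i++) pre[i + 1] = pre[i] ^ a[i];
        // 2 pieces are possible iff the xor of the whole array is 0
        bool ok = (pre[n] == 0);
        // bruteforce over the two split points for 3 pieces
        for (int i = 1; i < n && !ok; i++)
            for (int j = i + 1; j < n && !ok; j++) {
                int x = pre[i], y = pre[j] ^ pre[i], z = pre[n] ^ pre[j];
                ok = (x == y && y == z);
            }
        puts(ok ? "YES" : "NO");
    }
}
```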

Code link: https://pastebin.com/tnLpW23C

Bonus task: can you find a faster solution? What if I tell you at least $k$ elements have to remain instead of $2$?

1516C - Baby Ehab Partitions Again

First of all, let's check if the array is already good. This can be done with a knapsack dp. If it is, the answer is $0$. If it isn't, I claim you can always remove one element to make it good, and here's how to find it:

Since the array can be partitioned, its sum is even. So if we remove an odd element, the sum will become odd, and there will be no way to partition it. If there's no odd element, then all elements are even. But then, you can divide all the elements by $2$ without changing the answer. Why? Because a partitioning of the new array after dividing everything by $2$ is a partitioning of the original array and vice versa. We just re-scaled everything. So, while all the elements are even, you can keep dividing by $2$, until one of the elements becomes odd. Remove it and you're done. If you want the solution in one sentence: remove the element with the smallest least significant bit.
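A minimal sketch of this solution (my own code, not the linked one; it assumes the sums are small enough for a plain boolean knapsack, and prints the $1$-based index of the removed element):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(n);
    int sum = 0;
    for (auto &x : a) { scanf("%d", &x); sum += x; }
    // Knapsack: can some subset reach sum / 2?
    bool partitionable = false;
    if (sum % 2 == 0) {
        vector<bool> dp(sum / 2 + 1, false);
        dp[0] = true;
        for (int x : a)
            for (int s = sum / 2; s >= x; s--)
                if (dp[s - x]) dp[s] = true;
        partitionable = dp[sum / 2];
    }
    if (!partitionable) {
        // already good, remove nothing
        printf("0\n");
        return 0;
    }
    // Otherwise, remove the element with the smallest least significant bit.
    int best = 0;
    for (int i = 1; i < n; i++)
        if (__builtin_ctz(a[i]) < __builtin_ctz(a[best])) best = i;
    printf("1\n%d\n", best + 1);
}
```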

Alternatively, for a very similar reasoning, you can start by dividing the whole array by its gcd and remove any odd element (which must exist because the gcd is now $1$), but I think this doesn't give as much insight ;)

Code link: https://pastebin.com/aiknVwkZ

1516D - Cut

Let's understand what "product=LCM" means. Let's look at any prime $p$. Then, the product operation adds up its exponents across all the numbers, while the LCM operation takes the maximum exponent. Hence, the only way they're equal is if every prime divides at most one number in the range. Another way to think about it is that every pair of numbers is coprime. Now, we have the following greedy algorithm: suppose we start at index $l$; we'll keep extending our first subrange while the condition (every pair of numbers is coprime) is satisfied. We clearly don't gain anything by stopping while we can still extend, since every new element only comes with new restrictions. Once we're unable to extend our subrange, we'll start a new one, and so on until we reach index $r$. Now, for every index $i$, let's define $nxt_i$ to be the first index that will make the condition break when we add it to a subrange starting at $i$. Then, our algorithm is equivalent to starting with an index $l$, then replacing $l$ with $nxt_l$ until we exceed index $r$. The number of steps it takes is our answer. We now have $2$ subproblems to solve:

calculating $nxt_i$

To calculate $nxt$, let's iterate over $i$ from the end to the beginning. While at index $i$, let's iterate over the prime divisors of $a_i$. Then, for each prime, let's get the next element this prime divides. We can store that in an array that we update as we go. If we take the minimum across these occurrences, we'll get the next number that isn't coprime to $a_i$. Let's set $nxt_i$ to that number. However, what if two other elements, neither of which is $a_i$, are the ones that aren't coprime? A clever way to get around this is to take the minimum of $nxt_i$ and $nxt_{i+1}$, since $nxt_{i+1}$ covers all the elements coming after $i$.
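A sketch of this part (my own code; the value bound `MAXA` and the smallest-prime-factor sieve are assumptions about the constraints, and the queries are handled in the next sketch):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXA = 100001; // assumed upper bound on a_i
int spf[MAXA];           // smallest prime factor, for fast factorization

int main() {
    // sieve of smallest prime factors
    for (int i = 2; i < MAXA; i++)
        if (!spf[i])
            for (int j = i; j < MAXA; j += i)
                if (!spf[j]) spf[j] = i;

    int n, q;
    scanf("%d %d", &n, &q);
    vector<int> a(n + 1);
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);

    vector<int> nxt(n + 2, n + 1);    // nxt[i] = first index that breaks a subrange starting at i
    vector<int> nextOcc(MAXA, n + 1); // nextOcc[p] = next index after i whose value p divides
    for (int i = n; i >= 1; i--) {
        int x = a[i];
        while (x > 1) {
            int p = spf[x];
            nxt[i] = min(nxt[i], nextOcc[p]);
            nextOcc[p] = i;
            while (x % p == 0) x /= p;
        }
        nxt[i] = min(nxt[i], nxt[i + 1]); // elements after i must also stay pairwise coprime
    }
    for (int i = 1; i <= n; i++) printf("%d ", nxt[i]); // the q queries are answered via binary lifting
    printf("\n");
}
```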

counting the number of steps quickly

This is a pretty standard problem solvable with the binary lifting technique. The idea is to perform many jumps at a time, instead of $1$. Let's calculate $nxt_{i,j}$: the index we'll end up at if we keep replacing $i$ with $nxt_i$ $2^j$ times. Clearly, $nxt_{i,j}=nxt_{nxt_{i,j-1},j-1}$, since $2^j=2^{j-1}+2^{j-1}$. Now, to calculate how many steps it takes from index $l$ to index $r$, let's iterate over $j$ from $\log(n)$ down to $0$. If $nxt_{l,j}$ is less than or equal to $r$, we can jump $2^j$ steps at once, so we'll make $l$ equal to $nxt_{l,j}$ and add $2^j$ to the answer. At the end, we'll make one more jump.
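A standalone sketch of the lifting (my own code). The `nxt` array below is what the previous sketch computes for the toy array $\{2, 3, 10, 7, 5, 14\}$, and the query $[1,6]$ needs $3$ subranges:

```cpp
#include <bits/stdc++.h>
using namespace std;

// up[j][i] = index reached after 2^j jumps from i (indices 1..n, with n+1 meaning "past the end")
struct Jumper {
    int n, LOG;
    vector<vector<int>> up;
    Jumper(const vector<int>& nxt, int n) : n(n) {
        LOG = 1;
        while ((1 << LOG) < n + 1) LOG++;
        up.assign(LOG + 1, vector<int>(n + 2, n + 1));
        for (int i = 1; i <= n + 1; i++) up[0][i] = nxt[i];
        for (int j = 1; j <= LOG; j++)
            for (int i = 1; i <= n + 1; i++)
                up[j][i] = up[j - 1][up[j - 1][i]];
    }
    // minimum number of subranges covering [l, r]
    long long steps(int l, int r) {
        long long ans = 0;
        for (int j = LOG; j >= 0; j--)
            if (up[j][l] <= r) { // 2^j jumps still don't pass r, so take them all at once
                l = up[j][l];
                ans += 1LL << j;
            }
        return ans + 1;          // one final subrange covers the rest up to r
    }
};

int main() {
    // nxt for a = {2, 3, 10, 7, 5, 14}, 1-based, with the sentinel nxt[7] = 7
    vector<int> nxt = {0, 3, 5, 5, 6, 7, 7, 7};
    Jumper jumper(nxt, 6);
    printf("%lld\n", jumper.steps(1, 6)); // prints 3: subranges [1,2], [3,4], [5,6]
}
```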

Code link: https://pastebin.com/Ng314Xc8

1516E - Baby Ehab Plays with Permutations

Let's think about the problem backwards. Let's try to count the number of permutations which need exactly $k$ swaps to be sorted. To do this, I first need to refresh your mind on (or maybe introduce you to) a greedy algorithm that does the minimum number of swaps to sort a permutation. Look at the last mismatch in the permutation, and let it be at index $i$ (so $p_i \neq i$). We'll look at where the value $i$ is in the permutation, and swap index $i$ with that index, so that $i$ ends up in the correct position. Basically, we look at the last mismatch and correct it immediately. We can build a slow $dp$ on this greedy algorithm: let $dp[n][k]$ denote the number of permutations of length $n$ which need $k$ swaps to be sorted. If element $n$ is in position $n$, we can just ignore it and focus on the first $n-1$ elements, so that moves us to $dp[n-1][k]$. If it isn't, then we'll swap the element at position $n$ with wherever $n$ is, so that $n$ ends up in the right position, by the greedy algorithm. There are $n-1$ positions the value $n$ can be at, and after the swap, you can have an arbitrary permutation of length $n-1$ that needs $k-1$ swaps to be sorted; that gives us $(n-1) \cdot dp[n-1][k-1]$ ways. Combining, we get that $dp[n][k]=dp[n-1][k]+(n-1) \cdot dp[n-1][k-1]$.
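A small sketch of the recurrence itself (my own code; the modulus $10^9+7$ and the bounds are illustrative assumptions):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7; // assumed modulus

int main() {
    const int N = 10, K = 5; // illustrative bounds
    vector<vector<long long>> dp(N + 1, vector<long long>(K + 1, 0));
    for (int n = 0; n <= N; n++) dp[n][0] = 1; // only the sorted permutation needs 0 swaps
    for (int n = 1; n <= N; n++)
        for (int k = 1; k <= K; k++)
            dp[n][k] = (dp[n - 1][k] + (long long)(n - 1) * dp[n - 1][k - 1]) % MOD;
    // e.g. dp[3][1] = 3: the permutations (2,1,3), (1,3,2) and (3,2,1)
    printf("%lld\n", dp[3][1]);
}
```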

Next, notice that you don't have to do the minimum number of swaps in the original problem. You can swap $2$ indices and then swap them back. Also, it's well-known that you can either get to a permutation with an even number of swaps or an odd number, but never both (see this problem.) So now, after you calculate your $dp$, the number of permutations you can get to after $k$ swaps is $dp[n][k]+dp[n][k-2]+dp[n][k-4]+\dots$. Now, let's solve for $dp[n][k]$.

Sane solution

Notice that after $k$ swaps, at most $2k$ indices can move from their place, which is pretty small. That gives you a pretty intuitive idea: let's fix a length $m$ and then a subset of size $m$ that will move around. The number of ways to pick this subset is $\binom{n}{m}$, and the number of ways to permute it so that we need $k$ swaps is $dp[m][k]$. So we should just multiply them together and sum up, right? Wrong. The problem is that double counting will happen. For example, look at the sorted permutation: when $k=0$, you'd count it once for every single subset, but you should only count it once in total. A really nice solution is: force every element in your subset to move from its position. How does this solve the double counting? Suppose two different subsets give you the same permutation; then, there must be an index in one and not in the other. But how can they give you the same permutation if that index moves in one and doesn't move in the other?

So to mend our solution, we need to create $D[m][k]$, denoting the number of permutations of length $m$ which need $k$ swaps to be sorted and in which every single element moves from its position (there are no fixed points.) How do we calculate it? One way is to do inclusion-exclusion on the $dp$ we already have! Suppose I start with all $dp[m][k]$ permutations which need $k$ swaps. Then, I fix one index, and I try to subtract the number of permutations which need $k$ swaps to be sorted after that index is fixed. There are $\binom{m}{1}$ ways to choose the index, and $dp[m-1][k]$ such permutations, so we subtract $\binom{m}{1} \cdot dp[m-1][k]$. But now permutations with $2$ fixed points are excluded twice, so we'll include them back, and so on and so forth. In general, we'll fix $j$ indices in the permutation. There are $\binom{m}{j}$ ways to pick them, and then there are $dp[m-j][k]$ ways to pick the rest so that we need $k$ swaps. Hence:

$$D[m][k]=\sum_{j=0}^{m}(-1)^j\binom{m}{j}\,dp[m-j][k]$$

The answer for exactly $k$ swaps is then $\sum_{m}\binom{n}{m}\,D[m][k]$, where $m$ only goes up to $2k$. Phew!
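Putting the sane solution together, here is a sketch (my own code and notation; I assume the modulus is $10^9+7$ and that, for every $i$ from $1$ to $k$, we output the number of permutations reachable with exactly $i$ swaps, using the parity observation from above):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7; // assumed modulus

long long power(long long b, long long e) {
    long long r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    long long n; int k;
    scanf("%lld %d", &n, &k);
    int M = 2 * k; // at most 2k elements can move

    // dp[m][j] = permutations of length m needing j swaps
    vector<vector<long long>> dp(M + 1, vector<long long>(k + 1, 0));
    for (int m = 0; m <= M; m++) dp[m][0] = 1;
    for (int m = 1; m <= M; m++)
        for (int j = 1; j <= k; j++)
            dp[m][j] = (dp[m - 1][j] + (long long)(m - 1) * dp[m - 1][j - 1]) % MOD;

    // small binomials C(m, f), and C(n, m) for huge n as n(n-1)...(n-m+1) / m!
    vector<vector<long long>> C(M + 1, vector<long long>(M + 1, 0));
    for (int i = 0; i <= M; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++) C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % MOD;
    }
    vector<long long> Cn(M + 1, 0);
    Cn[0] = 1;
    for (int m = 1; m <= M; m++)
        Cn[m] = (m <= n) ? Cn[m - 1] * ((n - m + 1) % MOD) % MOD * power(m, MOD - 2) % MOD : 0;

    // exact[j] = number of permutations of length n needing exactly j swaps
    vector<long long> exact(k + 1, 0);
    exact[0] = 1;
    for (int j = 1; j <= k; j++)
        for (int m = 0; m <= M; m++) {
            // D = permutations of length m needing j swaps with no fixed point (inclusion-exclusion)
            long long D = 0;
            for (int f = 0; f <= m; f++) {
                long long term = C[m][f] * dp[m - f][j] % MOD;
                D = (D + (f % 2 ? MOD - term : term)) % MOD;
            }
            exact[j] = (exact[j] + Cn[m] * D) % MOD;
        }

    // by the parity argument, the answer for i swaps is exact[i] + exact[i-2] + ...
    for (int i = 1; i <= k; i++) {
        long long ans = 0;
        for (int j = i; j >= 0; j -= 2) ans = (ans + exact[j]) % MOD;
        printf("%lld ", ans);
    }
    printf("\n");
}
```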

If you have no idea what the hell I was talking about in the inclusion-exclusion part, try this problem first.

Code link: https://pastebin.com/3CzuGvtw

Crazy solution

Let $[l,r]$ denote the set of the integers between $l$ and $r$ (inclusive.)

Let's try to calculate $dp[2n][k]$ from $dp[n][k]$. To do that, we need to understand our $dp$ a bit further. Recall that $dp[n][k]=dp[n-1][k]+(n-1) \cdot dp[n-1][k-1]$. Let's think about what happens as you go down the recurrence. When you're at index $i$, either you skip it and move to $i-1$, or you multiply by $i-1$ and move to $i-1$. But you do the latter exactly $k$ times, since $k$ decreases every time you do it. So, this basically iterates over every subset of $[1,n-1]$ of size $k$, takes its product, and sums up!
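A quick brute-force sanity check of this claim (my own code; it compares the recurrence against the sum of products of all size-$k$ subsets of $\{1,\dots,n-1\}$ for a small $n$):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const int N = 7, K = 3;
    // dp via the recurrence
    long long dp[N + 1][K + 1] = {};
    for (int n = 0; n <= N; n++) dp[n][0] = 1;
    for (int n = 1; n <= N; n++)
        for (int k = 1; k <= K; k++)
            dp[n][k] = dp[n - 1][k] + (n - 1) * dp[n - 1][k - 1];
    // e[k] = sum of products of all size-k subsets of {1, ..., N-1}, by brute force
    long long e[K + 1] = {};
    for (int mask = 0; mask < (1 << (N - 1)); mask++) {
        int bits = __builtin_popcount(mask);
        if (bits > K) continue;
        long long prod = 1;
        for (int i = 0; i < N - 1; i++)
            if (mask >> i & 1) prod *= i + 1;
        e[bits] += prod;
    }
    for (int k = 0; k <= K; k++)
        printf("dp[%d][%d] = %lld, subset sum = %lld\n", N, k, dp[N][k], e[k]);
}
```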

Now, let's use this new understanding to try and calculate $dp[2n][k]$ from $dp[n][k]$. Suppose I pick a subset of $[1,2n-1]$. Then, a part of it will be in $[1,n-1]$ and a part will be in $[n,2n-1]$. I'll call the first part small and the second part big. So, to account for every subset of size $k$, take every subset of size $j$ of the big elements, multiply it by a subset of size $k-j$ of the small elements, and sum up. This is just normal polynomial multiplication!

Let $big[n][k]$ denote the sum of the products of the size-$k$ subsets of the big elements. That is:

$$big[n][k]=\sum_{S \subseteq [n,\,2n-1],\ |S|=k}\ \prod_{x \in S} x$$

Then, the polynomial multiplication of $dp[n]$ and $big[n]$ gives $dp[2n]$! How do we calculate $big$ though?

Notice that every big element is a small element plus $n$ (it's convenient here to treat the small elements as $[0,n-1]$; including $0$ changes nothing, since any subset containing $0$ has product $0$.) So we can instead pick a subset of the small elements and add $n$ to each element in it. This transforms the formula to:

$$big[n][k]=\sum_{S \subseteq [0,n-1],\ |S|=k}\ \prod_{x \in S}(x+n)$$

Let's expand this summand. What will every term in the expansion look like? Well, it will be the product of a size-$j$ subset of our size-$k$ subset $S$, multiplied by $n^{k-j}$. Now, let's think about this backwards. Instead of picking a subset of size $k$ and then picking a subset of size $j$ from it, let's pick the subset of size $j$ first, and then count the number of ways to expand it into a subset of size $k$. Well, there are $n-j$ elements left, and you should pick $k-j$ of them, so there are $\binom{n-j}{k-j}$ ways to expand. That gives us:

$$big[n][k]=\sum_{j=0}^{k} n^{k-j}\binom{n-j}{k-j}\left(\sum_{T \subseteq [0,n-1],\ |T|=j}\ \prod_{x \in T}x\right)$$

But the interior sum is just $dp[n][j]$! Hurray! So we can finally calculate $big$ to be:

$$big[n][k]=\sum_{j=0}^{k} n^{k-j}\binom{n-j}{k-j}\,dp[n][j]$$

And then polynomial multiplication of $big[n]$ with $dp[n]$ itself gives $dp[2n]$! Since you can move from $dp[n]$ to $dp[n+1]$ and from $dp[n]$ to $dp[2n]$, you can reach any $n$ you want in $O(\log n)$ iterations using its binary representation.
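Here is a sketch of the whole doubling solution (my own code; the modulus $10^9+7$ is an assumption, and the output format matches the sane solution's sketch above):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7; // assumed modulus
typedef vector<long long> Poly;

long long power(long long b, long long e) {
    long long r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// C(n, r) for huge n and small r, as n(n-1)...(n-r+1) / r!
long long binom(long long n, int r) {
    if (r < 0 || n < r) return 0;
    long long num = 1, den = 1;
    for (int i = 0; i < r; i++) {
        num = num * ((n - i) % MOD) % MOD;
        den = den * (i + 1) % MOD;
    }
    return num * power(den, MOD - 2) % MOD;
}

// multiply two polynomials, truncating at degree K
Poly multiply(const Poly& a, const Poly& b, int K) {
    Poly c(K + 1, 0);
    for (int i = 0; i <= K; i++)
        for (int j = 0; i + j <= K; j++)
            c[i + j] = (c[i + j] + a[i] * b[j]) % MOD;
    return c;
}

int main() {
    long long n; int K;
    scanf("%lld %d", &n, &K); // assumes n >= 1

    Poly dp(K + 1, 0);
    dp[0] = 1;      // dp for length 1: only the identity permutation, needing 0 swaps
    long long len = 1;
    int high = 63 - __builtin_clzll(n);
    for (int b = high - 1; b >= 0; b--) {
        // doubling: big[k] = sum_j len^(k-j) * C(len-j, k-j) * dp[j], then dp(2*len) = dp * big
        Poly big(K + 1, 0);
        for (int k = 0; k <= K; k++)
            for (int j = 0; j <= k; j++)
                big[k] = (big[k] + power(len, k - j) * binom(len - j, k - j) % MOD * dp[j]) % MOD;
        dp = multiply(dp, big, K);
        len *= 2;
        if (n >> b & 1) {
            // increment: dp(len+1)[k] = dp(len)[k] + len * dp(len)[k-1]
            for (int k = K; k >= 1; k--)
                dp[k] = (dp[k] + len % MOD * dp[k - 1]) % MOD;
            len += 1;
        }
    }
    // combine by parity, exactly as before
    for (int i = 1; i <= K; i++) {
        long long ans = 0;
        for (int j = i; j >= 0; j -= 2) ans = (ans + dp[j]) % MOD;
        printf("%lld ", ans);
    }
    printf("\n");
}
```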

Code link: https://pastebin.com/yWgs3Ji6

Bonus task-ish: the above solutions can be made to work for larger $k$ with a bit of ugly implementation, but I don't know how to solve the problem when $k$ is much larger than that. Can anyone do it? The sane solution seems far off, and I don't know if it's possible to do the convolution from $dp[n]$ to $dp[2n]$ quickly in the crazy one.

