
A Segment Coverage Problem

 1 year ago
source link: https://codeforces.com/blog/entry/115348

There are $$$n$$$ points on a line; the position of the $$$i$$$-th point is $$$p[i]$$$. There are $$$q$$$ queries: in the $$$i$$$-th query you are given a segment length $$$D[i]$$$ and must report the minimum number of segments of that length needed to cover all the points.

$$$ 1 \le n \le 10^5 $$$, $$$ 1 \le p[i] \le 10^9 $$$, $$$ 1 \le q \le 10^5 $$$

23 hours ago, # |

I have a weird heuristic solution, but I don't know how to hack it or prove it.

So the first observation is that if $$$q = 1$$$, there's a greedy approach: sort the $$$p$$$ array, and for every segment put its left endpoint on the leftmost point that is not yet covered.
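
For reference, here is a minimal sketch of that single-query greedy as a plain linear sweep over the sorted points (the function name is my own, not from the post):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy for one query: sweep the sorted points from left to right and
// open a new segment of length d whenever the current point is uncovered.
long long coverOnce(vector<long long> p, long long d) {
    sort(p.begin(), p.end());
    long long cnt = 0;
    long long reach = LLONG_MIN;  // right end of the most recent segment
    for (long long x : p) {
        if (x > reach) {          // x is not covered yet: start a segment at x
            cnt++;
            reach = x + d;
        }
    }
    return cnt;
}
```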

Let's start with $$$D[i] = 1$$$. The answer is clearly $$$n$$$ (assuming no two points are within distance $$$1$$$ of each other).

Then increase $$$D[i]$$$. If we want the answer to change, we just need to increase $$$D[i]$$$ to the smallest difference among all the current points.

Then do a brute-force reconstruction from each point whose difference equals that minimum. If the reconstructed point overlaps with the original point, break and reconstruct from the next point whose difference equals the minimum.

I guess it's $$$O(n \log^2 n + q \log n)$$$.

  • 22 hours ago, # ^ |

    If you want to maintain optimality of the construction, then whenever $$$D[i]$$$ changes, all segments after the first changed point must be modified too, right? How are you keeping track of all this?

    • 22 hours ago, # ^ |

      Each time, the length only increases to the minimum difference value, so nothing would have changed before the first modified point.

      The segment lengths may change, but since none of the lengthened segments reaches the next point, the answer does not change according to the greedy above, so there is nothing to maintain.

      • 21 hour(s) ago, # ^ |

        But since the segment length is changing, the starting points of the segments further along obviously change as well, so how do you maintain them?

18 hours ago, # |

I’m not sure if this fits, but…

try to use this
  • 18 hours ago, # ^ |

    Still no idea. It's from an old problem I didn't solve, and I've already forgotten the source.

    • 16 hours ago, # ^ |

      Here was my idea:

      1. Sort the queries and the points increasingly by $$$D[i]$$$ and $$$p[i]$$$ respectively.
      2. For each query, if $$$D[i]$$$ is less than $$$\sqrt{10^9}$$$, then solve it in $$$O(n)$$$ by the simple greedy.
      3. For all other queries, keep a list of at most $$$\sqrt{10^9}$$$ pointers tracking the leftmost endpoints of the ranges. As you go from one query to the next, advance each of the pointers to the right to transition to the new solution. Each pointer only goes to the right, therefore it advances at most $$$n$$$ times.

      Total complexity is $$$O((n + q) \sqrt{10^9})$$$ which might or might not fit in the TL, depending on how tight it is.
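
      A minimal sketch of this decomposition, assuming an input format of $$$n$$$, the points, $$$q$$$, then the lengths (the threshold constant, variable names, and I/O handling are my own choices, not from the comment):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> p(n);
    for (auto &x : p) scanf("%lld", &x);
    sort(p.begin(), p.end());

    int q;
    scanf("%d", &q);
    vector<long long> D(q);
    vector<int> order(q);
    for (int i = 0; i < q; i++) { scanf("%lld", &D[i]); order[i] = i; }
    // Handle queries in increasing order of D so that the pointers below
    // only ever move to the right.
    sort(order.begin(), order.end(), [&](int a, int b) { return D[a] < D[b]; });

    const long long B = 31623;              // roughly sqrt(1e9)
    long long span = p.back() - p.front();
    int K = (int)(span / B) + 2;            // enough segments once D >= B
    vector<int> ptr(K, 0);                  // ptr[k] = index of the first point of segment k

    vector<long long> ans(q);
    for (int qi : order) {
        long long d = D[qi];
        if (d < B) {
            // Small lengths: plain O(n) greedy sweep.
            long long cnt = 0, reach = LLONG_MIN;
            for (int i = 0; i < n; i++)
                if (p[i] > reach) { cnt++; reach = p[i] + d; }
            ans[qi] = cnt;
        } else {
            // Large lengths: at most K segments, so walk the pointer list,
            // advancing each pointer past everything the previous segment covers.
            long long cnt = 1;              // segment 0 always starts at p[0]
            for (int k = 1; k < K; k++) {
                if (ptr[k] < ptr[k - 1]) ptr[k] = ptr[k - 1];  // catch up (moves right)
                while (ptr[k] < n && p[ptr[k]] <= p[ptr[k - 1]] + d) ptr[k]++;
                if (ptr[k] >= n) break;     // everything is covered
                cnt++;                      // segment k starts at p[ptr[k]]
            }
            ans[qi] = cnt;
        }
    }
    for (int i = 0; i < q; i++) printf("%lld\n", ans[i]);
    return 0;
}
```

      The catch-up step ptr[k] = ptr[k-1] only ever moves a pointer to the right, so across all of the large queries each pointer advances at most $$$n$$$ times, which is where the bound above comes from.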

18 hours ago, # |

If $$$q = 1$$$, you can just sort the $$$p$$$ array and greedily put the leftmost point of each segment on the leftmost point you haven't covered yet. The complexity is $$$O(ans * \log n)$$$; the implementation is very simple, you just spam upper_bound.
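
A minimal sketch of that $$$O(ans * \log n)$$$ greedy with upper_bound, assuming $$$p$$$ is already sorted (the function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// One query in O(ans * log n): place a segment at the first uncovered
// point and binary search (upper_bound) for the next uncovered point.
long long coverQuery(const vector<long long>& p, long long d) {
    long long cnt = 0;
    size_t i = 0;
    while (i < p.size()) {
        cnt++;  // this segment covers [p[i], p[i] + d]
        i = upper_bound(p.begin() + i, p.end(), p[i] + d) - p.begin();
    }
    return cnt;
}
```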

Now, if $$$1 \leq q \leq 10^{5}$$$, we won't be answering queries straight away. We'll call $$$T[i]$$$, $$$1 \leq i \leq n$$$, the minimum $$$l$$$ such that all points can be covered with no more than $$$i$$$ segments of length $$$l$$$. Each $$$T[i]$$$ can be found by a simple binary search, and then each query can be answered in $$$O(\log n)$$$ using the array $$$T$$$.

The complexity of this algorithm is $$$O(n * (\log n)^{2} * \log (max (D)) + q * \log n)$$$, because for each $$$i$$$ it takes $$$O(i * \log n * \log (max (D)))$$$ to find $$$T[i]$$$, so the total number of operations is $$$(1 + 2 + 3 + ... + n) * \log n * \log (max (D)) = n * (\log n)^{2} * \log (max (D))$$$. To answer a query, you just binary search on $$$T$$$, so that part costs $$$q * \log n$$$.

Note that it is important that you set the lower bound of the binary search for each $$$1 \leq i < n$$$ to be $$$T[i + 1]$$$, as the solution depends on the fact that the complexity to check a given $$$l$$$ is $$$O(ans * \log n)$$$, and we don't want $$$ans$$$ to wander above $$$i$$$.

The complexity is really big, but I think you can apply some heuristic tricks to make the $$$\log$$$ constant much smaller so it barely fits within the constraints. Edit: sorry, had a brainfart, my solution was wrong

  • 18 hours ago, # ^ |

    But $$$(1 + 2 + 3 + ... + n) = \Theta(n^2)$$$?

    • 11 hours ago, # ^ |

      Oh my bad just had a brain fart :((

15 hours ago, # |

Here's an algorithm with $$$O(n\sqrt{q}\log n)$$$ running time, which is unlikely to pass but is at least faster than brute force.

We process the queries in increasing length order. Say the current length is $$$\ell$$$. Maintain a dynamic forest, where the parent of a point $$$p$$$ is its successor when executing the greedy algorithm (i.e., the first point after $$$p$$$ that has distance $$$>\ell$$$). Using this dynamic forest, we can simulate the greedy in $$$O(\log n)$$$ time.

The issue is that this dynamic forest will change a lot. To handle this, we only update the parent of a node $$$p$$$ if there are at most $$$b$$$ points between $$$p$$$ and its parent, where $$$b$$$ is a parameter to be set later. Otherwise we say $$$p$$$ is "corrupted" and do not explicitly maintain its parent. In this way, the parent of a node will only change $$$O(b)$$$ times. This part takes $$$O(nb\log n)$$$ time overall.

On the other hand, there's a greedy algorithm that runs in $$$O(\mathrm{ans}\log n)$$$ time (repeatedly find the successor). Following this idea, we repeatedly find the next corrupted node $$$q$$$ that the greedy algorithm will encounter, which takes $$$O(\log n)$$$ time using the dynamic forest, and then find the parent of $$$q$$$ by a 1D successor search in $$$O(\log n)$$$ time. There will be only $$$O(\frac{n}{b})$$$ corrupted nodes in the optimal solution, so each query takes $$$O(\frac{n}{b}\log n)$$$ time. Balancing the terms with $$$b=\sqrt{q}$$$, the total running time is $$$O(n\sqrt{q}\log n)$$$.

