Hi guys! I was wondering if this problem: https://www.codechef.com/problems/BITMASK4 can be solved using DFS within the time constraints. Any help will be appreciated. Thank you!
How do you want to use DFS here? I see one way: build a graph on the lines by connecting those that are parallel, then search for the largest connected component, but that has quadratic time complexity since you have to check every pair.
I see 2 correct solutions here, and neither uses DFS or graphs in any way.
What you wrote is exactly what I had in mind; I wanted to know if there was a way to optimise the time complexity. Also, would you mind briefly explaining the 2 solutions you see that don't use DFS?
Welp, I'm sorry: I've realised the second one wasn't a solution at all, so I'll describe the easier one.
Two lines (A1, B1, C1) and (A2, B2, C2) are parallel if and only if A1/B1 == A2/B2 (or B1 == 0 && B2 == 0), so we can build a map from A/B ratios to the number of lines with that ratio (plus a separate counter for lines with B == 0), go over the lines, and increment the corresponding count each time. Then just take the maximum. If you are worried about the ratios being non-integer, you can have your map treat two numbers as equal when they differ by less than some EPS (around 1e-7, I suppose).
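A sketch of the counting idea above. To sidestep the EPS comparison entirely, this variant keys the map on the direction pair (A, B) reduced by gcd and sign-canonicalized instead of the floating-point ratio A/B, which also handles B == 0 without a separate counter; `maxParallel` is a name I made up for illustration:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Largest set of mutually parallel lines A*x + B*y = C.
// Exact-arithmetic variant of the ratio-map idea: instead of comparing
// A/B ratios with an EPS, reduce each direction (A, B) by gcd and
// canonicalize its sign, so parallel lines get identical map keys.
int maxParallel(const vector<array<long long, 3>>& lines) {
    map<pair<long long, long long>, int> cnt;
    int best = 0;
    for (const auto& ln : lines) {
        long long a = ln[0], b = ln[1];           // C is irrelevant for parallelism
        long long g = __gcd(llabs(a), llabs(b));  // g > 0: a valid line has A or B nonzero
        a /= g; b /= g;
        // Merge (a, b) with (-a, -b): both describe the same direction.
        if (a < 0 || (a == 0 && b < 0)) { a = -a; b = -b; }
        best = max(best, ++cnt[{a, b}]);
    }
    return best;
}
```

For example, lines (1, 2, 3), (2, 4, -1), (3, 6, 0) all reduce to the key (1, 2), so they count as one group of three.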
If you use an ordered map, your algorithm's time complexity is going to be O(TN * log(N)), but it won't exceed the time limit anyway if you use C++ or similar.
If you use an unordered map, the logarithm disappears from the complexity. But if you use C++, always remember that std::map on basic types is often faster in practice than std::unordered_map for sizes up to around 1e6 because of hash-function overhead, so I recommend the ordered one.
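If you do go the unordered route, note that std::unordered_map has no built-in hash for std::pair, so you need to supply one. A minimal sketch, assuming the (A, B) direction pairs have already been reduced to a canonical form; `PairHash` and `maxParallelUnordered` are illustrative names, and the hash combiner is a common ad-hoc choice, not a tuned one:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Custom hash for pair keys, since std::unordered_map lacks a default one.
struct PairHash {
    size_t operator()(const pair<long long, long long>& p) const {
        size_t h = hash<long long>()(p.first);
        return h * 1000003ULL ^ hash<long long>()(p.second);
    }
};

// Same counting as the ordered-map version, but in expected O(N) per test:
// assumes each pair is an already-reduced, sign-canonicalized direction.
int maxParallelUnordered(const vector<pair<long long, long long>>& dirs) {
    unordered_map<pair<long long, long long>, int, PairHash> cnt;
    int best = 0;
    for (const auto& d : dirs) best = max(best, ++cnt[d]);
    return best;
}
```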
Perfect. Thanks a lot!