usernameson's blog

By usernameson, history, 3 years ago, In English

Introduction

I was fiddling around with the Codeforces API and came up with the following average difficulties for Div 2-only contest problems in 2021.

  • A: 826
  • B: 1122
  • C: 1544
  • D: 2000
  • E: 2373
  • F: 2736

Caveats

The data is a little messy. I ignored problems like D1, D2, etc. Also, the code can't seem to find the shared problems in Div 2 when there is a Div 1 contest occurring at the same time.
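As a sketch of the aggregation step (this is illustrative, not the exact code I ran): assume the response of the problemset.problems API method has already been filtered to 2021 Div 2-only contests and flattened to lines of the form "A 800"; that preprocessing is omitted here.

#include<bits/stdc++.h>
using namespace std;

int main(){
    //each input line is "<index> <rating>", e.g. "A 800";
    //this input format is a hypothetical preprocessing of the API response
    map<string,pair<long long,long long>> acc; //index -> (rating sum, count)
    string index;
    long long rating;
    while(cin>>index>>rating){
        //ignore subdivided problems like D1, D2, as mentioned above
        if(index.size()>1) continue;
        acc[index].first+=rating;
        acc[index].second++;
    }
    for(auto& [idx,sc]:acc){
        cout<<idx<<": "<<(double)sc.first/sc.second<<"\n";
    }
    return 0;
}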

Score: +100

By usernameson, history, 3 years ago, In English

Overview

Assume you are doing bitmask dp where each mask starts with a value, and for every mask you want to add this value to all its submasks.

Example type problem

You have an array $$$a$$$, and for each number $$$x$$$ in $$$[1,10^{6}]$$$ you want to know how many $$$a_{i}$$$ have the property $$$a_{i} \& x = x$$$.

Standard approach

You can use the standard submask iteration to do this in $$$O(3^n)$$$. For the example type problem, the initial value for each $$$x$$$ is the number of times it appears in the array, and these counts get added to all submasks.
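For concreteness, here is a minimal sketch of this standard iteration for the example type problem (input reading omitted); with 20 bits the full $$$3^{20}$$$ loop is on the slow side in practice, which is what motivates the speed up below.

#include<bits/stdc++.h>
using namespace std;

int main(){
    const int B=20; //2^20 > 10^6
    vector<long long> cnt(1<<B,0);
    //... read the array and do cnt[a_i]++ for every element ...

    //f[x] ends up as the number of a_i with (a_i & x) == x,
    //i.e. the sum of cnt over all supersets of x
    vector<long long> f=cnt; //covers the submask x itself
    for(int mask=1; mask<(1<<B); mask++){
        if(!cnt[mask]) continue; //nothing to distribute
        //enumerate all proper submasks of mask, down to 0
        for(int sub=(mask-1)&mask; ; sub=(sub-1)&mask){
            f[sub]+=cnt[mask];
            if(!sub) break;
        }
    }
    return 0;
}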

Slight speed up

You can speed this up slightly by using a lazy-prop style technique. Iterate through all masks in decreasing order. For the current mask, remove one bit to get a new mask, call it mask2, and add the value of mask to the value of mask2 (it gets pushed further down when mask2 itself is processed). Then iterate through the submasks of mask2 and, for each one, add the value of mask to that submask with the removed bit added back.

Example implementation

Code
Problem Application

Score: +8

By usernameson, history, 3 years ago, In English

I think most of the nice/pleasant colours are given to lower rated participants. Consider the following:

Grey — boring colour

Green — usually associated with positive events

Cyan — pleasant colour liked by everyone

Blue — decent colour, somewhat exciting due to association with unclicked links

Purple — disappointing colour due to association with clicked links

Yellow — annoying colour due to pretentious people calling yellow traffic lights amber

Red — usually associated with danger

Nutella — not a colour; adding it to bread yields significant improvements in taste

Score: -67

By usernameson, history, 3 years ago, In English

Is anyone able to access emax-eng? For the last couple of days I've been getting "error: server error, please try again in 30 seconds".

Score: 0

By usernameson, history, 4 years ago, In English

I was scrolling through the contest page on Codeforces and it seems that div 3 contests tend to get more registrants than div 2 contests, which seemed odd to me. Has anyone looked into this?

Score: +19

By usernameson, history, 6 years ago, In English

Let's say we have a set of planes of the form $$$z_i = a_i x + b_i y + c$$$. Each plane will have a (possibly empty) region where it attains the maximum $$$z$$$ value over all the planes in the set. I am wondering if these regions have sufficiently nice properties that they can be maintained, updated and queried efficiently.

Score: +35

By usernameson, history, 6 years ago, In English

I have noticed that educational rounds like to use mod $$$998244353$$$ instead of mod $$$10^9 + 7$$$. Does anyone know the reason for this? One idea that comes to mind is that since $$$998244353 < 10^9$$$ there might be opportunities to troll contestants by breaking division and forcing them to calculate such quantities mod $$$998244353$$$ directly. However, I have yet to see this used in an educational round, although the idea was used to good effect in 272D - Dima and Two Sequences.

Score: +11

By usernameson, history, 6 years ago, In English

It was round 504. My first submission for problem D failed the pretests. I had 45 minutes to fix it. I figured out what was wrong with my approach fairly quickly but only found an alternate approach with roughly five minutes remaining in the contest. I coded as quickly as I could and submitted with 6 seconds to go in the contest. Unfortunately this solution also failed the pretests.

The reason my solution failed was that I thought it would be cool to write

for(int i=n-1; i; i--)

instead of

for(int i=n-1; i>=0; i--)

with the eventual goal of being a guy who writes stuff like

for(int i=n-1;i--;)

Unfortunately,

for(int i=n-1; i; i--)

terminates one step earlier than

for(int i=n-1; i>=0; i--)

does, which is bad if you want to do stuff when i=0. So, in conclusion, it is probably not a good idea to experiment with new coding styles during a contest just because they look cool.

Score: -59

By usernameson, history, 7 years ago, In English

Introduction

This is the story of how I failed to qualify for round 2 of the 2018 Google Code Jam. Hopefully you find it interesting and learn some things you shouldn't do in Code Jam.

Qualification Round

Qualification was fairly easy due to the ample time of 24 hours. I managed to solve everything correctly apart from the large dataset for the last question. I tried to see if anyone had done a mathematical analysis of Madison Cube Garden from Futurama, since that seemed closely related to the last question, but to no avail.

Round 1A

The ideas to solve questions just did not come to me during this round. I only managed to solve the small set for question A.

What I learnt:

Eat a good breakfast before doing a competition in the morning.

Round 1B

I looked at question A and could not think of a correct approach. Then I looked at B and decided to work on it since it seemed simpler. I saw the equations and coded a solution. The solution failed the sample test cases; then I realised I had misunderstood what the question was asking. So I modified my code, and it also failed the sample test cases. Turns out I still misunderstood the question. After I finally understood what the question wanted I was running low on time, so I decided to code an $$$O(n^2)$$$ solution to pass the small. This worked.

After looking at the other questions I decided it was best to spend the remaining time modifying the solution so it would not TLE on the large. I did some simple micro-optimisations; then a beautiful two pointer style approach that would take my solution to $$$O(n)$$$ came to me with roughly five minutes to go. I coded quickly and the solution passed the sample cases. I submitted it with less than a minute to go; however, it failed the small test set. After the contest I managed to get my $$$O(n)$$$ solution to work. Turns out the code I submitted was correct apart from two lines where I wrote i when I should have written i - 1. Also, if I had got that solution to work I would have qualified for round 2.

What I learnt:

  1. Read the question carefully before coding.

  2. Be thankful for sites like csacademy that give you problems without stupid stories about jackasses looking at road signs.

Round 1C

Question A seemed fairly straightforward after a bit of thought. I considered using a prefix tree to be fancy and ensure a sufficiently low time complexity but my previously written prefix tree implementation was not well suited to the task. Looking at the limits I decided a binary search should be fast enough. I also suspected a two pointer style approach might exist but decided to stick with the binary search rather than look for it. Question A passed after I fixed how I handled an edge case in my implementation.

After a brief look at B and C I decided to attack the small input of C since it looked like a knapsack style dp approach should work. I was able to get C to work without too much trouble but knew there was no way my code would pass the large.

I had a look at B and came up with a greedy approach. First I got an idleness limit exceeded. I realised I had to change the template I use for Code Jam problems, since interactive problems have a different format for the expected output. Doing this allowed me to convert my idleness limit exceeded into a WA. Yay, progress! Then I spent the rest of the contest fiddling with my greedy approach to no avail. After the contest I read the editorial and learnt my initial greedy approach was correct. I looked at my code and found that when I altered my template I did not allow for multiple test cases. One simple for loop and I would have got AC. Like round 1B, if I had had this loop I would have qualified for round 2.

What I learnt:

Pay special attention to how you code interactive problems.

Conclusion

I didn't make it to round 2 of GCJ this year. There were some interesting problems and I am looking forward to seeing and upsolving the problems from future rounds. Best of luck to everyone who made it to round 2.

Tags: gcj
Score: +33

By usernameson, history, 7 years ago, In English

Introduction

I am going to describe a simple randomised approach to solving problem 723E - One-Way Reform.

Problem Overview

You have an undirected graph, and you want to turn it into a directed graph by choosing a direction for each edge. You want to choose the directions such that the number of vertices whose out-degree equals their in-degree is maximised.

The Basic Random Approach That Does Not Work

The basic random approach is to randomly assign directions to all edges (i.e. for a given edge, make it point in one direction with probability 0.5 and in the other with probability 0.5) and hope that if we do this enough times we get an optimal solution.

However, there is a simple obstruction that stops this from working. Assume our original graph is a cycle. Then we can make every vertex have equal out-degree and in-degree by creating a directed cycle. The probability of this happening with our basic random approach is roughly $$$0.5^n$$$ for a cycle with $$$n$$$ nodes. If the cycle is large, it is unlikely we will find this solution within a reasonable number of random attempts.

Modifying The Basic Approach To Work

There is a simple modification to the basic approach that makes it work. We still randomly assign directions to the edges as in the basic approach.

The modification is that after we assign the edge directions, we try to improve our solution. For a given directed edge from u to v, if the out-degree of u is greater than its in-degree and the in-degree of v is greater than its out-degree, we swap the direction of the edge to go from v to u.

We check each edge exactly once, record how good our solution is, and repeat the process with a new random assignment. It is clear that this deals with the case where our graph is just a big cycle. Surprisingly, if you repeat this process 1000 times and take the best answer found over all trials, it passes all the testcases. Code 34649661
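Here is a minimal sketch of the random restart plus single improvement pass described above (just the counting part; a complete solution to 723E also has to output the chosen directions):

#include<bits/stdc++.h>
using namespace std;

//vertices are 0..n-1; edges are undirected pairs (u, v)
int bestBalanced(int n, const vector<pair<int,int>>& edges, int trials){
    mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
    int best=0;
    for(int t=0; t<trials; t++){
        vector<int> dir(edges.size()), out(n,0), in(n,0);
        for(size_t i=0; i<edges.size(); i++){
            dir[i]=rng()&1; //random orientation
            int u=edges[i].first, v=edges[i].second;
            if(dir[i]) swap(u,v);
            out[u]++; in[v]++;
        }
        //one improvement pass: flip an edge when both endpoints benefit
        for(size_t i=0; i<edges.size(); i++){
            int u=edges[i].first, v=edges[i].second;
            if(dir[i]) swap(u,v); //the edge currently points u -> v
            if(out[u]>in[u] && in[v]>out[v]){
                dir[i]^=1;
                out[u]--; in[u]++; in[v]--; out[v]++;
            }
        }
        int balanced=0;
        for(int v=0; v<n; v++) balanced+=(out[v]==in[v]);
        best=max(best,balanced);
    }
    return best;
}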

Conclusion

This approach feels like a hill-climbing algorithm. We start at a random position and head to a local optimum; to increase the odds of finding a global optimum we choose a variety of different starting positions. It could be possible that a carefully constructed graph would cause this approach to fail with high probability. However, I do not know how to construct such a graph.

Score: +39

By usernameson, history, 7 years ago, In English

I am looking for something with a greater emphasis on constructing randomised algorithms to solve problems. The material I have found so far has a lot of probability theory and only a few examples of randomised algorithms that don't seem particularly useful for competitive programming, e.g. randomised quicksort.

Score: +17

By usernameson, history, 7 years ago, In English

What algorithms have names that you don't like and why? If you could change their names what would the new names be?

My choices:

Dijkstra: It is too hard to pronounce. I think it should be called Dijimon.

Sprague-Grundy: I don't like the name "Grundy"; it sounds too uncouth. I would prefer just Sprague. I also like the term nimbers, so the Nimber theorem would also be acceptable.

Score: -53

By usernameson, history, 7 years ago, In English

Somehow I managed to solve 901B - GCD of Polynomials; however, I don't know why my solution works. My idea was to pick two random polynomials of degrees $$$n$$$ and $$$n-1$$$, check how many steps it takes to find the gcd, and return the polynomials if the answer is $$$n$$$. However, doing the gcd over $$$\mathbb{Q}$$$ is a pain because of fractions. If you try to avoid them by multiplying through by denominators, things overflow pretty quickly.

So I decided to pick a prime and do gcds over the finite field $$$\mathbb{F}_p$$$. The strange thing is that with a medium-sized prime like 1009 the algorithm took more steps over this field than over $$$\mathbb{Q}$$$. Then I tried using the prime 43 just to see what would happen, since it worked on the case where 1009 gave an incorrect answer. Surprisingly, using 43 worked for all the cases. Anyone know of a special relationship between the number of steps the Euclidean algorithm for polynomials takes over $$$\mathbb{Q}$$$ and over $$$\mathbb{F}_p$$$ for a given prime $$$p$$$?

code
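Setting the collapsed code aside, here is a rough sketch of just the step-counting part over $$$\mathbb{F}_p$$$. The representation (coefficient vectors, lowest degree first, no trailing zeros) is a choice made for this sketch, and the random generation with 901B's exact coefficient constraints is omitted.

#include<bits/stdc++.h>
using namespace std;

long long modpow(long long b, long long e, long long p){
    long long r=1%p;
    for(b%=p; e; e>>=1, b=b*b%p)
        if(e&1) r=r*b%p;
    return r;
}

//number of steps the Euclidean algorithm takes on a and b over F_p;
//polynomials are coefficient vectors, lowest degree first, no trailing zeros
int gcdSteps(vector<long long> a, vector<long long> b, long long p){
    int steps=0;
    while(!b.empty()){
        //replace a by a mod b via long division over F_p
        long long inv=modpow(b.back(), p-2, p); //inverse of the leading coeff
        while(a.size()>=b.size()){
            long long q=a.back()*inv%p;
            size_t shift=a.size()-b.size();
            for(size_t i=0; i<b.size(); i++)
                a[i+shift]=((a[i+shift]-q*b[i])%p+p)%p;
            while(!a.empty() && a.back()==0) a.pop_back(); //trim the degree
        }
        swap(a,b);
        steps++;
    }
    return steps;
}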

Score: +2

By usernameson, history, 8 years ago, In English

Are there any good rules of thumb for when a square root decomposition will be too slow? I suspect that with an interval of length $$$10^5$$$ and $$$3 \times 10^5$$$ queries a square root decomposition is too slow, but this is only based on my attempts to solve http://codeforces.me/problemset/problem/765/F with a square root decomposition, where the complexity I calculated for my algorithm still exceeded the time limit. In general, given an interval of length $$$N$$$ and $$$Q$$$ queries, what sort of values do you want to see in order to be confident that a square root decomposition will pass the time constraint? Also, are there standard limits used on Codeforces deliberately set to prevent square root decomposition based approaches?
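For a rough sense of scale: with $$$N = 10^5$$$ and $$$Q = 3 \times 10^5$$$, an $$$O(Q\sqrt{N})$$$ algorithm performs about $$$3 \times 10^5 \cdot 316 \approx 9.5 \times 10^7$$$ basic operations, which is already borderline against the common rule of thumb of roughly $$$10^8$$$ simple operations per second; an extra log factor or a heavy constant pushes it well over a typical limit.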

Score: +7

By usernameson, history, 8 years ago, In English

Has anyone looked into the problem of determining the algorithmic complexity of a solution to a problem using the maximum solving time on test cases of different sizes? I imagine you could use least squares where the basis functions include some polynomials, some powers of log, an exponential, and products of these up to a certain fixed size. It might be an interesting thing to calculate for complex solutions.
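As a minimal sketch of one very simple variant of this idea, one can fit $$$t \approx c \cdot g(n)$$$ by least squares for each candidate growth function $$$g$$$ separately and compare residuals (the measurements below are made-up placeholders):

#include<bits/stdc++.h>
using namespace std;

int main(){
    //hypothetical (input size, max solving time in seconds) measurements
    vector<pair<double,double>> data={
        {1e3,0.011},{1e4,0.13},{1e5,1.6},{1e6,19.0}};

    //candidate growth functions
    vector<pair<string,function<double(double)>>> models={
        {"n",      [](double n){return n;}},
        {"n log n",[](double n){return n*log2(n);}},
        {"n^2",    [](double n){return n*n;}}};

    for(auto& [name,g]:models){
        //least squares for t = c*g(n): c = sum(t*g)/sum(g*g)
        double tg=0, gg=0;
        for(auto& [n,t]:data){ tg+=t*g(n); gg+=g(n)*g(n); }
        double c=tg/gg, residual=0;
        for(auto& [n,t]:data) residual+=pow(t-c*g(n),2);
        cout<<name<<": c = "<<c<<", residual = "<<residual<<"\n";
    }
    return 0;
}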

Score: +11

By usernameson, history, 8 years ago, In English

Overview

In this post I will explain, with an example, one way to code the Edmonds-Karp max flow algorithm to solve a problem.

The Problem

The problem we study is Array and Operations http://codeforces.me/problemset/problem/498/C. The basic idea is that you have a sequence of numbers and a bunch of special pairs taken from the sequence. For any of the pairs you can divide both of its numbers by a common factor greater than 1 and replace the elements of the pair with the results of this division. The question asks for the maximum number of times you can do this.

Converting this to a max flow problem

This section is specific to the problem. You can skip it if you just want to know how to implement Edmonds-Karp on a given flow network.

Before we convert this problem to a max flow problem we make some observations. First, if we have a pair of numbers, the maximum number of times we can perform the described operation on this pair is equal to the number of prime divisors the pair has in common, counting multiplicity (i.e. if the same prime divides both numbers multiple times we count it multiple times). Next, we note this is the same as the number of prime divisors of the greatest common divisor of the pair, counting multiplicity.

Next, assume we have one number that is paired with multiple other numbers. To find the maximum number of times we can perform the operation on this number, we first find the gcd between it and its first pairing and count the number of prime divisors of this gcd. We then replace the original number by the original number divided by the gcd we found and repeat the process with the second pairing, and so on. We could instead find the gcd of the number and the product of all the numbers paired with it and then count the prime divisors of this gcd to get the same result; however, that approach may lead to overflow issues.

Now we know, for any given number, the maximum number of times we can perform the operation on a given pair and the maximum number of times we can perform the operation on the number in total. If we think of flow as the number of times we perform the operation, we can construct a flow network as follows. We let each number that appears in a special pair have two nodes associated with it; call them the left node and the right node. The left node has a directed edge running from the source to it with capacity equal to the maximum number of times the operation can be performed on this number in total. The right node has a directed edge running from it to the sink with the same capacity. Next, for each number in a special pair, we connect its left node to the right node of the number with which it is paired, with a directed edge from left to right whose capacity equals the number of times we can perform the operation on this pair. This gives us our flow network. We mention that it is different from the flow network given in the editorial, but it also works.

Of course, to do all this we need functions to calculate the number of prime divisors and greatest common divisors. For the greatest common divisor, the Euclidean algorithm is fairly easy to implement. Since the numbers given can go up to $$$10^9$$$, calculating the number of prime divisors is harder. One approach is to calculate all primes up to 1000 by brute force, then use those as a sieve to get all primes up to 32000. Finally, since $$$32000^2 > 10^9$$$, with all these primes we can find the number of prime divisors of any number up to $$$10^9$$$.

Set up three graphs

Now that we have our network and edge capacities, we create three graphs, stored in adjacency matrix form; a 2-dimensional array or a vector<vector<int>> works here. The first graph is the flow, where all entries are set to 0. The second is the residual, where each edge value is set to its capacity. The third is the network graph, where we have a 1 if two vertices are connected and 0 otherwise.

Find an augmenting path

Next we create a function to find an augmenting path that returns a pair<int, vector<int>>, where the first element is the capacity of the path and the second is the path itself. To find the path we perform a breadth first search starting from the source. We only need to store the predecessor of each vertex, not the distance. With the predecessors it is easy to recover a path by backtracking from the sink. If there is no path we just return an empty pair and deal with it later when we augment along the path. Finally, we find the capacity of the path by taking the minimum over the capacities of all edges included in the path.

Augmenting along the path

Next we create a function that takes the path found by the previous function and updates both the flow and the residual. It is useful to let this function return a bool which is true if we made an update and false otherwise. First we call our previous function to find a path. If the path is the empty pair we return false. Otherwise, consider an edge from point x to point y in the path. For the residual, we decrease the capacity of the edge from x to y by the capacity of the path and increase the capacity of the edge from y to x by the capacity of the path. For the flow, if the edge from x to y is in the network graph we increase the flow from x to y by the capacity of the path; otherwise we decrease the flow from y to x by the capacity of the path. We do this for all edges in the path and then return true.

Finding the maximal flow

To find the maximal flow we just keep calling the augment function until it returns false.

Find the answer

To find the answer we first find the total flow by adding up all the flows leaving the source. (We could also have accumulated the total flow as we were augmenting.) Finally, to get the answer we divide the total flow by 2, since our network counts the application of each operation twice.

Code


#include<algorithm>
#include<vector>
#include<list>
#include<iostream>
#include<utility>
#include<string>
#include<set>
#include<queue>
#include<stack>

using namespace std;

//ps stores all primes below 32000
vector<long long> ps;

//graph is the network graph, fl is the flow,
//res is the residual
vector<vector<int>> graph,fl,res;

//find an augmenting path
pair<int, vector<int>> getPath(){
    pair<int, vector<int>> ans;
    int sz=fl.size();

    //find predecessors using a breadth first search
    vector<int> preds(sz,-1);
    preds[0]=0;
    queue<int> q;
    q.push(0);
    while(!q.empty()){
        int cur=q.front();
        q.pop();
        for(int i=0; i<sz; i++){
            if(res[cur][i] && preds[i]==-1){
                preds[i]=cur;
                q.push(i);
            }
        }
    }

    //if there is no path to the sink
    //return an empty answer
    if(preds[sz-1]==-1) return ans;

    //create an augmenting path
    //from the predecessors
    int pos=sz-1;
    vector<int> vi;
    while(pos){
        vi.push_back(pos);
        pos=preds[pos];
    }
    vi.push_back(0);
    reverse(vi.begin(),vi.end());

    //find the capacity of the path
    int cap=res[vi[0]][vi[1]];
    for(int i=1; i<vi.size()-1; i++){
        cap=min(cap,res[vi[i]][vi[i+1]]);
    }
    return make_pair(cap,vi);
}

//update the residual and flow
//along an augmenting path
bool augment(){
    //find a path
    auto path=getPath();
    auto vi=path.second;

    //return false if the path is empty
    if(!vi.size()) return false;

    //get the capacity of the path
    int cap=path.first;
    for(int i=0; i<vi.size()-1; i++){
        //update the residual
        res[vi[i]][vi[i+1]]-=cap;
        res[vi[i+1]][vi[i]]+=cap;

        //update the flow
        if(graph[vi[i]][vi[i+1]]){
            fl[vi[i]][vi[i+1]]+=cap;
        }
        else{
            fl[vi[i+1]][vi[i]]-=cap;
        }
    }
    return true;
}

//count the number of prime
//divisors in a number
int divCt(long long num){
    if(num==1) return 0;
    int ct=0;

    //isP checks whether we are left with
    //a prime greater than 32 000
    bool isP=false;
    int cur=num;

    //keep dividing by primes until we get
    //1 or a prime greater than 32 000
    while(cur>1 && !isP){
        bool found=false;
        for(int i=0; i<ps.size() && !found; i++){
            if(!(cur%ps[i])){
                cur=cur/ps[i];
                found=true;
                ct++;
            }
        }
        if(!found && cur>1) isP=true;
    }
    return ct+isP;
}

//get the gcd of two numbers using the
//Euclidean algorithm
long long gcd(long long a, long long b){
    if(b>a) return gcd(b,a);
    while(b>1){
        int r=a%b;
        if(!r) return b;
        a=b;
        b=r;
    }
    return 1;
}

int main(){
    vector<int> psj;

    //find primes below 1000
    //by brute force and store
    //them in psj
    for(int i=2; i<1000; i++){
        int cnt=0;
        for(int j=2; j<i; j++){
            if(!(i%j)){
                cnt++;
            }
        }
        if(!cnt){
            psj.push_back(i);
        }
    }

    //find primes below 32 000
    //using psj to sieve
    vector<bool> psb(32000,true);
    psb[0]=false;
    psb[1]=false;
    for(int i:psj){
        for(int j=2; i*j<32000; j++){
            psb[i*j]=false;
        }
    }
    for(int i=0; i<32000; i++){
        if(psb[i]){
            ps.push_back(i);
        }
    }

    //store the sequence given in
    //the question
    vector<long long> sequence;
    sequence.push_back(0);
    int n,m;
    cin>>n>>m;
    for(int i=0; i<n; i++){
        int num;
        cin>>num;
        sequence.push_back(num);
    }

    //store whether two points are in a pair or not
    vector<vector<int>> basicGraph(n+1,vector<int>(n+1,0));
    for(int i=0; i<m; i++){
        int x,y;
        cin>>x>>y;
        basicGraph[x][y]=1;
        basicGraph[y][x]=1;
    }

    //initialise the flow to all 0s
    vector<vector<int>> flow(2*n+2,vector<int>(2*n+2,0));
    fl=flow;

    //initialise the residual
    vector<vector<int>> residual(2*n+2,vector<int>(2*n+2,0));

    //fill in values for edges not involving the
    //source or the sink
    for(int i=1; i<=n; i++){
        for(int j=1; j<=n; j++){
            if(basicGraph[i][j]){
                auto num=gcd(sequence[i],sequence[j]);
                residual[i][j+n]=divCt(num);
            }
        }
    }

    //fill in values for edges involving the source
    //or the sink
    for(int i=1; i<=n; i++){
        int ct=0;
        long long cur=sequence[i];
        for(int j=1; j<=n; j++){
            if(basicGraph[i][j]){
                long long other=sequence[j];
                long long num=gcd(cur,other);
                ct+=divCt(num);
                cur=cur/num;
            }
        }
        residual[0][i]=ct;
        residual[n+i][2*n+1]=ct;
    }
    res=residual;

    //fill in edge values for the
    //network graph
    graph=residual;
    for(int i=0; i<2*n+2; i++){
        for(int j=0; j<2*n+2; j++){
            graph[i][j]=min(1,res[i][j]);
        }
    }

    int ans=0;

    //keep augmenting until we cannot
    //find an augmenting path
    while(augment()){}

    //set ans to the max flow
    for(int i=0; i<2*n+2; i++){
        ans+=fl[0][i];
    }
    cout<<ans/2;
    return 0;
}

Score: -13

By usernameson, history, 8 years ago, In English

Introduction

This post is about div 2 question A from Round 406 http://codeforces.me/problemset/problem/787/A. Coding the solution is easy; figuring out why the solution works requires a bit more thought.

The Situation

The question boils down to having four fixed integers $$$a, b, c, d$$$, all between 1 and 100, and finding the smallest non-negative integer pair $$$(n, m)$$$ such that $$$b + an = d + cm$$$. A solution that works is to brute force all integer pairs in $$$[0, 200] \times [0, 200]$$$. (The editorial says sticking with pairs in $$$[0, 100] \times [0, 100]$$$ is sufficient, but the proof given here for the larger range is simpler.)

The Technique Overview

If we want to brute force enough pairs to either find a solution or conclude that one does not exist, we need to know how many pairs are enough. It turns out a technique from number theory can help us. The idea is to assume we have a solution to our equation and use it to generate more solutions. Usually in number theory this is used to show that if a particular equation has one solution it has infinitely many; the next step is then to find a single solution for the equation and conclude it has infinitely many. In our case we assume we have a large solution and show it can be used to generate a smaller solution.

The Technique in Action

Assume we have a pair $$$(n_0, m_0)$$$ such that $$$b + n_0 a = d + m_0 c$$$. Let's perturb our $$$n_0$$$ by adding an integer $$$h$$$ to it. Then we have $$$b + (n_0 + h)a = d + m_0 c + ha$$$. We would like to combine the last two terms on the right hand side, so we let $$$h = tc$$$ for some integer $$$t$$$. Then we have $$$b + (n_0 + tc)a = d + (m_0 + ta)c$$$. Thus given a solution $$$(n_0, m_0)$$$, the pair $$$(n_0 + tc, m_0 + ta)$$$ is another solution for all integers $$$t$$$.

Next we assume we have a solution $$$(n_0, m_0)$$$ with $$$n_0 > 200$$$ and show the smaller solution $$$(n_0 - c, m_0 - a)$$$ is positive. This allows us to restrict our search to solutions with values between 0 and 200 inclusive. First we note that since $$$c \le 100$$$ we have $$$n_0 - c > 100$$$, so the first term is positive. For the second term we note that since this new pair is a solution we have $$$b + (n_0 - c)a = d + (m_0 - a)c$$$. Since $$$c > 0$$$ it is sufficient to prove $$$(m_0 - a)c > 0$$$. Since $$$a > 0$$$, $$$b > 0$$$ and $$$n_0 - c > 100$$$, we can conclude the left hand side of the equation is greater than 100, so $$$d + (m_0 - a)c > 100$$$. Finally, since $$$d \le 100$$$ we conclude $$$(m_0 - a)c > 0$$$ as required.

Did I figure all this out during the contest?

I did not. Once I could not see a simple way to generate the answer I decided to brute force. I suspected something like what I described above would occur, and brute forced all pairs in $$$[0, 999] \times [0, 999]$$$. Fortunately for me doing this worked. My contest submission is below.


#include<algorithm>
#include<vector>
#include<iostream>
#include<utility>
#include<string>
#include<set>
#include<queue>

using namespace std;

int main(){
    int a,b,c,d;
    cin>>a>>b>>c>>d;
    for(int i=0; i<1000; i++){
        for(int j=0; j<1000; j++){
            if(b+a*i==d+c*j){
                cout<<b+a*i;
                return 0;
            }
        }
    }
    cout<<-1;
    return 0;
}

Score: +15

By usernameson, history, 8 years ago, In English

Introduction

In this entry I will show how to use lambda expressions to deal with a fairly common situation that arises in problems. Note that this post is specific to C++11 and beyond, where lambda expressions were introduced to the language.

The Situation

Sometimes in problems it is natural to store inputs in pairs. For example, it may be important to keep track of both the size of an element and where it occurs in the input, or an element may have two important attributes. This usually results in a vector of pairs. It can then be useful to apply an algorithm to this vector of pairs, and this is where lambda expressions shine.

An Example

For an example problem I will use the Codeforces problem The Meeting Place Cannot Be Changed http://codeforces.me/problemset/problem/780/B. The idea of the problem is that you have people at n points, each with a maximum speed, who want to meet, and you have to figure out the minimum time in which they can all meet. My solution is interesting for three reasons. Firstly, and most importantly, it uses a lambda expression to sort a vector of pairs. Secondly, the approach differs from the editorial approach. Thirdly, the solution passes the system tests, but I have a suspicion a specific type of test case would cause it to exceed the time limit.

My approach

I started with the basic observation that the shortest time in which any two given people can meet is equal to the distance between them divided by the sum of their max speeds. To keep track of people's max speeds and positions I stored them as a pair and placed the results in a vector of pairs. Next I concluded that the minimum time in which all people can meet is the maximum over all pairs of these shortest meeting times. Of course, at this point I tried to brute force all pairs and exceeded the time limit.

To speed up the solution I came up with a way to eliminate some pairs while still getting the same result. I used the basic observation that if we have three people p1, p2 and p3, ordered by increasing starting coordinate, and p1 is slower than p2, then it will take longer for p1 to reach p3 than for p2 to reach p3, which means we can ignore the pair (p2, p3) in our calculation. At this point I used a lambda expression to sort the pairs in order of increasing starting position. Then I divided the pairs into two groups. In one group I stored the leftmost pair together with every pair to the right of it whose speed was lower than that of all pairs before it; in the other group I stored the rightmost pair together with every pair to the left of it with the analogous property.

Next I used the brute force approach from before on pairs where the first element is in one group and the second element is in the other. I expected some test to appear where half the elements land in the first group and the other half in the second, resulting in the time limit being exceeded. This did not happen and the solution was successful.

Code

#include<iostream>
#include<vector>
#include<algorithm>
#include<iomanip>
#include<utility>

using namespace std;
int main(){
    int n;
    cin>>n;
    vector<int> x,v;

    //store positions and speeds
    for(int i=0; i<n; i++){
        int a;
        cin>>a;
        x.push_back(a);
    }
    for(int j=0; j<n; j++){
        int b;
        cin>>b;
        v.push_back(b);
    }
    
    //store positions and speeds together in a vector pair
    vector<pair<int,int>> xv;
    for(int i=0; i<n; i++){
        xv.push_back(make_pair(x[i], v[i]));
    }
    

    //the lambda expression
    sort(xv.begin(), xv.end(), [](pair<int,int> vp, 
    pair<int,int> vp2){return vp.first<vp2.first;});
    
    vector<pair<int,int>> lc,rc;
    

    //splitting into the groups described
    int vm=xv[0].second;
    lc.push_back(xv[0]);
    for(auto p:xv){
        if(p.second<vm){
            vm=p.second;
            lc.push_back(p);
        }
        
    }   


    rc.push_back(xv[n-1]);
    int vm2=xv[n-1].second;
    for(int i=n-2; i>0; i--){
        if(xv[i].second<vm2){
            vm2=xv[i].second;
            rc.push_back(xv[i]);
        }
    }
    
    //brute forcing the groups
    double ans=0;
    for(auto p:lc){
        for(auto p2:rc){
            double d=(double)abs(p.first-p2.first)/(p.second+p2.second);
            ans=max(ans,d);
        }
    }
   
    cout<<setprecision(15)<<ans;
    
    return 0;
}

Final Comments

There are many other approaches to this situation that do not require lambda expressions with pairs. You could create your own class with its own methods or overloaded operators for interacting with STL algorithms. You could define a function, function object, or overloaded operator that takes two pairs as arguments and use STL algorithms with these. However, once you understand the syntax for writing simple lambda expressions to use with STL algorithms, I think you will find they are the least tedious of all the options to code.

Score: +4

By usernameson, history, 8 years ago, In English

Introduction

Generally bitmasking is used to solve optimisation problems with the following procedure: store all possible states, then pick the best one. Doing this cleverly can sometimes reduce a brute force approach that takes $$$O(N!)$$$ to something that takes $$$O(2^N)$$$. However, if you apply the standard bitmasking approach to counting problems you can run into issues.

A Simple Case

Let's consider a simple case where we have three balls, two red and one blue, and two problems. For the first problem we have a function f that takes balls and spits out a value, and we want to optimise f. For the second problem we want to count the number of selections of balls we can make, where the order of the selection does not matter and the two red balls are indistinguishable. Bitmasking in the usual way works perfectly fine for the first problem. However, if we encode each ball with its own bit and try to solve the second problem, we will get an answer that is too large. Basically, our encoding overlooks the fact that the red balls are indistinguishable.

A Solution

To solve this problem we can still reserve the first two bits for red balls. However, instead of letting 010 and 001 denote the cases where one particular red ball or the other is selected, we let 001 denote the case where 1 red ball is selected and 010 the case where 2 red balls are selected. Of course, doing this means 011 would denote the case where three red balls are selected. Since we only have two red balls, the bitmask 011 is not valid in our case, so for counting problems we need to keep track of which bitmasks are valid and check for validity before doing things with them.

A real problem

I used the approach outlined above to solve colorfulGardenHard from topcoder https://community.topcoder.com/stat?c=problem_statement&pm=14247. The basic idea of the question is that you have two strings of length at most 15. You want to find how many distinct rearrangements of the letters of the first string there are such that no two adjacent letters in the first string are the same and no letter in position i of the first string is the same as the letter in position i of the second string. Also, because answers can get big, we have to return our answer mod 1,000,000,007.

My approach

First we store some basic facts about the first string, such as which characters appear and how many times they appear. With this information we figure out where to store the count for each character of a selection in a bitmask. We also create a function called isValid to check whether a given bitmask represents a selection that exceeds the available count for some character. We also give isValid a parameter n that checks whether the number of elements in the selection represented by the bitmask is equal to n. This is useful later for the dynamic programming part.

Now we get to the dynamic programming part. We consider what happens when we have a character sequence of length n that is valid so far, that uses a collection of elements represented by a bitmask k, and that ends in a character represented by j. If we have another character represented by j', we can put j' at the end to give a sequence of length n+1 that is valid so far, provided three conditions hold. Two of the conditions are given in the question: namely, j and j' cannot be equal, and j' cannot correspond to the character in the (n+1)-th position of the second string. The third condition is that the new bitmask we get by appending j' in this manner must be valid. To get the new bitmask we take the previous bitmask and add 1<<start[j'] to it, i.e. we figure out where the count for the character j' is encoded in the bitmask, go there, and add 1 to the count. To test for validity we use our isValid function.

Using the above reasoning we do our dynamic programming using an array where dp[i][k][j] stores the number of sequences of length i ending in the character represented by j and using a collection of characters represented by the bitmask k. We initially set everything in the array to 0, then loop based on the procedure above (in my code I filled row 1 first and started the loop at row 2, however looping immediately should also work).

The rest is just a matter of summing and taking the modulo diligently to avoid overflow issues when things get large.

code

#include<iostream>
#include<vector>
#include<algorithm>

const int MOD=1e9+7;

using namespace std;
class ColorfulGardenHard{
    vector<int> pos, starts, charCt;
    
    //check if k corresponds to a mask with elem elements
    //and the character count encoded in the mask
    //does not exceed the total number of characters available
    bool isValid(int k, int elems){
        
        int tot=0;
        for(int i=0; i<pos.size(); i++){
            int ct=charCt[pos[i]];
            if(i==pos.size()-1){
                tot+=k>>starts[i];
                if(k>>starts[i]>ct)
                    return false;
            }
            else{
                int s=starts[i];
                int e=starts[i+1];
                int comp=k>>s;
                int m=((1<<(e-s))-1);
                
                
                comp=comp&m;
                tot+=comp;
                if(comp>ct)
                    return false;
            }
           
       }
        return tot==elems;
    }
                
                
                
        
        
    public:
    int count(string garden, string forbid){
       
        //vi stores the counts of every character for a-z 
        //appearing in garden we then store this as charCt
        //so isValid() can use it

        vector<int> vi(26,0);
        for(char c:garden){
            vi[c-'a']++;
        }
        charCt=vi;
        

        //pos stores the locations of characters
        //that appear in the string i.e ignore those
        //with count 0
        for(int i=0; i<vi.size(); i++){
            if(vi[i])
                pos.push_back(i);
        }
        
        int k=0;

        //starts stores the starting bit position we assign
        //to each character's count in the bitmask
        //representation
        //this is the key difference between optimisation
        //problems and counting problems
        //since we are storing counts of elements 
        //we may need multiple bits to do so

        for(int i:pos){
            starts.push_back(k);
            int j=0;                
            int ct=vi[i];
            while(ct>=1<<j){
                k++;
                j++;
            }
        }

        //here is where we use dynamic programming
        //dp[i][k][j] stores the number of sequences of length i
        //that make use of the character counts represented by the
        //bitmask k and end in the character determined by j

        
        
        vector<vector<vector<long long>>> dp(garden.size()+1,vector<vector<long long>>((1<<16),vector<long long>(pos.size(),0)));
        
        long long ans=0;
        for(int j=0; j<pos.size(); j++){
            if(forbid[0] !=(char)('a'+pos[j]))
               dp[1][1<<starts[j]][j]=1;            
        }
        for(int i=1; i<garden.size(); i++){
            for(int j=0; j<pos.size(); j++){
                for(int k=0; k<1<<16; k++){
                    if(dp[i][k][j]){
                        for(int j2=0; j2<pos.size(); j2++){
                            if(j!=j2 && forbid[i] !=(char)('a'+pos[j2]) && isValid(k+(1<<starts[j2]),i+1))
                               dp[i+1][k+(1<<starts[j2])][j2]+=dp[i][k][j]%MOD;
                        }
                    }
                }
            }
        }
        

       //getting the answer by summing the number of valid sequences
       //of length garden.size() that end in j
       //Although the sum involves k there will only be one value
       //of k that is non zero namely the value the corresponds
       //to using all the characters in the string
        
        for(int j=0; j<pos.size(); j++){
            for(int k=0; k<1<<16; k++){
                ans+=dp[garden.size()][k][j]%MOD;
                ans%=MOD;
                                   
            }
        }
        return ans;
        
    }
    
};

Score: +19

By usernameson, history, 8 years ago, In English

Disclaimer

I do not know if this technique has another name. I came up with it to solve specific types of problems, but it has probably been used by many others before.

The situation

Assume we are given a sequence of numbers $$$a_1, a_2, \ldots, a_{n-1}, a_n$$$ and we have a function $$$f$$$ that is well defined for all sequences of numbers, with the property $$$f(b_1, \ldots, b_m) = f(f(b_1, \ldots, b_{m-1}), b_m)$$$ for any sequence $$$b$$$, and the range of $$$f$$$ has size $$$K$$$, which is relatively small. Our goal is to rearrange the numbers $$$a_1, \ldots, a_n$$$ so that when the sequence is entered into $$$f$$$ in this order, the value of $$$f$$$ is optimised (minimised or maximised).

The solution

If we try to brute force all permutations of the sequence there are $$$n!$$$ options, which is usually too slow. However, what we can do is use dynamic programming and, for each sequence of $$$m$$$ terms ending in $$$a_j$$$, store all possible values of $$$f$$$. Since we know the range of $$$f$$$ has size $$$K$$$, at each step we are storing at most $$$K$$$ values.

Cost analysis

To calculate all possible values of $$$f$$$ for sequences of $$$m + 1$$$ terms ending in $$$a_j$$$, given that we have calculated all possible values of $$$f$$$ for sequences of $$$m$$$ terms, we need at most $$$(n - 1)K$$$ calculations. Doing this for each ending term gives at most $$$n(n - 1)K$$$ calculations to get all possible values of $$$f$$$ for sequences of $$$m + 1$$$ terms. Doing it $$$n$$$ times, since we need sequences of length $$$n$$$, gives at most $$$n^2(n - 1)K$$$. Finally, finding the optimised value requires at most $$$nK$$$ more calculations. So at worst we have $$$n^2(n - 1)K + nK$$$ calculations.

Noticing the situation

The nature of the function is what determines whether this situation occurs. If the function is based on bitwise operations and the values in the sequence are small, there is a good chance this technique applies. If the function is based on arithmetic modulo $$$N$$$ where $$$N$$$ is small, the technique might also apply.

Worked Example

I used this technique to solve the problem trysail from topcoder. https://community.topcoder.com/stat?c=problem_statement&pm=14307. The general idea of the question is that you have a sequence of at most 50 numbers between 0 and 255, and you want to split them into three non-empty groups so that when you take the xor of each group and sum the results, the total is maximised.

My approach: first we note that when we xor any group of elements the result will be an integer between 0 and 255 inclusive. Next, xor clearly satisfies the property required of $$$f$$$ given above. These are strong hints that we can use the function range trick. Next we need to figure out a way to deal with three groups. Since the numbers are small we can store the xor of each group in a single int, letting the first 8 bits store group 1, the next 8 bits store group 2, and the next 8 bits store group 3. We also note that since we are only concerned with maximising the sum, we can treat certain groupings as equivalent to reduce the number of cases to consider.

Now assume that for the first i elements we have stored the xors of all possible groupings, and we want to add the (i+1)-th element and get the xors of all possible groupings of the first i+1 elements. It turns out that for each existing grouping there are six ways to do this up to equivalence: we either add the new element to one of the three existing groups, or give the new element its own group and combine the three previous groups into two. This is where the function range trick comes into play. Even though it looks like we are generating a lot of new groupings every time we add a new element, the limited range of xor means many of these new groupings result in the same value. Implementation-wise we can use a set for storage, so repeated elements are only stored once.

Code

#include<set>
#include<vector>
#include<algorithm>
#include<iostream>
using namespace std;

class TrySail{
        
    public:
    int get(vector<int> v){
        set<int> prev; 
        
        //put the first three elements in a separate group
        prev.insert(v[0]+(v[1]<<8)+(v[2]<<16));

        //given all possible xors when i elements are grouped
        //store all possible xors when i+1 elements are grouped
        for(int i=3; i<v.size(); i++){
            set<int> next;
            
            for(int j:prev){
                next.insert(j^v[i]);
                next.insert(j^(v[i]<<8));
                next.insert(j^(v[i]<<16));
                
                int x=j%(1<<8);
                int y=(j>>8)%(1<<8);
                int z=j>>16;
                next.insert(v[i]+((x^y)<<8)+(z<<16));
                next.insert(v[i]+((x^z)<<8)+(y<<16));
                next.insert(v[i]+((y^z)<<8)+(x<<16));
            }
            prev=next;
        }
        int ans=0;

        //find the largest answer
        for(int i:prev){
            int x=i%(1<<8);
            int y=(i>>8)%(1<<8);
            int z=i>>16;
            
            
            int sum=x+y+z;
            ans=max(ans,sum);
        }
        
        return ans;
    }
};

As a final note, this code passes the system tests, but it gets quite close to the time limit for certain inputs with 50 elements.

Tags: dp, math
Score: +39