Hi —
Recently I submitted this solution to a polynomial multiplication problem using FFT: POLYMUL.
Even on my own machine this solution runs very slowly, and I can't identify the reason. I'm wondering if anyone can help optimize it; there must be something wrong for it to run so slowly.
Thanks in advance, Disguised
The main optimization you can do, in my opinion, is to set a higher base-case threshold. That is, instead of recursing down to a single element, switch to a direct O(N²) evaluation once the number of elements drops below 32 (32 is usually a good threshold ^^). In my experience (not only with FFT, but also with Karatsuba and the like) this makes a huge difference.
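The threshold idea can be sketched as follows, assuming a cp-algorithms-style recursive FFT; the names (fft, THRESHOLD) and the value 32 are illustrative, not taken from the submitted solution:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;
const double PI = std::acos(-1.0);
const int THRESHOLD = 32;  // tune per machine; 32 is the value suggested above

// Recursive FFT (n must be a power of two). Instead of recursing all the
// way down to n == 1, switch to a direct O(n^2) DFT once n <= THRESHOLD.
void fft(std::vector<cd>& a, bool invert) {
    int n = a.size();
    if (n <= THRESHOLD) {
        double ang = 2 * PI / n * (invert ? -1 : 1);
        std::vector<cd> res(n);
        for (int k = 0; k < n; k++)
            for (int j = 0; j < n; j++)
                res[k] += a[j] * cd(std::cos(ang * j * k), std::sin(ang * j * k));
        if (invert)                    // the base case must undo its own
            for (cd& x : res) x /= n;  // share of the 1/n normalization
        a = res;
        return;
    }
    std::vector<cd> a0(n / 2), a1(n / 2);
    for (int i = 0; 2 * i < n; i++) {
        a0[i] = a[2 * i];
        a1[i] = a[2 * i + 1];
    }
    fft(a0, invert);
    fft(a1, invert);
    double ang = 2 * PI / n * (invert ? -1 : 1);
    cd w(1), wn(std::cos(ang), std::sin(ang));
    for (int i = 0; 2 * i < n; i++) {
        a[i] = a0[i] + w * a1[i];
        a[i + n / 2] = a0[i] - w * a1[i];
        if (invert) {        // each merge level above the base case
            a[i] /= 2;       // contributes a factor 1/2 to the inverse
            a[i + n / 2] /= 2;
        }
        w *= wn;
    }
}
```

One detail worth noting: in the inverse transform the base case has to divide by its own size n, because the merge levels above it only contribute a factor of 2 each, so the normalizations multiply out to the full 1/N.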
Start with getting rid of:

- push_back: it's O(1) amortized, but it does reallocations. Pre-allocate the memory (or just use vector.reserve).
- complex<double>: it's slow for some reason; implement the same class yourself.

I'm not sure these are the most important things, but they were the first that came to mind. By the way, here you can find the FFT implementation from SPb SU 4's notebook.
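Both suggestions can be sketched roughly like this; the struct and function names are illustrative, not taken from the notebook linked above:

```cpp
#include <cassert>
#include <vector>

// A minimal hand-rolled complex type: just the operations an FFT needs,
// as small inline struct members (no library calls, no extra overhead).
struct Complex {
    double re, im;
    Complex(double re = 0, double im = 0) : re(re), im(im) {}
    Complex operator+(const Complex& o) const { return {re + o.re, im + o.im}; }
    Complex operator-(const Complex& o) const { return {re - o.re, im - o.im}; }
    Complex operator*(const Complex& o) const {
        return {re * o.re - im * o.im, re * o.im + im * o.re};
    }
};

// The pre-allocation suggestion: reserve once, then push_back freely.
std::vector<Complex> make_coefficients(int n) {
    std::vector<Complex> a;
    a.reserve(n);  // one allocation up front instead of repeated reallocations
    for (int i = 0; i < n; i++)
        a.push_back(Complex(i));  // illustrative fill
    return a;
}
```

With reserve, push_back never has to copy the whole vector to a bigger buffer while filling it, and the hand-rolled Complex keeps every arithmetic operation inlineable.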
Looking at the std::complex implementation, I noticed that there are template specializations for the standard floating-point types that use the complex operations from the C library. It seems to be slow because of the additional function calls.
I tested the FFT with a custom double "implementation" (i.e. a wrapper class around double with all operators overloaded) inside std::complex, and it shows the same performance as the custom complex class. But this version is even bigger than the one with the custom complex class, so it can't be used to shorten the code.

P.S. Maybe there is another way to override the template specialization besides creating a custom class, but I don't know it.
vector.preserve sounds interesting.

Fixed
http://uoj.ac/problem/34/statistics
Oops, you are not first anymore...
Could anyone provide FFT and NTT implementations in Java, please?
joney000 Do you still need it?
I have finished it by referring to some C++ implementations: source code