I often (well, not that often, but still often) see things like this while hacking:
#define MIN(a,b) (a) < (b) ? (a) : (b)
#define MAX(a,b) (a) > (b) ? (a) : (b)
#define ABS(a) (a) > 0 ? (a) : -(a)
While these are not as common as other (dubious) preprocessor macros, I still see them used fairly often. In my opinion there are several downsides to using them: for one, if an argument is a function call, it can get evaluated twice.
So I want to ask: is there any advantage to using these over std::min(a, b) and the others?
Maybe it's a legacy from ancient times, when those people coded in C.
The standard min function compares two variables of the same data type; it's a compile-time error if you try to compare different types (int with long long, or float with double). The macro helps you avoid that!
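For instance, a quick sketch of what does and doesn't compile (MIN as defined at the top of the post):

#include <algorithm>

#define MIN(a,b) (a) < (b) ? (a) : (b)

int main() {
    int a = 3;
    long long b = 5;
    // std::min(a, b);                        // compile error: deduction finds
    //                                        // conflicting types int / long long
    long long c = std::min<long long>(a, b);  // fine with an explicit template argument
    long long d = MIN(a, b);                  // the macro compiles: usual arithmetic conversions apply
    return (int)(c + d);
}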
Good point, but from a type-safety viewpoint I'm not sure that's a good idea. It sounds like unexpected behaviour could happen in certain situations.
Dude, it's competitive programming. Who cares about type-safety?
I care. It prevents bugs. If I take the min of an int and a double, that's probably not what I intended. If it was, I can add an explicit cast at little cost; if it wasn't, I've saved the long time I would have spent finding that bug.
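A sketch of what that explicit cast looks like:

#include <algorithm>

int main() {
    int i = 3;
    double d = 2.5;
    // std::min(i, d);  // compile error: conflicting types int / double
    double m = std::min(static_cast<double>(i), d);  // the cast states the intent; m == 2.5
    return (int)m;
}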
I'm not really sure about this, but maybe an if is a bit faster than calling a function? Apparently it is... a bit.
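A sketch of the kind of loop being timed, assuming the MIN macro from the top of the post and volatile operands to keep the compiler from folding the loop away:

#include <cstdio>

#define MIN(a,b) (a) < (b) ? (a) : (b)

int main() {
    volatile int x = 1, y = 2;  // volatile forces a real load on every access
    long long sum = 0;
    for (long long i = 0; i < 1000000000LL; i++)  // 10^9 iterations
        sum += MIN(x, y);
    printf("%lld\n", sum);
}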
This code takes on average 2.31 s to run when compiled without -O2, and 2.20 s with, on my computer. Swapping the #define for #include <algorithm> and MIN for std::min gives on average 3.32 s without -O2 and 2.60 s with. And that was with volatile used to prevent optimizing.
So if you optimize like Codeforces does, it's about a 400 ms difference over 10^9 iterations. It's entirely negligible if you do it a sane number of times.
Actually, std::min and std::max are exactly the same as an if on an optimizing compiler: https://godbolt.org/g/hYJ8dJ
In your case I suggest removing volatile and looking at the produced assembly output instead. It should be the same too.
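Something along these lines (a sketch of the comparison, not the linked snippet itself):

#include <algorithm>

int min_if(int a, int b) { if (a < b) return a; return b; }

int min_std(int a, int b) { return std::min(a, b); }

// With -O2, both functions typically compile to identical branch-free code
// (a compare plus a conditional move on x86-64).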
Let's suppose you call something like a = MIN(a, foo(x, y, z)), where foo() is a complicated and time-consuming function. The macro replaces that with a < foo(x, y, z) ? a : foo(x, y, z), which means you have the potential to call foo() twice. And it gets worse if foo() depends on global variables or changes them.
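To make that concrete, a sketch where foo is a hypothetical stand-in with a visible side effect:

#include <cstdio>

#define MIN(a,b) (a) < (b) ? (a) : (b)

int foo(int x) {
    puts("foo called");  // the side effect exposes every evaluation
    return x * x;
}

int main() {
    int a = 100;
    a = MIN(a, foo(7));     // foo(7) appears twice in the expansion; here both calls run
    printf("a = %d\n", a);  // output: "foo called" twice, then a = 49
}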
Long story short: stick with the std::min() and std::max() functions, or use an if statement instead. There are virtually NO advantages to using those deprecated macros with today's optimizing C++11 compilers.
Take care! :)
Also, writing your own code means more bugs. For example, consider how the expansion interacts with the surrounding expression when you use the above macros.
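Because the macro bodies are not wrapped in an outer pair of parentheses, even a plausible-looking use parses wrong (a sketch):

#define MIN(a,b) (a) < (b) ? (a) : (b)

int main() {
    int a = 1, b = 5;
    int c = 2 * MIN(a, b);  // expands to: 2 * (a) < (b) ? (a) : (b)
                            // parsed as:  (2 * a < b) ? a : b, so c == 1 instead of 2
    return c;
}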
MIN(x++, y++) may also lead to interesting results (see the sketch below).
It's especially fun if foo() reads from input!
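A minimal check of the x++ case (using the MIN macro from the top of the post):

#include <cstdio>

#define MIN(a,b) (a) < (b) ? (a) : (b)

int main() {
    int x = 1, y = 5;
    int m = MIN(x++, y++);  // expands to (x++) < (y++) ? (x++) : (y++)
    printf("m=%d x=%d y=%d\n", m, x, y);  // prints m=2 x=3 y=6: the smaller operand
                                          // is incremented twice, and m is its value
                                          // after the first increment, not the minimum
}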
That's an awful thing. Consider the following segment tree implementation:
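A sketch of the kind of code presumably meant: a recursive range-max query that combines its children through the MAX macro:

#include <algorithm>
#include <cstdio>

#define MAX(a,b) (a) > (b) ? (a) : (b)

const int N = 1 << 17;  // hypothetical tree size, for illustration
int t[2 * N];

int query(int v, int tl, int tr, int l, int r) {  // range max over [l, r]
    if (l > r) return -1000000000;
    if (l == tl && r == tr) return t[v];
    int tm = (tl + tr) / 2;
    // MAX evaluates both arguments for the comparison and then evaluates the
    // winner a second time, so a partially covered child can be queried twice.
    // The calls can double at every level: the query degrades from O(log n) to O(n).
    return MAX(query(2 * v, tl, tm, l, std::min(r, tm)),
               query(2 * v + 1, tm + 1, tr, std::max(l, tm + 1), r));
}

int main() {
    printf("%d\n", query(1, 0, N - 1, 1, N - 2));  // the answer is right, but each query costs O(n)
}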
What do you think is the complexity of this implementation?
Is it O(n)?
For sure.
Maybe people use these examples to teach how to create "functions" with defines, but they don't make it clear that these functions already exist. Or at least that's what happened to me.