AlexLuchianov's blog

By AlexLuchianov, 3 years ago, In English

"I used to be a competitive programmer like you, but then I took a macro to the knee."

First, I want to say that I recognize that not all macros are bad; there are even some ingenious macros that are genuinely useful, such as the all(x) macro from the post C++ tips and tricks by Golovanov399. Another useful macro is NDEBUG, which disables all assertion checks inside the code. It can be used like this:

#define NDEBUG
#include <cassert>

Note that NDEBUG has to be defined before including cassert in order to have an effect. Other useful macros are debug macros; a little more information about them can be found in the blog Compilation and debugging tutorial by ramchandra.
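For example, the following program runs to completion because the failing assertion is compiled away (a minimal sketch):

#define NDEBUG
#include <cassert>
#include <iostream>

int main() {
  assert(1 == 2);  // would normally abort the program, but NDEBUG turns it into a no-op
  std::cout << "the assert was disabled\n";
  return 0;
}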

However, for every such great macro there exist countless others that are, let's just say, not that great. Here are some reasons why I believe macros should be used only with great care.

Macros ignore namespaces

One problem with macros is that they completely ignore namespaces. For an example, check the following code:

#include <iostream>
namespace a {
  #define f(x) (x * 2)
};
namespace b {
  #define f(x) (x * 3)
};
int main() {
  std::cout << a::f(2) << '\n';
  return 0;
}

This code will not compile, because macros are, in reality, just text substitutions performed by the preprocessor, which knows nothing about namespaces or classes. Here the preprocessor expands f(2) using the last definition it saw, so a::f(2) becomes a::(2 * 3), which is not valid C++. In a small project this may seem like a minor inconvenience, since you can simply remember which macros you have used. However, in a large project with many dependencies, this can create a massive amount of headaches for everyone involved. That is why it is recommended to undefine macros at the end of the file in which they are defined, so that they do not affect other files.
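A minimal sketch of that recommendation (the macro name f is just an illustration):

#define f(x) ((x) * 2)
// ... code that uses f ...
#undef f  // the macro no longer leaks into files that include this one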

Of course, this argument doesn't really apply to competitive programming, since most contests are individual and the projects are small.

Macros reduce code readability

There is also the issue of readability. I often see code full of weird for-loop macros such as the following:

#define f0r(a, b) for (int i = a; i <= b; i++)
#define f0rr(a, b) for (int i = a; b <= i; i--)

These macros make the code, to say the least, unreadable for anyone not used to this specific flavor of macro. In real projects where multiple people work together, this makes teamwork extremely hard. Then again, this is competitive programming, and if you understand your own code, that is all that matters. I also want to say that, although I do not encourage the use of macros, I understand that some people value typing speed and will prefer to write shorter, less readable code.

Most macros also have better modern alternatives

For example, the popular #define ll long long has the much better alternative using ll = long long. The main difference is that the latter lets us write things such as ll(a) to convert a value to long long, whereas that expression does not compile with the define.
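A minimal sketch of the difference:

#include <iostream>
using ll = long long;

int main() {
  int a = 1000000;
  std::cout << ll(a) * a << '\n';  // ll names a real type, so ll(a) is a valid cast
  // With "#define ll long long", ll(a) would expand to "long long(a)",
  // which is not a valid expression and fails to compile.
  return 0;
}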

Some people also use macros to define constants, like #define INF 1000000000; however, we can use the alternative int const INF = 1000000000;. The advantages of a real constant over a macro include type safety and respect for namespaces.
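For instance, unlike a macro, a real constant has a type and respects scope, so two different INFs can coexist (a small sketch):

namespace graph {
  int const INF = 1000000000;  // visible only as graph::INF
}
namespace geometry {
  double const INF = 1e18;     // a different INF, no collision
}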

Another macro I often see is #define int long long, typically used to quickly fix problems involving overflow. However, this is both bad practice and can hurt execution time if the judge runs on a 32-bit system.
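A small illustration of how invasive this macro is: it even rewrites the signature of main, which is why code that uses it resorts to the well-known signed main workaround:

#define int long long
// "int main()" would expand to "long long main()", which is invalid,
// so the macro forces this workaround:
signed main() {
  int x = 1000000000000;  // x is silently a long long now
  return 0;
}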

Macros are not functions

A piece of code is worth a thousand words, so try to guess the output of the following program:

#include <iostream>
#define plustwo(x) x + 2
int main() {
  std::cout << plustwo(2) * 3 << '\n';
  return 0;
}

One would guess that the output of the code above is $$$12$$$; however, the actual output is $$$8$$$.

Why does this happen? Did the compiler make an error while processing this code? No, the compiler worked exactly as intended. Let's examine what #define plustwo(x) x + 2 actually does: it replaces every occurrence of plustwo(x) with x + 2, literally. So the code above is actually equivalent to:

#include <iostream>
int main() {
  std::cout << 2 + 2 * 3 << '\n';
  return 0;
}

Now it is clear what the output is supposed to be. One quick fix is to pad the macro with parentheses: #define plustwo(x) ((x) + 2).
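With the parentheses in place, the program prints 12 as intended:

#include <iostream>
#define plustwo(x) ((x) + 2)
int main() {
  std::cout << plustwo(2) * 3 << '\n';  // ((2) + 2) * 3 = 12
  return 0;
}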

Unfortunately, padding does not solve all of our problems. For example, consider the following code:

#include <iostream>
#define square(x) ((x) * (x))
int main() {
  int x = 2;
  std::cout << square(x++) << '\n';
  return 0;
}

Its output is $$$6$$$. Why does this happen even though we padded everything? The reason is that the code above translates to the following:

#include <iostream>

int main() {
  int x = 2;
  std::cout << ((x++) * (x++)) << '\n';
  return 0;
}

No matter how we look at the code above, something strange is clearly happening: x is modified twice in a single expression, and since the two increments are unsequenced, the behavior is undefined. The general rule of thumb to avoid such headaches is to never use more than one x++ or ++x in a single expression.

So, if we pad our macros and we don't mix it with x++ or ++x, are we actually safe? The answer is, unfortunately, no.

The most evil macro of all

The worst macros of all are, undoubtedly, the MIN and MAX macros. First we must understand why people actually use them when we have modern alternatives such as std::min and std::max. This issue was also discussed here. The short answer is that people used these macros because they do not require both arguments to have the same type and were considered marginally faster than the std alternatives. I'm not going to lie, I used these macros too, for exactly those reasons. After some time I discovered how dangerous they actually are, and I stopped using them. The program below is a clear illustration of the risk:

#include <iostream>

#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define MAX(a, b) (((a) < (b)) ? (b) : (a))

int const nmax = 20;
int v[5 + nmax];
int step = 0;

int f(int n) {
  ++step;
  if(n == 1)
    return v[n];
  else
    return MAX(f(n - 1), v[n]);
}

int main() {
  int n = 20;
  for(int i = 1; i <= n; i++)
    v[i] = n - i + 1;
  step = 0;
  std::cout << f(n) << " ";
  std::cout << step << " ";

  return 0;
}

The output of the code above is 20 1048575. Why does the function take such an enormous number of steps? To find the answer, we must look at what the function becomes after the preprocessor expands the MAX macro:

int f(int n) {
  ++step;
  if(n == 1)
    return v[n];
  else
    return (((f(n - 1)) < (v[n])) ? (v[n]) : (f(n - 1)));
}

The ternary operator first computes $$$f(n - 1)$$$ and compares it with $$$v[n]$$$. Here $$$f(n - 1)$$$ is the bigger value, so the condition (f(n - 1)) < (v[n]) is false and the operator returns (f(n - 1)), which means $$$f(n - 1)$$$ has to be evaluated a second time. This doubling at every level of the recursion gives $$$2^{20} - 1 = 1048575$$$ calls. Note that if we had initialized the array $$$v$$$ with increasing numbers instead of decreasing ones, the program would have made only $$$20$$$ steps and we would never have noticed this vulnerability. The bug would be even harder to detect if we used the MIN/MAX macros in, say, a Segment Tree implementation (this actually happened to me once).
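The fix is to use a real function, whose arguments are evaluated exactly once; this is essentially what std::max already does. A minimal sketch (named MAX2 here only to avoid clashing with the macro above):

template <class T>
T MAX2(T a, T b) {
  return (a < b) ? b : a;  // a and b have already been evaluated, exactly once each
}

Replacing the macro call with MAX2(f(n - 1), v[n]) brings the step count back down to $$$20$$$.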

Conclusion

Most of the time, macros have modern, safe alternatives. Nonetheless, in some cases macros are genuinely useful, so we should not be afraid to use them when they are the right tool. Please share your favorite macros, or some stories about your worst ones :)

+280

»
3 years ago, # | +127

Let me guess... a macro stole your sweetroll?

»
3 years ago, # | 0

Any tips on how to use macros? I only use

#define debug(x) cerr << #x << ' ' << x << '\n';

because I still haven't found an alternative to it

  • »
    »
    3 years ago, # ^ | Rev. 3 | +16

    I didn't understand your comment completely but this may help you.

  • »
    »
    3 years ago, # ^ | +9

    I use a separate header file for this. It seems to work great, though even in the header file you are more or less still using defines.

»
3 years ago, # | +2

I never knew a "min" macro could cause human extinction. Thank you sir

»
3 years ago, # | 0

I may have fallen into this trap, but I got used to using #define forr(i,n) for (int i=0; i<n; i++) :)

A story about using macros: once, at a national Olympiad, I had to write my template from scratch, but I had earlier seen #define ctn(x) cout<<x<<"\n", so I decided to try it on my own. I won't tell you how many nerves it cost me to get weird errors every 10 minutes :x

»
3 years ago, # | +64

With great macros comes great responsibility

»
3 years ago, # | +192

As was explained to me some years ago, the real benefit of f0rr and such macros is not that they are marginally shorter, but that they prevent bugs like this:

for (int i = 0; i < n; i++) {
  for (int j = 0; j < m; i++) { // note the typo: i++ instead of j++
  • »
    »
    3 years ago, # ^ | +15

    Wow, that really is a bigger benefit than the shorter length :))

    I still like the classic for better, since such an error wouldn't really make it past the examples (it will most likely enter an infinite loop or simply fail to compile, depending on which character was mistyped). However, I understand that there is time to be gained by avoiding such errors entirely.

    • »
      »
      »
      3 years ago, # ^ | Rev. 2 | +19

      It can sometimes make it easier to forget that you're using an int as the loop variable (and hence can lead to bugs when you're iterating over a small range that starts at a large integer, for instance in a segmented sieve or a brute force that you expect to be fast enough). Something like this makes it easier to do these loops safely :)

  • »
    »
    3 years ago, # ^ | +37

    Start using ranges::iota_view

    Example: 147850590, or pretty much every code I submitted in the last few months
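    For readers who haven't used it, a minimal C++20 sketch of the idea (the loop variable is written only once):

    #include <iostream>
    #include <ranges>

    int main() {
      int n = 5;
      for (int i : std::views::iota(0, n))  // i takes the values 0, 1, ..., n - 1
        std::cout << i << ' ';
      return 0;
    }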

  • »
    »
    3 years ago, # ^ | 0

    That's exactly the argument I always bring up for this particular one. C++'s for is so general that in the simplest usages, which constitute >90% of all usages, it introduces a lot of redundancy: you need to write the same variable 3 times, which is error-prone, bad design.

  • »
    »
    3 years ago, # ^ | +19

    My setup is that I have snippets (most editors have a snippet system) which expand into for loops. For example I will type the following sequence of characters: f i TAB n TAB SPACE { ENTER and the resulting code is (where | represents my cursor):

    for (int i = 0; i < n; i++) {
        |
    }
    

    I used to use a snippet that went like f TAB i TAB n TAB SPACE { ENTER where i could be substituted for any letter, but I found that 99% of the time I only needed i, j, or k, so I just created three snippets fi, fj, and fk and dropped the unnecessary TAB press.

    I'm pretty confident that my solution is strictly better than the other common solutions, since it simultaneously

    • Produces code that looks completely normal
    • Requires fewer keystrokes than macros such as F0R, etc.
»
3 years ago, # | Rev. 2 | 0
#define int long long

is the most useful macro that everyone should use

(unless you're doing constant-optimization problems, in which case you might want to use #define int short instead)

  • »
    »
    3 years ago, # ^ | -39

    yep, use it and TLE will be your most common verdict on hard problems.

  • »
    »
    3 years ago, # ^ | +25

    I wholeheartedly agree. Imagine you are at an ICPC competition. You submit a solution and get WA. It's the most common verdict, so it could be a million different things, and overflow is a very common one, and always one that you need to spend some time considering. Eliminating that possibility is a huge boost to efficiency and debugging time. It's true that the macro has some negative side effects, but they are very minor. If you get TLE or MLE, just remove it and gg, not a big deal.

    People who don't accept this enlightened truth can be categorized like this: 1) people who don't understand that cp is not commercial programming and argue it's a bad habit simply because it's a bad habit, 2) people who are just stubborn, 3) super-humans who make overflow bugs in less than 1% of their code. The third category is the only one with some justified right to oppose, but I don't think many such people exist. If overflow bugs are present in more than 1% of your code, the positive effects of such a macro should already heavily outweigh the negatives.

    I have been doing cp for ~12 years and started using that macro about halfway through, and it's been the single best decision I have ever made. Overflow bugs had caused me tons of failed submissions, endless frustration, hours of wasted time, etc. The negative side effects are almost non-existent.

    • »
      »
      »
      3 years ago, # ^ | 0

      Overflow bugs caused me tons of failed submissions, endless frustration, hours of wasted time etc etc.

      this is because you don't use a modular arithmetic class

      • »
        »
        »
        »
        3 years ago, # ^ | 0

        Well, I agree that it would certainly help me and that I should start doing that. However, I would argue that: 1) there are many problems where you need long longs but no modulos are involved, 2) a modular arithmetic class introduces more overhead than just using long longs. Hence, the main argument for me to use a modular arithmetic class would be to get rid of the annoying "%P" that are everywhere. But yeah, if somebody doesn't use #define int long long, it could help a bit on that side too.

        • »
          »
          »
          »
          »
          3 years ago, # ^ | 0

          modular arithmetic class introduces more overhead than just using long longs.

          Please explain, because I don't think using a class has any penalty; it may even be faster if you're mostly adding/subtracting, because the value can be stored as an unsigned 32-bit int.
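          A minimal sketch of that idea (the names Mint and MOD are illustrative): the value is stored in a 32-bit unsigned int, and addition/subtraction use conditional subtraction instead of the % operator.

          #include <cstdint>

          uint32_t const MOD = 998244353;

          struct Mint {
            uint32_t v = 0;  // invariant: 0 <= v < MOD
            Mint& operator+=(Mint o) { v += o.v; if (v >= MOD) v -= MOD; return *this; }
            Mint& operator-=(Mint o) { v += MOD - o.v; if (v >= MOD) v -= MOD; return *this; }
          };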

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ | -20

            Hm, I don't think I have very strong evidence for this one. I once tried implementing my own modular int for a particular problem, and even though the code got significantly cleaner, it slowed down by maybe 30% or something like that. I just accepted that it is what it is, but maybe I wrote it in a way that is far from the most optimized one.

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ | 0

            The problem with explicit modular arithmetic classes is that sometimes you need to compute a modular sum of a bunch of integers, and you'd rather accumulate the sum in a long long and compute the remainder once than use conditional subtraction on every addition.
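            A sketch of that pattern (values and P are illustrative names, and each element is assumed to lie in [0, P)):

            long long sum = 0;
            for (int x : values)
              sum += x;   // safe as long as the total fits in a long long
            sum %= P;     // a single reduction at the end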

    • »
      »
      »
      3 years ago, # ^ | Rev. 2 | +44

      Negative side effect for some people: I switched to #define int long long a few months back, and yesterday I was doing OI practice when I realized I couldn't compile IOI-style graders because I kept getting weird errors like undefined reference to 'function(int, int, int*, int*)'.

      It honestly took me a whole fucking hour to realize that #define int long long made the function signature different 💩💩💩💩💩💩💩💩

      • »
        »
        »
        »
        3 years ago, # ^ | +4

        Ugh, that's true. I think that enforcing signatures should be killed with fire; there are many more troublesome aspects to it than just #define int long long, and for I-don't-freaking-know-what kind of profit. Hopefully it's just a one-time surprise.

        • »
          »
          »
          »
          »
          3 years ago, # ^ | +23

          What would you do instead of enforcing signatures?

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ | Rev. 2 | +17

            Just completely abandon that weird format and use standard input and output — 10x simpler approach

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ | +20

              While I'm not somehow attached to implementing functions according to declarations, I understand why IOI wants to remove i/o as a thing to consider while coding. As long as i/o is prewritten for contestants, there'll be a chance to break something by redefining int too early.

              In the end, there's always a prescribed interface — whether through language features or through i/o — and you gotta follow it to avoid hidden bugs.

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ | +41

              The following is mostly my personal experience as the main problemsetter for our national "IOI-like" Olympiad.

              Although not always the case (especially if the beginner has solved at least a few "standard IO" problems on a judge), being given a function to complete with a standard signature is often easier for beginners (less code to write, no IO problems like forgetting fast IO, etc.).

              In my experience (and that of most secondary school contestants that I got to personally ask), it is much nicer and easier to use the signature format for INTERACTIVE problems, especially those involving many different kinds of queries / interactions.

              For IO-based interactive problems, advanced competitors (if used to interactive problems) will almost always write the "interacting functions" themselves, the ones that perform the actual low-level IO inside (write, flush, check the interactor's response because the program may have to stop immediately on error, etc.), because mixing that IO directly into the "program logic", instead of having a nice, short, higher-level function that makes the query in a single line, leads to much less readable, much buggier code (been there, done that). In my experience, a lot of the bugs and problems that beginners face in interactive problems are caused by writing that kind of code.
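              A hypothetical example of such a helper (the "? x" / -1 protocol here is made up for illustration): it hides the printing, flushing and error handling behind a single call.

              #include <cstdlib>
              #include <iostream>

              int query(int x) {
                std::cout << "? " << x << std::endl;  // std::endl also flushes the stream
                int response;
                std::cin >> response;
                if (response == -1)  // the interactor signalled an error or an exceeded limit
                  std::exit(0);
                return response;
              }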

              So, for interactive problems in particular, I think that being provided simple normal functions that you can call to query / interact directly helps beginners a lot, not only because they have to write less code, but because they are already induced to write "less buggy" code by the interface itself.

              Of course, strong contestants can and will easily code those functions themselves, so they will prefer to "be in total control" of the program and have IO-based interaction instead. Also, IO-based interaction is MUCH easier for the problemsetters :) (at least if multiple languages must be supported: a big negative of fixed signatures is that they cannot "allow any language", since just running an executable file is not enough).

              One technical detail that might matter occasionally is that IO-based interaction can be much slower than the function-based (single-process) version. This might make judging a certain problem impractical, but most of the time it is not an issue (in many interactive problems, runtime is not very important; the number of queries is what matters instead).

              • »
                »
                »
                »
                »
                »
                »
                »
                3 years ago, # ^ | +5

                This format is much less popular than usual standard IO, so there's no way somebody is not accustomed to standard IO, while providing functions to fill in brings A TON of technical difficulties. First and foremost, it is a pain even to execute this thing. Many cp-ers are not Linux magicians: they have no knowledge of specific compilation flags, no idea what linking is, no idea how multi-file programs should be compiled, how to debug them, etc. If they make some small mistake along the way, they get lost in compile errors they've never seen, with no clue what they are supposed to do.

                Moreover, the code structure that such functions impose is much less enjoyable to work with (with regard to visibility scope). It's less convenient to use global variables and functions. Whenever I code such problems I heavily use lambdas in places where I would usually see no reason to use them; I certainly would not have known lambdas in high school, and without them the code would be much messier.

                And to add to that, I can confidently say that I lost a gold medal at my IOI because I had a bug related to how the input was provided. If it had been standard IO, I am certain I would have gotten gold.

                I am a very experienced competitor with a master's degree in CS, yet whenever I encounter such problems I feel the big pain that comes with all these technical difficulties, which... remove the burden of... reading an array?

                There is an argument to be made for interactive problems, but in no universe is having to read the input myself a bigger problem than putting competitors out of their comfort zone with that technical bullshit (as they may be strong cp-ers, but beginner technicians).

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  3 years ago, # ^ | +30

                  I don't know how to write an interactor that does I/O interaction with another process, and even if I did, it would probably be lengthier, slower, and more difficult to write than function-based interaction.

                  Also, have you tried standard IO with problems that need more than 10^8 interactions?

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  3 года назад, # ^ |
                    Проголосовать: нравится 0 Проголосовать: не нравится

                  As I said, there is an argument for interactive problems, so I can see the justification there. But for problems of the "standard type" (which still constitute the majority of problems), I can't.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  3 years ago, # ^ | +25

                  I don't understand why he's getting downvoted. I agree that the format used at the IOI is bad and overcomplicated, which makes it harder and slower to work with.

    • »
      »
      »
      3 years ago, # ^ | +23

      I dislike it primarily because it contributes to the (already too widespread) idea that competitive programming encourages bad habits and code style. I understand that long long and int64_t are so much more painful to type, and the abbreviated names ll, LL, sint are ugly while num and i64 are miserable to type. Maybe I should advocate for using ival = long long; instead?

»
3 years ago, # | +18

I use the following macros in my segment tree:

#define ls(x) ((x)<<1)
#define rs(x) ((x)<<1|1)

Once my friend modified my code when I didn't lock my computer and added the following line:

#define cout cerr

It took me nearly half an hour to debug :/

»
3 years ago, # | Rev. 2 | +20

I sometimes think a MAX like this is great.

#include <type_traits>

template <class T, class U>
constexpr typename std::common_type<T, U>::type MAX(const T &a, const U &b) {
    return (a < b) ? b : a;
}

It allows me to write something like MAX(0, 1LL) for simple types, and it won't evaluate a function twice.

However, it has two defects compared to std::max.

  1. It might be a bit slower than std::max as std::max returns const T &.
  2. It can't find a common type for two different custom types, so you need to write the type explicitly, like MAX<A, A> or MAX(this_is_A, A(this_is_not_A)). In that case, you only need to write std::max<A> with the STL max.

Therefore, I'm still struggling to pick this up.

  • »
    »
    3 years ago, # ^ | +18

    The second case can be partially solved with class U = T:

    pair p(1, 2);
    MAX(p, {3, 4}); //works
    MAX({5, 6}, p); //no
    
»
3 years ago, # | +3

I leaked the technocup like that)))

#define MAX(a, b) (((a) < (b)) ? (b) : (a))
»
3 years ago, # | Rev. 2 | 0

Another useful tutorial

Thank you sir Luchianov

»
3 years ago, # | +49

Somebody I was tutoring (won't say who, in case he wants to be left alone) once had the max macro in some segment tree code while I was teaching him segment trees. I never use C++ and just skipped the ~150-line header, assuming he was using namespace std and the default max() function. It took me a solid 50 minutes of walking through the code to figure out why the segment tree query was O(n)...

Turns out the max() macro was querying one of the child nodes twice on each layer. That makes the runtime O(2^log(n)) = O(n) per query. Didn't see that one coming.

»
3 years ago, # | 0

At an OI competition, I used #define int long long and forgot to change %d to %lld. It ran fine on my computer, so I didn't realize the bug until I found out I had gotten 0 on that problem. >_<

  • »
    »
    3 years ago, # ^ | +5

    Don't worry, the Warsaw team at ICPC 2018 didn't get a medal because they tried outputting a long double with %f, so even some highly rated coders still live in the stone age and use cstdio (and they still make fun of cout for being bad even though cstdio literally ripped their dreams apart). Just use cout and you will never encounter such idiotic problems.

    • »
      »
      »
      3 years ago, # ^ | Rev. 2 | +20

      Yes, now I use cin/cout most of the time, and use scanf/printf only if the format is too complex.

    • »
      »
      »
      3 years ago, # ^ | +10

      Just use cout and you will never encounter any such idiotic problems

      I definitely remember some issues with long double on some Windows compilers (probably old MinGW) even with std::cout.

»
3 years ago, # | 0

Some of the macros that I use in C++ are these, for declaring, reading, and printing an array:

#define vi vector<int>
#define ifill(a) for(auto &x : a) cin >> x;
#define out(a) for(auto x: a){ cout << x << " "; } cout << endl;

They rarely cause an issue and save quite a bit of time.

  • »
    »
    3 years ago, # ^ | +19

    All of these can (and should) be replaced with either using or a function.
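    For instance, a minimal sketch of such replacements:

    #include <iostream>
    #include <vector>

    using vi = std::vector<int>;

    void ifill(vi& a) {            // reads every element of a from cin
      for (auto& x : a) std::cin >> x;
    }

    void out(vi const& a) {        // prints the elements separated by spaces
      for (auto x : a) std::cout << x << ' ';
      std::cout << '\n';
    }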

    • »
      »
      »
      3 years ago, # ^ | +7

      Thank you, I changed all of my snippets into using and functions with templates (and copied some of yours).

»
3 years ago, # | +18

(sharing my favorite macros)

#define jj(...) __VA_ARGS__; [](auto&...x){(cin>>...>>x);}(__VA_ARGS__);
#define ii(...) int jj(__VA_ARGS__)

ii(n)                              // expands to: int n; plus a lambda that reads n from cin
string jj(a, b)                    // string a, b; then reads a and b
double jj(ax, ay, bx, by, cx, cy)  // declares and reads six doubles
»
3 years ago, # | Rev. 2 | +5

The following macro could be useful:

#define modified(x, modify) [&](auto _){modify; return _;}(x)

Example usage:

a = modified(b, ranges::sort(_));

is equivalent to

a = b;
ranges::sort(a);

With this, reading, sorting and outputting $$$n$$$ integers can be done like this:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> a(n);
    copy_n(istream_iterator<int>(cin), n, begin(a));
    ranges::copy(modified(a, ranges::sort(_)), ostream_iterator<int>(cout, " "));
}