Akshat.saxena21's blog

By Akshat.saxena21, history, 9 years ago, In English

From the user's point of view there is no difference between the output of endl and '\n': both act like pressing the Enter key. But there is a big difference in their working mechanism: endl flushes the output buffer, and '\n' doesn't. If you don't need the buffer flushed after every line, use '\n'; if you do, use endl.

endl is usually slower because it forces a flush, which is often unnecessary. You would need to force a flush right before prompting the user for input from cin, but not when writing a million lines of output; in that case, write '\n' instead of endl. On the other hand, frequent flushing can be useful: for example, if the program is unstable and you want to be sure all output produced so far has actually been written, use std::endl.

The difference can be illustrated by the following: std::cout << std::endl; is equivalent to std::cout << '\n' << std::flush;. So:

  • Use std::endl if you want to force an immediate flush of the output, e.g. in a command-line app where you want to guarantee that the user sees the output immediately.
  • Use '\n' if you are worried about performance (which is probably not the case if you are using the << operator anyway).

I use '\n' on most lines and std::endl at the end of a paragraph, but that is a habit; it is usually not necessary. Contrary to other claims, the '\n' character is mapped to the correct platform end-of-line sequence only if the stream is going to a file (std::cin and std::cout being special, but still files, or file-like).
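
A minimal illustration of that equivalence (all three statements below produce the same characters; only the flushing behavior differs):

#include <iostream>

int main() {
    std::cout << "with endl" << std::endl;          // writes '\n', then flushes immediately
    std::cout << "with newline" << '\n';            // writes '\n'; the flush happens later
    std::cout << "explicit" << '\n' << std::flush;  // exactly what std::endl expands to
    return 0;
}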

»
9 years ago, # |

Yet another reason why C++ sucks?

In C's stdlib, a FILE* can have three buffering modes:

  • line-buffered -- flushed after each newline is fwrite'n to the FILE*
  • fully-buffered -- flushed after a fixed-size buffer (4096 bytes or so) is filled
  • unbuffered -- flushed on every fwrite

By default, when the FILE* points to a terminal it is line-buffered; when it points to a file it is fully-buffered. Sounds reasonable, right? Every flush does a system call (at least on Linux) that requires switching to kernel mode and back, messing with filesystem caches, holding locks when necessary, etc. So why would one want all that overhead, flushing on every newline, when printing to a file?
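
For reference, the buffering mode of a FILE* can be changed with setvbuf, called before any other I/O is done on the stream. A minimal sketch (the 1 MiB buffer size is an arbitrary choice):

#include <cstdio>

int main() {
    // Make stdout fully buffered, even if it points to a terminal.
    // setvbuf must be called before any other operation on the stream.
    static char buf[1 << 20];
    std::setvbuf(stdout, buf, _IOFBF, sizeof(buf));

    for (int i = 0; i < 1000000; ++i)
        std::fputs("a\n", stdout);  // no flush per line; flushed only when the buffer fills

    return 0;  // any remaining buffered data is flushed at normal exit
}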

I've run a program that does cout << "a" << endl; 10^6 times with stdout redirected to a file. Time measured by time was:

real	0m7.275s
user	0m1.680s
sys	0m5.584s

strace has shown that the data is flushed two bytes at a time.

Then I changed endl to "\n", and the result was:

real	0m0.445s
user	0m0.432s
sys	0m0.008s

And finally using printf in pure C:

real	0m0.153s
user	0m0.132s
sys	0m0.016s

The last two examples were flushing by 4096 bytes at a time.
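
The comment doesn't show the benchmark code itself, but the three variants were presumably along these lines (a sketch; the 10^6 iteration count is taken from the description above, and each variant would be compiled and run separately with stdout redirected to a file, e.g. time ./a.out > out.txt):

#include <cstdio>
#include <iostream>

int main() {
    const int N = 1000000;

    // Variant 1: C++ stream with endl; a flush (and typically a write syscall) per line.
    for (int i = 0; i < N; ++i)
        std::cout << "a" << std::endl;

    // Variant 2: same stream, but '\n'; flushed only when the internal buffer fills.
    // for (int i = 0; i < N; ++i)
    //     std::cout << "a\n";

    // Variant 3: plain C stdio; fully buffered when stdout is redirected to a file.
    // for (int i = 0; i < N; ++i)
    //     std::printf("a\n");

    return 0;
}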

  • »
    »
    9 years ago, # ^ |

    std::cout wastes a lot of time trying to stay in sync with C-style output. If your program only uses std::cin and std::cout without any C-style output, turning the sync off with std::ios::sync_with_stdio(false) will offer a reasonable speedup.
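
    A typical setup along those lines (a sketch; the cin.tie call is a commonly paired extra tweak, not something the comment above mentions):

    #include <iostream>

    int main() {
        // Stop iostreams from synchronizing with C stdio; after this, mixing
        // printf/scanf with cout/cin on the same streams is no longer safe.
        std::ios::sync_with_stdio(false);
        // Optionally untie cin from cout, so reading input doesn't flush cout.
        std::cin.tie(nullptr);

        int n;
        std::cin >> n;
        for (int i = 0; i < n; ++i)
            std::cout << i << '\n';  // '\n' instead of endl: no flush per line
        return 0;
    }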

    • »
      »
      »
      9 years ago, # ^ |

      Here are the same tests with ios::sync_with_stdio(false) called right before the loop in the first two examples, in the same order:

      # cout, endl
      real	0m7.474s
      user	0m1.360s
      sys	0m6.084s
      
      # cout, \n
      real	0m0.367s
      user	0m0.356s
      sys	0m0.008s
      
      # printf, \n
      real	0m0.167s
      user	0m0.144s
      sys	0m0.008s
      

      It is not much different from the previous one, is it?

      Look at the last two examples: the sys fields are exactly the same; the noticeable difference is in the user fields. That is the time the program spent executing its own code, i.e. the overhead brought by the way cout is implemented.

      Now look at the first example: most of the time is spent in kernel mode (sys), executing syscalls. You could write your own IO functions in C or even assembly and reduce the user field, but you won't get rid of the overhead in the sys field if you still make a syscall for every two characters.

      I was talking about how bad it is to flush very often, not about how slow C++ and its cout are.

»
9 years ago, # |

You would need to force a flush right before prompting the user for input from cin, but not when writing a million lines of output.

Don't know about C++, but in the C world, line-buffered FILEs are flushed before reading from a FILE attached to a terminal (like stdin in most cases). Cool, isn't it?
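
For what it's worth, C++ has a similar mechanism: std::cin is tied to std::cout by default, so the tied stream is flushed before every input operation. A minimal illustration:

#include <iostream>

int main() {
    // std::cin.tie() is &std::cout by default, so std::cout is flushed
    // automatically before each read from std::cin.
    std::cout << "Enter a number: ";  // no endl or flush needed; the prompt still appears
    int x;
    std::cin >> x;
    std::cout << "You typed " << x << '\n';
    return 0;
}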

  • »
    »
    9 years ago, # ^ |

    What are you talking about? cout is a stream, not a file, and in C++ you have everything from C plus some other things. I don't see why C++ sucks while C doesn't.

    • »
      »
      »
      9 years ago, # ^ |

      I'm not sure what you mean by 'file'. In C there is FILE*, which lets you fwrite/fread to/from files on disk, sockets, terminals, pipes, etc. You can also specify a FILE*'s flushing behavior. That is what I was talking about.

      On the other hand, in C++ there is cout, an instance of class ostream, which allows similar operations. I don't know whether cout is implemented on top of FILE* or whether it directly calls platform-dependent IO functions (syscalls etc.).

      The C++ way of doing things sucks. Flushing data after each newline when writing to a file is like cutting off your balls with a chainsaw. I would rather shoot myself in the foot in C.

      Of course, you could use the good old cstdlib functions from your C++ code, and almost everything you have in C. But that wouldn't be the C++ way I was complaining about.