[QUOTE=Dubslow;299926]I realized after the fact that that's the whole reason for int return type, and that it obviously wouldn't go into a char array. I either was too lazy to edit it, or forgot, or only figured it out after the hour. Now: When you say EOF is not a char, do you simply mean that it cannot be represented by a char-type, or do you mean more fundamentally that it's different from what ints/chars/longs are? I'm pretty sure the former, since you said EOF is often chosen to be represented by -1. (I'll go back and reread the previous post about this in any case.)
[/QUOTE] Well, if you think about getchar again, you can see that it returns an int, and one of the possible values it returns is EOF (which is just a symbolic name for some implementation-defined negative integer--pretty much always -1). So you *know* that it has to be an integer value of some sort, right?

[QUOTE=Dubslow;299926] In [URL="http://publications.gbdirect.co.uk/c_book/chapter5/sizeof_and_malloc.html"]5.5[/URL], which has examples pretty similar to ReadLine except not requiring realloc(), all malloc() calls are cast. [/QUOTE] Yes, it does do that, and it never says why. I mean, it goes to all this trouble to say that void * pointers can be freely assigned with (most) other pointers (i.e. without casting), but then it goes and casts the return value of malloc anyway. IMO that either shouldn't be done (preferable) or it should at least have an explanation. But the author can maybe be excused, because the fact is that this has always been such a common practice. I just happen to believe it's a bad practice.

Want a concrete example of why? Fair enough. (In all honesty, this example is archaic. C99 got rid of the "implicit int" rule for undeclared functions. However, it's a valid example for the C89 standard and earlier.) Suppose you call malloc as follows:

[code]
int *p;
p = (int *)malloc(10 * sizeof(int));
[/code]

Seems reasonable, right? But suppose you also forgot to #include stdlib.h. Let's further suppose that pointers are 8 bytes and ints are 4 bytes. Because there's no declaration for malloc() in scope, it is treated as if it returns an int, i.e. only 4 bytes of the return value are looked at. This is obviously bad, since it'll convert those 4 bytes to an 8-byte pointer by just putting in 4 bytes of 0, thereby losing some possibly significant bits that were returned. If you didn't have the cast in there, then the compiler would be *required* to give you an error or warning, saying that you're trying to assign an int to a pointer, and they're not compatible. 
But you put in a cast, which says to the compiler, "Shut up, I know what I'm doing." So it dutifully doesn't warn you. Guess how long it takes you to track down the cause of the crash that occurs possibly much later?

Now, as I said, this example is archaic. In C99, if you don't have a declaration for malloc() in scope, then the compiler will have to warn you anyway. However, it's a decent illustration (I think) of the dangers of telling the compiler to shut up when there's no reason to do so. I.e. that type safety is really quite useful, and you turn it off at your peril.

[QUOTE=Dubslow;299926] I'm sure we all wished they chose one or the other and stuck with it for all three functions. Edit: Whoops, messed that up. malloc() and realloc() share the one-argument, whereas calloc() has two. Either way, seems to be a silly choice to me.[/QUOTE] I couldn't agree more. As I said, I *hate* that calloc has a different interface from malloc. |
[QUOTE=LaurV;299932]Just for the sake of correctness, there IS an eof character codable on one byte, which is still in use on many systems... :razz:[/QUOTE]
Are you talking about the ASCII EOT character, which you can generate on most terminals with Ctrl-D? Yes, that really confuses the issue even more for beginners to C, because it makes it seem all the more like they're putting some kind of "EOF character" into the file. But that's not at all what it's actually doing. *Sigh* |
[QUOTE=jyb;299944]Are you talking about the ASCII EOT character, which you can generate on most terminals with Ctrl-D? Yes, that really confuses the issue even more for beginners to C, because it makes it seem all the more like they're putting some kind of "EOF character" into the file. But that's not at all what it's actually doing. *Sigh*[/QUOTE]Perhaps Ctrl-Z. [URL="http://en.wikipedia.org/wiki/Control-Z"]http://en.wikipedia.org/wiki/Control-Z[/URL] [QUOTE]In some operating systems, Control+Z is used to signal an end-of-file, and thus known as the EOF character (more accurately: the EOF control code), when typing at a terminal, terminal emulator, MS-DOS command line, or Win32 console. Early DEC operating systems used this convention, which was borrowed by CP/M, and was later in turn borrowed and continued in the MS-DOS and Microsoft Windows operating systems.[/QUOTE]
|
[QUOTE=jcrombie;299935]I think I can see where you're coming from here.
It's good practice to follow the standard. (Agreed)
The standard allows all kinds of programmer behavior. (Agreed)
We shouldn't criticize what appears to be bad programmer behavior as long as it conforms to the standard. (Disagreed)
[/QUOTE] Good grief, where do you get the idea that I think that bad behavior which is standard-conforming shouldn't be criticized? I don't mean that rhetorically! Really: what did I say that makes you think that?:smile:

[QUOTE=jcrombie;299935] About NULL: Yes NULL happens to be zero in all implementations. This is a coincidence. Relying on it is bad practice. Thousands of instances of people relying on it exist and thus this will never change. The standard should be updated to reflect this. [/QUOTE] No. Once again I'm sorry, but you're just wrong about this. There's no coincidence. The standard states quite clearly that a line like "int *p = 0;" MUST set p to be a null pointer. NULL isn't anything mysterious or malleable; it MUST be defined as either 0 or (void *)0. I.e. it's nothing more than a syntactic convenience to make 0 look more "pointery".

What isn't defined is what the internal representation of a null pointer looks like. I.e. is it all-bits 0, or something else? But from the programmer's perspective, this just doesn't matter. Again, it's just like the double value 1.0, which has some completely obscure (except to IEEE-754 nerds) representation. As a C programmer, you don't have to know that.

[QUOTE=jcrombie;299935]
[CODE]
char* p = (char*) malloc( .... );
if ( !p ) {
    printf some error message.
}
[/CODE]
the correct code:
[CODE]
char* p = (char*) malloc( .... );
if ( p == NULL ) {
    printf some error message.
}
[/CODE]
[/QUOTE] Nope. Both of these examples are perfectly correct. The standard guarantees that comparing a pointer to 0 (which is EXACTLY the same as comparing it to NULL, by definition) will compare it to a null pointer. So again I ask you, how is anybody suffering because of the standard?

jcrombie, I enjoy discussions like this and I'm happy to have this one, so please believe me when I say that I really mean no offense here, but you seem to have picked up some myths or misperceptions about some of the details of C. 
May I suggest you actually read the standard before making further statements about what it does or doesn't allow? Especially when trying to instruct a relative beginner? |
[QUOTE=jyb;299943]
Yes, it does do that, and it never says why. I mean, it goes to all this trouble to say that void * pointers can be freely assigned with (most) other pointers (i.e. without casting), but then it goes and casts the return value of malloc anyway. IMO that either shouldn't be done (preferable) or it should at least have an explanation. [/quote]Like I said earlier, I recalled it being explained somewhere, but I couldn't find it. [QUOTE=jyb;299943] But the author can maybe be excused because the fact is that this has always been such a common practice. I just happen to believe it's a bad practice. Want a concrete example of why? Fair enough. (In all honesty, this example is archaic. C99 got rid of the "implicit int" rule for undeclared functions. However, it's a valid example for the C89 standard and earlier.) Suppose you call malloc as follows: [code] int *p; p = (int *)malloc(10 * sizeof(int)); [/code] Seems reasonable, right? But suppose you also forgot to #include stdlib.h. Let's further suppose that pointers are 8 bytes and ints are 4 bytes. Because there's no declaration for malloc() in scope, it is treated as if it returns an int. I.e. only 4 bytes of return value are looked at. This is obviously bad, since it'll convert that 4 bytes to an 8-byte pointer by just putting in 4 bytes of 0, thereby losing some possibly significant bits that were returned. If you didn't have the cast in there, then the compiler would be *required* to give you an error or warning, saying that you're trying to assign an int to a pointer, and they're not compatible. But you put in a cast, which says to compiler, "Shut up, I know what I'm doing." So it dutifully doesn't warn you. Guess how long it takes you to track down the cause of the crash that occurs possibly much later? Now, as I said this example is archaic. In C99, if you don't have a declaration for malloc() in scope then the compiler will have to warn you anyway. 
However, it's a decent illustration (I think) of the dangers of telling the compiler to shut up when there's no reason to do so. I.e. that type safety is really quite useful, and you turn it off at your peril. [/QUOTE] Hmm... based both on what I read on Wikipedia and on the fact that C99 is a non-default option in gcc, it has always seemed to me that it was never really embraced as the "Standard". (Just went and looked again: the [URL="http://en.wikipedia.org/wiki/C99"]C99 article[/URL] has a list of compliant compilers, and there aren't many. In addition, as it carries more weight than many of us would, MSVC++ has no plans to support it.) (Edit: From the C11 article: "Due to delayed availability of conforming C99 implementations, C11 makes more features optional, to make it easier to comply with the core language standard.") I'll run some experiments with gcc -Wall and see what happens. As far as I know, this would only be C89. (Should I be using a different -W option?) |
[QUOTE=only_human;299938]Unicode Han Character '(same as 梴 裸) naked, to strip; to unclothe' (U+34A9)
Egyptian Hieroglyph. Mary's Dad: "How did you get the beans above the frank?" Ted: "I don't know, it's not like it was a well thought out plan." Sumero-Akkadian Cuneiform [URL="www.sumerian.org/sumlogo.htm"]Sumerian's phonetically more complex logograms[/URL] Perhaps also chosen for salaciousness. In the future kids will be giggling over obscure font characters instead of un-Bowdlerized dictionaries.[/QUOTE]Give that man a cigar! Did anyone here seriously contemplate that I wouldn't attempt to put sub-messages within the substantive text or that, given the opportunity, there wouldn't be attempts at some form of humorous word-play? Dubslow: you need to install more fonts. All three render fine in Chrome on my Fedora system. Paul |
[QUOTE=Dubslow;299948]Hmm... it has always seemed to me based both on what I read on Wikipedia and on the fact that C99 is a non-default option to gcc that it was never really embraced like the "Standard". (Just went and looked again, the [URL="http://en.wikipedia.org/wiki/C99"]C99 article[/URL] has a list of compliant compilers, and there aren't many. In addition, as it carries more weight than many of us would, MSVC++ has no plans to support.) (Edit: From the C11 article: "Due to delayed availability of conforming C99 implementations, C11 makes more features optional, to make it easier to comply with the core language standard.")
I'll run some experiments with gcc -Wall and see what happens. As far as I know, this would only be C89. (Should I be using a different -W option?)[/QUOTE] Here are some data points:

[code]
jyb% cat foo.c
int *Foo(void)
{
    int *p;
    p = malloc(10 * sizeof(int));
    return p;
}
jyb% gcc -c foo.c
foo.c: In function ‘Foo’:
foo.c:4: warning: incompatible implicit declaration of built-in function ‘malloc’
jyb% gcc -c foo.c -Wall
foo.c: In function ‘Foo’:
foo.c:4: warning: implicit declaration of function ‘malloc’
foo.c:4: warning: incompatible implicit declaration of built-in function ‘malloc’
jyb% gcc -c foo.c -std=c89
foo.c: In function ‘Foo’:
foo.c:4: warning: incompatible implicit declaration of built-in function ‘malloc’
jyb% gcc -c foo.c -std=c99
foo.c: In function ‘Foo’:
foo.c:4: warning: implicit declaration of function ‘malloc’
foo.c:4: warning: incompatible implicit declaration of built-in function ‘malloc’
jyb% gcc -v
Using built-in specs.
Target: i686-apple-darwin11
Configured with: /private/var/tmp/llvmgcc42/llvmgcc42-2335.15~42/src/configure --disable-checking --enable-werror --prefix=/Developer/usr/llvm-gcc-4.2 --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-prefix=llvm- --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib --build=i686-apple-darwin11 --enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2335.15~42/dst-llvmCore/Developer/usr/local --program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11 --target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1
Thread model: posix
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)
[/code] |
[QUOTE=jyb;299947]Good grief, where do you get the idea that I think that bad behavior which is standard-conforming shouldn't be criticized? I don't mean that rhetorically! Really: what did I say that makes you think that?
No. Once again I'm sorry, but you're just wrong about this. There's no coincidence. The standard states quite clearly that a line like "int *p = 0;" MUST set p to be a null pointer. NULL isn't anything mysterious or malleable; it MUST be defined as either 0 or (void *)0. I.e. it's nothing more than a syntactic convenience to make 0 look more "pointery". What isn't defined is what the internal representation of a null pointer looks like. I.e. is it all-bits 0, or something else? But from the programmer's perspective, this just doesn't matter. Again, it's just like the double value 1.0, which has some completely obscure (except to IEEE-754 nerds) representation. As a C programmer, you don't have to know that. Nope. Both of these examples are perfectly correct. The standard guarantees that comparing a pointer to 0 (which is EXACTLY the same as comparing it to NULL, by definition), will compare it to a null pointer. So again I ask you, how is anybody suffering because of the standard? jcrombie, I enjoy discussions like this and I'm happy to have this one, so please believe me when I say that I really mean no offense here, but you seem to have picked up some myths or misperceptions about some of the details of C. May I suggest you actually read the standard before making further statements about what it does or doesn't allow? Especially when trying to instruct a relative beginner?[/QUOTE] Mea culpa. Of course it's not the job of the language definers to specify the implementation details. That includes the bitwise representation of integer "0", pointer NULL, sizeof(int), ..... et cetera. Your position is entirely self-consistent because it exists only in an abstract sense. Everything can be defined to just work (Don't get me wrong! -- this is a good thing!) If I may just point out one thing -- C has been known as a powerful language because of its closeness to the machine. Knowing what's going on "under the hood" is a very small step away. 
In my opinion, it is a good thing to know and exercises the full potential of C by getting the best performance from real-world computers. For a real example, say I wanted to set an array of 1,000,000 pointers to NULL. Should I be forced to iterate through 1,000,000 times setting each array element to NULL or can I use the forbidden knowledge that NULL is de facto just a bunch of 0 bits and memset() it? Going the slow route costs real $ and real resources. My 2 cents. The last thing I want to do is confuse a novice. I honestly don't believe I said anything misleading here. Please point it out if I did. I would love to move on and start talking about best practices. Cheers |
[QUOTE=jcrombie;299988]Mea culpa. Of course it's not the job of the language definers to specify the implementation details. That includes the bitwise representation of integer "0", pointer NULL, sizeof(int), ..... et cetera. Your position is entirely self-consistent because it exists only in an abstract sense. Everything can be defined to just work (Don't get me wrong! -- this is a good thing!)
If I may just point out one thing -- C has been known as a powerful language because of its closeness to the machine. Knowing what's going on "under the hood" is a very small step away. In my opinion, it is a good thing to know and exercises the full potential of C by getting the best performance from real-world computers. For a real example, say I wanted to set an array of 1,000,000 pointers to NULL. Should I be forced to iterate through 1,000,000 times setting each array element to NULL or can I use the forbidden knowledge that NULL is de facto just a bunch of 0 bits and memset() it? Going the slow route costs real $ and real resources. My 2 cents. The last thing I want to do is confuse a novice. I honestly don't believe I said anything misleading here. Please point it out if I did. I would love to move on and start talking about best practices. Cheers[/QUOTE] Language holy wars serve no useful purpose. Combatants never reach a resolution. As a former colleague once said, "It is possible to write Fortran in any language". |
[QUOTE=xilman;299955]Give that man a cigar!
Did anyone here seriously contemplate that I wouldn't attempt to put sub-messages within the substantive text or that, given the opportunity, there wouldn't be attempts at some form of humorous word-play? Dubslow: you need to install more fonts. All three render fine in Chrome on my Fedora system. Paul[/QUOTE] I honestly couldn't understand his post. I went on a spree, and now the Chinese character renders. Any clue where I can get dead languages? Edit: Found this: [url]http://www.alanwood.net/unicode/fonts.html[/url] and this: [url]http://www.wazu.jp/[/url] What should I grab for the Sumerian? The hieroglyph looks funny. |
[QUOTE=jcrombie;299988]Mea culpa. Of course it's not the job of the language definers to specify the implementation details. That includes the bitwise representation of integer "0", pointer NULL, sizeof(int), ..... et cetera. Your position is entirely self-consistent because it exists only in an abstract sense. Everything can be defined to just work (Don't get me wrong! -- this is a good thing!)
If I may just point out one thing -- C has been known as a powerful language because of its closeness to the machine. Knowing what's going on "under the hood" is a very small step away. In my opinion, it is a good thing to know and exercises the full potential of C by getting the best performance from real-world computers. For a real example, say I wanted to set an array of 1,000,000 pointers to NULL. Should I be forced to iterate through 1,000,000 times setting each array element to NULL or can I use the forbidden knowledge that NULL is de facto just a bunch of 0 bits and memset() it? Going the slow route costs real $ and real resources. My 2 cents. [/QUOTE] Ah yes, this is a very good example. It captures the essence of the distinction between the use of 0 (or NULL) and the actual representation. The answer is that *if you want the program to be maximally portable* (i.e. work in *any* possible environment that supports C), then you would not be able to use memset; you would have to individually assign the pointers. But note my emphasis: there's nothing preventing you from using memset if you happen to know that all the environments you care about represent null pointers with 0-bits.

So what have you really lost here? The standard doesn't (indeed can't) prevent you from doing things which are not defined by the standard. It simply says they're not defined. So in your example, if the memset method is something you really think should work, then there's a choice of ways to think about it:

1) The standard should blow off all environments which might want to use a different bit pattern for null pointers, and just require that they be represented with 0s. Then the memset method will be guaranteed to work.

2) Just use the memset method anyway, even though the standard makes no such guarantee.

You apparently prefer choice #1, but I don't especially see why. 
Either way your code will work on the same set of machines/environments (which is to say, pretty much everywhere these days). [QUOTE=jcrombie;299988] The last thing I want to do is confuse a novice. I honestly don't believe I said anything misleading here. Please point it out if I did. [/QUOTE] Well look, I definitely don't want to get into an argument here. I think (hope) that we've managed to clear everything up, and that (most importantly) things have been resolved in a way which is minimally confusing for Dubslow and anyone else in his position following this discussion. If you really want me to point out your statements which are misleading I will (maybe by PM?), but I don't think it'll serve to move the conversation forward much. [QUOTE=jcrombie;299988] I would love to move on and start talking about best practices. Cheers[/QUOTE] Sounds good to me. |