[QUOTE=Aramis Wyler;336040]IWS that sapience can be proven by observing the organism in question deciding on (or determining) an option that was previously unknown to him/her/it, and should be assumed for any member of a species that has a member that has proven it.[/QUOTE]
To determine whether something was "previously unknown" to an organism with which you cannot communicate, you must have observed the organism since its beginning (birth, hatching, or maybe even conception). That is possible. But you have not observed the entire species as it evolved. How can you determine whether the knowledge required to select the particular option was gained by the thinking efforts of that organism or was passed down from its ancestors in genes?
[QUOTE=Brian-E;336053]How can you determine whether the knowledge required to select the particular option has been gained by the thinking efforts of that organism or was passed down from its ancestors in genes?[/QUOTE]
Well, if it was passed down in its genes (genetic knowledge? Whole different thread), then it would be known beforehand and so would not pass muster as the organism working the matter out itself. I do not think this invalidates the test; it merely makes it hard for you to tell whether they passed - especially since you've added the condition that you can't communicate with it. I think that's an odd assertion, because I feel like we've discussed in this thread that, just as sapience is a capacity of consciousness, so is communication. Anyway, while I could try to rationalize a way out of the genetic-memory and non-communicating conditions, I don't feel the need at the moment, as neither of those conditions applied to either R. Daneel Olivaw or Data, since they were constructed (and with voices).
If you're limiting your discussion to testing for sentience/consciousness/free will/sapience in intelligent robots or biological creatures with which we can communicate, I agree that your test presents few difficulties.
But our scope was broader than that. |
[QUOTE=ewmayer;336039]Bzzzt! That assumes both that fluent communication is possible, and that the internal models of 'reality' (perhaps 'the environment' is better) are sufficiently similar. Both of which are dubious in a cross-species setting. Like so many other "tests", this one reduces to a same-species tautology.[/QUOTE]
I don't agree. The best counterexample I can immediately give is me and my cats. Sometimes they want "cuddles": they will approach me or my girlfriend, go "Meow", and then jump onto our lap for some "quality time". Other times they are hungry and want some fresh "kib": they will go "Meow" to get our attention, and then run out of the room we're in towards one of the two kib bowls in another room or outside. (Note that this involves them having a model of their environment in their minds, since they can't see the bowls from where they sought our attention.) (And a sad example: several years ago the "alpha male" cat of our clan was killed by a car. The other cats were, in my opinion, sad (as, obviously, was I), and they simply non-verbally communicated that they wanted comfort and reassurance while they grieved.)
So can we add language possession, tool use, and self-awareness to the qualities we possess that we would expect to see in a being as part of having free will? If there's a checklist, maybe a few more capacities, and we might have a means of defining a free-will test.
Aramis,
What about a computer programmed to run through the given axioms of ZF set theory? Would it not discover things it previously did not know?
[QUOTE=davar55;336124]So can we add language possession, tool use, and self-awareness to the qualities we possess that we would expect to see in a being as part of having free will? If there's a checklist, maybe a few more capacities, and we might have a means of defining a free-will test.[/QUOTE]
Again, IMO, free will involves being able to make immediate decisions based on the possible future outcomes. I might be convinced that Roger Penrose is correct, and that a deterministic algorithm cannot exhibit free will, since it will always come to the same decision based on the same inputs. On the other hand, introduce a quantum computer (read: animal brains) and/or a truly random number generator feeding into the algorithm, and there's no reason why many forms of intelligence (including "artificial") can't exhibit free will.
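The determinism point above can be made concrete. As a loose illustration only (the function names are hypothetical, and this is not a claim about brains): a deterministic decision rule maps identical inputs to identical outputs every time, while threading a random source through the same rule breaks that guarantee.

```python
import random

def decide_deterministic(options):
    """A deterministic 'decision': the same inputs always yield the same choice."""
    return max(options)

def decide_with_noise(options, rng):
    """The same kind of decision, but fed by a random source: identical
    inputs no longer guarantee identical outputs. Whether that amounts
    to free will is exactly the question under debate."""
    return rng.choice(options)

options = [1, 2, 3]
# Re-running the deterministic rule can never surprise us.
assert decide_deterministic(options) == decide_deterministic(options)
# The noisy rule may pick differently from run to run.
print(decide_with_noise(options, random.Random()))
```

Note that injecting randomness only makes the output unpredictable; whether unpredictability is the same thing as free will is left open, as in the post above.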
[QUOTE=Zeta-Flux;336125]Aramis,
What about a computer programmed to run through the given axioms of ZF set theory? Would it not discover things it previously did not know?[/QUOTE] I think at the end of the program the computer wouldn't know anything more than it knew at the beginning. Just because a program can report output doesn't mean it has incorporated that output into itself. If you ran it again, wouldn't it discover all the same items again? That's because it didn't learn anything, let alone generate the new knowledge itself. EDIT/PS: Lots of topics are floating around: sapience/wisdom, intelligence/knowledge, free will. I am not sure that any of them requires the others. One could be self-aware, free as a bird, and dumb as a stump - the last limiting one's ability to communicate complicated thoughts or use tools.
[QUOTE=Aramis Wyler;336132]I think at the end of the program the computer wouldn't know anything more than it knew at the beginning. Just because a program can report output doesn't mean it has incorporated it into itself.[/QUOTE]
You are exactly correct. Facts != Knowledge. Here's the test: Ask the computer to run the program again. If it runs it again, it has no knowledge. If it says "I've already done that. Here's the answer I already gave. Are you [I][U]sure[/U][/I] you want me to run this program again with the exact same inputs?" ...then we've entered a new era....
[QUOTE=chalsall;336135] If it says "I've already done that. Here's the answer I already gave. Are you [I][U]sure[/U][/I] you want me to run this program again with the exact same inputs?" ...then we've entered a new era....[/QUOTE]... or else the programmer just added a few lines of code without actually conferring sentience.
[QUOTE=cheesehead;336138]... or else the programmer just added a few lines of code without actually conferring sentience.[/QUOTE]
Possible. Not likely. If I'm wrong, please demonstrate the capability. |