3

Just out of curiosity: assuming a software life form exists, how would you detect it? What are your criteria for deciding whether something, or someone, is intelligent?

It seems to me that it should be quite simple to create such software once you set the right target (rather than following the naive "mimic a human -> pass the Turing Test" route).

When posting an answer, please also try to find a counterexample. I have real difficulty inventing anything consistent that I myself agree with.

Warmup

bobah
  • Belongs on Philosophy Overflow? – CoderDennis May 20 '10 at 23:41
  • SO is not intended for discussion questions - see the FAQ (http://stackoverflow.com/faq). – danben May 21 '10 at 00:05
  • Agree with the above. This is vague, argumentative, and not a programming question. – Cerin May 21 '10 at 01:31
  • Let me disagree. The way I put it, this is a purely technical question, and I am not looking for a discussion at all; that's why I ask, as part of the question, that the answering person first try to find counterarguments him/herself. I'd like to see well-thought-out technical ideas as answers. And in my opinion, SO is the best place for this question in terms of the number of highly educated CS specialists. – bobah May 21 '10 at 08:24
  • Most people will have differing ways of determining intelligence; almost certainly these will be based on their own biases. For this reason, it is not possible to come to a general conclusion on this topic. But more to the point, this is not the place for that. Maybe read: http://www.idsia.ch/~juergen/everything/ – Noon Silk May 21 '10 at 09:20
  • @silky - very interesting article and author in general, thanks – bobah May 21 '10 at 17:58

2 Answers

0

First we need to understand what a life form is.

Take this explanation, for example:

An entity which exists and tries to continue its existence through nourishment or procreation.

If we accept this explanation, then many programs do in fact qualify as life forms.

They exist, that's obvious. And they attempt to continue their existence: they spawn child processes, survive in persistent data storage, and carry on the next day.
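
To make that concrete, here is a toy sketch (the file name and generation cap are arbitrary choices invented for this illustration) of a process that persists its state to disk and "procreates" by spawning a copy of itself:

    # Toy sketch: a program that "tries to continue its existence".
    # lifeform_state.json and MAX_GENERATIONS are invented for this demo.
    import json
    import os
    import subprocess
    import sys

    STATE_FILE = "lifeform_state.json"  # hypothetical persistence location
    MAX_GENERATIONS = 3                 # stop eventually so the demo terminates

    def load_state():
        # Survive across runs by reading state left by an earlier generation.
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f)
        return {"generation": 0}

    if __name__ == "__main__":
        state = load_state()
        state["generation"] += 1
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)  # persist before this process dies

        print("generation %d alive (pid %d)" % (state["generation"], os.getpid()))

        if state["generation"] < MAX_GENERATIONS:
            # "procreation": launch a child process running this same script
            subprocess.Popen([sys.executable, os.path.abspath(__file__)])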

So here we are, surrounded by digital life forms.


On the other hand, there's the idea of evolving and being sentient.

With evolving, it's easy: many programs have been written that can modify their own code to adapt to certain scenarios. Computer viruses are the earliest examples of that.
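
As a toy illustration of that self-modification idea (real viruses use far more sophisticated techniques; THRESHOLD and variant.py are invented for the example), a script can read its own source, perturb one of its parameters, and write out a slightly different next generation:

    # Toy sketch: a program that mutates a copy of its own source code.
    import random
    import re

    THRESHOLD = 10  # the "gene" that each generation mutates

    def mutate(source):
        # Rewrite the THRESHOLD assignment with a randomly perturbed value,
        # producing a slightly different next generation of the program.
        new_value = THRESHOLD + random.randint(-2, 2)
        return re.sub(r"^THRESHOLD = \d+", "THRESHOLD = %d" % new_value,
                      source, count=1, flags=re.MULTILINE)

    if __name__ == "__main__":
        with open(__file__) as f:
            source = f.read()
        with open("variant.py", "w") as f:  # the mutated offspring
            f.write(mutate(source))
        print("wrote variant.py (THRESHOLD was %d)" % THRESHOLD)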

With sentience, it is a different story. An entity needs to be aware of its own existence, understand itself and the environment around it, and make active decisions about its life activities.

A computer program has nothing of that kind. In fact, as far as I know, scientists haven't yet pinned down a definition of "being aware of oneself" or of consciousness. So until we know what that means, we can't attribute that quality to an entity, or, conversely, deny it.


The bottom line: you can argue that a computer program is a life form, but it does not qualify as a sentient being.

  • Your definition of sentience basically says "a system should know everything about itself". It can be shown via Gödel's proof that this is impossible for any sufficiently complex system. So you must then agree that sentience applies only to "levels" of consciousness. You are not aware of your bodily functions (blood moving, etc.). Your argument falls apart. – Noon Silk May 21 '10 at 09:23
0

Thinks humanly, acts humanly.

OR

Thinks rationally, acts rationally.

Wachgellen