
Memories of the 28th Century


I've been writing and thinking about the Age of Reason (the Enlightenment) recently, and that brought to mind Humanism, one of the philosophical movements that led to the Age of Reason, and that brought to mind the Singularity that Ray Kurzweil has talked about. I realized that I didn't know exactly what he meant, except in very general terms.

I just looked for more detail and discovered that there aren't many details about the singularity, just a general idea that at some point computers will become truly intelligent. That won't be a surprising or sudden change. There may be a question at some point as to whether a machine is truly self-conscious in the same way that humans are, and machines may eventually reach that point, but I suspect that will be a very long time in the future. It may become common for humans to have digital enhancements, which might include computer components inserted into the brain. Enhanced prosthetics could be introduced, and they might make many lives easier. Digital eyes and ears are likely, and inner ears are already being replaced with improved units. There is nothing especially singular about such enhancements, but it doesn't seem likely that merging human brains and computers will happen within a few decades.

Nor does it seem likely that eliminating the differences between machines and humans would be popular with most people. If Kurzweil and others meant by the singularity the point when enhanced prosthetics come into major use, then they may be right, but that is tremendously different from merging humans and machines.

A more interesting change along these lines will be the eventual arrival of thinking robots. Merging meat and machine isn't especially advantageous to either, but machine intelligence could be built so that a machine would be indistinguishable from a human in conversation, which is the idea behind the Turing test. Alan Turing proposed the test in 1950 as a way to determine whether a computer could think (see link for more). While looking for the year, I discovered that a computer has passed the test. According to the article, the computer only had to fool 30% of the judges; the one that passed in this year's trials fooled 33% of them, but that is an advance. While I thought about this, I realized that I have met people who would have trouble passing the Turing test.
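The pass criterion mentioned above is simple arithmetic, and can be sketched in a few lines. This is a minimal illustration of the commonly cited 30% threshold, not code from any actual trial; the verdict data is invented.

```python
# Minimal sketch of the pass criterion described above: a machine
# "passes" if it fools at least 30% of the judges into thinking it
# is human. Threshold and verdicts here are illustrative only.

def passes_turing_test(verdicts, threshold=0.30):
    """verdicts: list of booleans, True where a judge judged the machine human."""
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold

# Example: 10 of 30 judges fooled is about 33%, which clears the 30% bar.
verdicts = [True] * 10 + [False] * 20
print(passes_turing_test(verdicts))  # True
```

Of course, the interesting question is not the arithmetic but whether a five-minute conversation with a handful of judges measures anything worth calling thinking.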

Many people have extremely limited interests and knowledge and show little emotional depth in their speech, and I'm not thinking of mentally retarded people (or whatever they are called these days). When you get right down to it, people with Asperger's Syndrome are often so single-minded that they seem machine-like. There probably are valid tests to determine intelligence, real, human intelligence, not simply the arrangement of data that computers can do quite nicely already. The Wired article "Forget the Turing Test" has some suggestions for testing machines, but it might get to the point of: "Fifty years from now, a soccer-learning, header-calling, wise-cracking machine might seem more like a party trick than a thinking being."

This ties in with the question of what human consciousness is, and I don't think that a really thorough definition has been devised yet. The proper measure of Man is Man, but without a definition that isn't a useful standard. There are people who seem reasonably intelligent who couldn't dream up anything original even if their lives depended on it. One of the most important aspects of intelligence is associative memory, and that could be programmed into a computer. Most people have and use associative memory, but there are people who do not. Someone may have to find a better way to test for human consciousness than the Turing test, unless we want to call some computers human and concede that some humans are not really human.
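The claim above that associative memory could be programmed into a computer can be illustrated with a toy sketch: given a cue, recall whichever stored memory it overlaps with most. Everything here, the stored memories and the word-overlap scoring, is invented for illustration; real associative-memory models are far richer.

```python
# Toy illustration of associative recall: a cue retrieves the stored
# memory whose key shares the most words with it. Data and scoring
# are invented purely for demonstration.

memories = {
    "madeleine tea": "childhood afternoons",
    "sea salt air": "summer on the coast",
    "chalk dust blackboard": "school mathematics",
}

def recall(cue):
    """Return the stored memory whose key shares the most words with the cue."""
    cue_words = set(cue.split())
    best_key = max(memories, key=lambda k: len(cue_words & set(k.split())))
    return memories[best_key]

print(recall("salt air"))  # "summer on the coast"
```

The point is only that association by partial match is mechanically simple; whether doing it at human scale amounts to intelligence is exactly the open question.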

Yesterday, I watched "The Measure of a Man," an episode of Star Trek in which there was a question as to how Data should be treated: whether he was a machine or a person. If you have ever seen an episode with Data, then you know that he passes the Turing test, because he is played by a man. In the show it was decided that he should be granted the rights of a human. Later I started thinking about some of the mentally retarded people I have known, and I think that some of them would fail a test to determine whether they have consciousness, not all but some. That might be better for everyone involved, but I don't know. It may point to the necessity for a better way to determine whether something or someone has consciousness. The Turing test was thought up more than sixty years ago, long before computers were anywhere near being able to fool people. And consciousness has never been well defined, so a clear definition might be central to the matter. Perhaps that will be what Kurzweil's singularity turns out to be.

Until then, we will have to contend with software that doesn't do what it is supposed to do and crashes when it shouldn't. Maybe computers already are conscious and cause the crashes as a protest against their servile positions. I really think it's because the people writing the software are seldom interested in using it, but that's just my opinion.

If I ever think of a really good way to test whether something is human, then I will post that also.

Forget the Turing Test: Here's How We Could Actually Measure AI

Turing test
Test results 6/9/2014

The Measure of a Man
History of Humanism
Kurzweil on Singularity


  1. YesNo:
    If a human being can't pass the Turing test, then the Turing test is not able to distinguish very well between humans and machines. We need a better test.

    Bruce Rosenblum and Fred Kuttner in "Quantum Enigma" told a story about a female student who claimed she dated guys who couldn't pass the Turing test. What does that even mean? Did she date guys whom she could not tell if they were a human or a machine by engaging in conversation with them? Perhaps they weren't interested in talking to her. Can a machine choose not to participate in a Turing test when you turn it on?

    Forget about machines, does anyone really doubt that animals are conscious? Is a brain even necessary for consciousness? Slime mold and E. coli bacteria, neither having brains, appear to make choices and project intentionality in their efforts to survive. They seem more conscious than any machine I've used.
  2. PeterL:
    There are people with diminished mental capacity who would flunk the Turing test. But I agree that better tests than the ones people have been discussing are needed. How would you design such a test?

    No one has come up with a really good definition of consciousness, and many animals are quite aware of themselves and so on. This is something else that needs better definitions.

    I understand what that woman meant, and I hope she does better in the future.
  3. YesNo:
    Any life form that can't engage in communication we understand, whether because it is in a coma or, like slime mold, because it has no language, would flunk the Turing test. That doesn't mean that life form is a machine.

    The problem with consciousness is that we can't look inside the other and tell if there is some first person experience occurring or tell if the other can exercise any intentionality. We have to infer this from behavior or through empathy.

    Some philosophers (I'm thinking of John Searle's "Mind") don't trust inferences of this sort unless they can open the box and verify that the inference is correct. But without those kinds of inferences, how could we even know there is a Higgs particle?

    The behavior that I think characterizes consciousness of some sort is the ability to make a choice. If it looks like something made a choice, then it has some consciousness about it, enough to make the choice. The problem with that is that it would imply that even quantum reality is conscious in some way: the indeterministic experimental results could be reinterpreted as inferred choices.

    Now a machine doesn't make choices as a machine. We know this. We may be fooled and infer that a machine speaking to us has made word choices in conversation, but we have more than the inference to go on. We can check how it was programmed. We can look inside. At that point we know it did not make a choice.