About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Thursday, November 05, 2009

David Chalmers and the Singularity that will probably not come

David Chalmers is a philosopher of mind, best known for his argument about the difficulty of what he termed the “hard problem” of consciousness, which he typically discusses by way of a thought experiment featuring zombies who act and talk exactly like humans, and yet have no conscious thought (I explained clearly what I think of that sort of thing in my essay on “The Zombification of Philosophy”).

Yesterday I had the pleasure of seeing Chalmers in action live at the Graduate Center of the City University of New York. He didn’t talk about zombies, instead telling us his thoughts about the so-called Singularity, the alleged moment when artificial intelligence will surpass human intelligence, resulting in either all hell breaking loose or the next glorious stage in human evolution — depending on whether you typically see the glass as half empty or half full. The talk made clear to me what Chalmers’ problem is (other than his really bad haircut (1)): he reads too much science fiction, and is apparently unable to snap out of the necessary suspension of disbelief when he comes back to the real world. Let me explain.

The argument of Chalmers (and of other advocates of the possibility of a Singularity) starts off with the simple observation that machines have gained computing power at an extraordinary rate over the past several decades, a trend that one can extrapolate to a near-future explosion of intelligence. Too bad that, as any student of statistics 101 ought to know, extrapolation is a really bad way of making predictions, unless one can be reasonably assured of understanding the underlying causal phenomena (which we don’t, in the case of intelligence). (I asked Chalmers a question along these lines in the Q&A, and he denied having used the word extrapolation at all; I checked with several colleagues over wine and cheese, and they all confirmed that he did — several times.)
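
To see just how treacherous this is, here is a minimal sketch (in Python, with invented numbers, purely for illustration): generate a logistic, S-shaped "capability" curve, fit an exponential trend to its early phase only, and extrapolate. The projection explodes while the actual curve quietly levels off.

    import math

    # Toy "capability" curve: logistic growth toward a ceiling K.
    # Every number here is invented; the point is statistical.
    K, r, midpoint = 100.0, 0.5, 20.0

    def capability(t):
        return K / (1.0 + math.exp(-r * (t - midpoint)))

    # Observe only the early, exponential-looking phase (t = 0..10).
    data = [(t, capability(t)) for t in range(11)]

    # Least-squares fit of log(y) = a + b*t, i.e. an exponential trend.
    n = len(data)
    sx = sum(t for t, _ in data)
    sy = sum(math.log(y) for _, y in data)
    sxx = sum(t * t for t, _ in data)
    sxy = sum(t * math.log(y) for t, y in data)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n

    for t in (10, 20, 30, 40):
        print(t, round(math.exp(a + b * t), 1), round(capability(t), 1))
    # The extrapolated trend keeps exploding; the actual curve levels off at K.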

Be that as it may, Chalmers went on to present his main argument for the Singularity, which goes something like this:

1. There will soon be AI (i.e., Artificial Intelligence)
2. There will then soon be a transition from AI to AI+
3. There will then soon be a transition from AI+ to AI++

Therefore, there will be AI++

All three premises and the conclusion were followed by a parenthetical statement to the effect that each holds only “absent defeaters,” i.e., absent anything that may get in the way of any of the above.

Chalmers was obviously very proud of his argument, but I got the sense that few people were impressed, and I certainly wasn’t. First off, he consistently refused to define what AI++, AI+, or even, for that matter, AI actually mean. This, in a philosophy talk, is a pretty grave sin, because philosophical analysis doesn’t get off the ground unless we are reasonably clear on what it is that we are talking about. Indeed, much of philosophical analysis aims at clarifying concepts and their relations. You would have been hard pressed (and increasingly frustrated) to find any philosophical analysis whatsoever in Chalmers’ talk.

Second, Chalmers did not provide a single reason for any of his moves, simply stating each premise and adding that if AI is possible, then there is no reason to believe that AI+ (whatever that is) is not also possible, indeed likely, and so on. But, my friend, if you are making a novel claim, the burden of proof is on you to argue that there are positive reasons to think that what you are suggesting may be true, not on the rest of us to prove that it is not. Shifting the burden of proof is the oldest trick in the rhetorical toolbox, and not one that a self-respecting philosopher should deploy in front of his peers (or anywhere else, for that matter).

Third, note the parenthetical disclaimer that any of the premises, as well as the conclusion, will not actually hold if a “defeater” gets in the way. When asked during the Q&A what he meant by defeaters, Chalmers pretty much said anything that humans or nature could throw at the development of artificial intelligence. But if that is the case, and if we are not provided with a classification and analysis of such defeaters, then the entire argument amounts to “X is true (unless something proves X not to be true).” Not that impressive.

The other elephant in the room, of course, is the very concept of “intelligence,” artificial or human. This is a notoriously difficult concept to unpack, and even more so to measure quantitatively (which would be necessary to tell the difference between AI and AI+ or AI++). Several people, myself included, noted this problem in the Q&A, but Chalmers cavalierly brushed it aside, saying that his argument does not hinge on human intelligence, or computational power, or intelligence in a broader sense, but only on an unspecified quantity “G,” which he quickly associated with an unspecified set of cognitive capacities through an equally unspecified mathematical mapping function (adding that “more work would have to be done” to flesh out such a notion — no kidding). Really? But wait a minute: if we started this whole discussion about the Singularity with an argument based on extrapolation of computational power, shouldn’t our discussion be limited to computational power? (Which, needless to say, is not at all the same as intelligence.) And if we are talking about AI, what on earth does the “I” stand for, if not intelligence — presumably of a human-like kind?

In fact, the problem with the AI effort in general is that we have little progress to show after decades of attempts, likely for the very good reason that human intelligence may not be algorithmic, at least not in the same sense in which computer programs are (2). I am most certainly not invoking mysticism or dualism here; I think that intelligence (and consciousness) is the result of the activity of a physical brain substrate. But the very fact that we can build machines whose computing power and speed greatly exceed those of the human mind, and yet which are nowhere near being “intelligent,” should make it pretty clear that the problem is not computing power or speed.

After the deployment of the above-mentioned highly questionable “argument,” things just got bizarre in Chalmers’ talk. He rapidly proceeded to tell us that AI++ will happen by simulated evolution in a virtual environment — thereby making a blurred and confused mix out of different notions such as natural selection, artificial selection, physical evolution and virtual evolution.

Which naturally raised the question of how we would control the Singularity and stop “them” from pushing us into extinction. Chalmers’ preferred solution is either to prevent the “leaking” of AI++ into our world, or to select for moral values during the (virtual) evolutionary process. Silly me, I thought that the easiest way to stop the threat of AI++ would be to simply unplug the machines running the alleged virtual world and be done with them. (Incidentally, what does it mean for a virtual intelligence to exist? How does it “leak” into our world? Like a Star Trek hologram gone nuts?)

Then the level of unsubstantiated absurdity escalated even faster: perhaps we are in fact one example of virtual intelligence, said Chalmers, and our Creator may be getting ready to turn us off because we may be about to leak out into his/her/its world. But if not, then we might want to think about how to integrate ourselves into AI++, which naturally could be done by “uploading” our neural structure (Chalmers’ recommendation is one neuron at a time) into the virtual intelligence — again, whatever that might mean.

Finally, Chalmers — evidently troubled by his own mortality (well, who isn’t?) — expressed the hope that AI++ will have the technology (and interest, I assume) to reverse engineer his brain, perhaps out of a collection of scans, books, and videos of him, and bring him back to life. You see, he doesn’t think he will live long enough to actually see the Singularity happen. And that’s the only part of the talk on which we actually agreed.

The reason I went on for so long about Chalmers’ abysmal performance is that this is precisely the sort of thing that gives philosophy a bad name. It is nice to see philosophers taking a serious interest in science and bringing their discipline’s tools and perspectives to the high table of important social debates about the future of technology. But the attempt becomes a not particularly funny joke when a well-known philosopher starts out by deploying a really bad argument and ends up sounding more cuckoo than Trekkie fans at their annual convention. Now, if you will excuse me, I’ll go back to the next episode of Battlestar Galactica, where you can find all the basic ideas discussed by Chalmers presented in an immensely more entertaining manner than in his talk.

Postscripts:

(1) Much has been made of this alleged "ad hominem" attack I made on Chalmers. People, lighten up a bit: this is a column for general observations and discussion, which I always try to pepper with some humor. Rationally Speaking (and, indeed, any blog) is not a place for scholarly discussions. Besides, have you seen Chalmers' haircut?? ;-)

(2) Perhaps predictably, this phrase has been taken out of context by several people who are sympathetic to Chalmers' notions, and who have used it to accuse me of "vitalism," the long discredited position that biological organisms rely on some sort of quasi-mystical forces outside of the realm of standard physics. Baloney. All I meant to say, as should have been clear from my clarification ("at least not in the same sense in which computer programs are"), is that I don't think human brains are directly analogous to computers, which is a much more limited, and indeed quite obvious, statement. Incidentally, "algorithm" can be defined broadly as any method to resolve a particular problem in a finite number of steps. What, exactly, is the problem that "brains" are supposed to be solving? Survival? Reproduction? What's for dinner? What movie to go to? All of the above? One needs to be wary of the fact that if a term is defined broadly enough then it inevitably subsumes everything, which means that it explains nothing.

31 comments:

  1. "He rapidly proceeded to tell us that A++ will happen by simulated evolution in a virtual environment"

    This is not so bizarre. See Creation: Life and How to Make It

    Dawkins refers to that book all the time.

  2. Ah well, if Dawkins refers to it, it must be true... :-)

    First off, despite all the speculation, we are not even close to doing it. Second, what Chalmers was talking about is a very different kind of thing: not just artificial/synthetic life, but super-intelligence.

  3. Hilarious! At least your review was.

    Why does anyone take Chalmers seriously? I speculate that it is only because he, whilst opposed to Dennett, ironically has in common with Dennett that together they have given philosophical permission to the AI community to continue with its highfalutin but unsubstantiated claims and research - what he calls the "easy problem" of consciousness.

  4. While Chalmers may not have backed his contentions with solid evidence, I would not be surprised if his general premise were true. I expect that we will develop something that will be accepted as "AI". Once we reach that point, the headlong rush to cross some threshold that we will designate as AI+, and then AI++, will be almost certain. It is too bad Chalmers didn't have any good ideas to present as to what those conditions might represent. Still, the idea itself is not unreasonable and it might be a good idea to plan ahead for the eventuality.
    My own personal expectation is that humans will accommodate and incorporate the technology, just as we will accommodate human genetic engineering. In fact I expect that we will combine the two. I probably won't be around to see it, and I might not like what I would see if I were, but I might think it grand.
    All of this, of course, unless we hit one of those "defeaters": a nuclear war? Christian or Islamic fundamentalists ruling the world? An asteroid or mega-volcano?

  5. Very hilarious indeed, though I do think that the human mind is both computational and algorithmic, if understood in a broad sense, but that is a far cry from Chalmers' absurd claims. The computational view has obviously lured people into thinking that our brain is directly analogous to a computer, which it is not.

    Roger Penrose famously tried to prove that the human mind is non-algorithmic, which he thinks allows us to intuit mathematical theorems such as Gödel's, and has to do with quantum mechanical processes in the brain. Dennett among others has explained why the whole argument is flawed.

    Anyway, it's a matter of definitions, and I don't think your argument depends on it.

    By the way, what was all the fuss on the 'hard problem' about, if Chalmers is now so optimistic about the future of AI?

  6. As a non-philosopher, whenever I encounter arguments like this I'm always tempted to ignore the whole thing and "count the dancing angels" instead.
    Whatever "intelligence" is it's not just our ability to "compute" accurately. It's also our ability to handle random mistakes and/or inconsistencies. Even if someone could figure out how the heck to “program” the “dumbness” that is part of human “intelligence,” why would they want to? The resources it took could be put to much better uses.
    The limited computing ability of machines is a direct result of human ability to program them. Therefore, it's highly improbable that a computer will ever be greater than the sum of the people who program it. It may be faster and more accurate, but those are the exact qualities that make it less human.
    Besides, intelligence for humans is a combination of the abilities of each individual human and our ability to work collectively and learn from each other (including those who have lived thousands of years in the past). Moreover, every increase in mechanical computing only frees up humans to increase the qualities that make up our intelligence.
    I'm not worried about computers taking over. I'm too busy worrying about the brick and the wheel. They've had a lot more time to evolve, so I'm keeping my eye on them.

  7. Ciao Massimo,
    I love this post, especially the speaker's premise, which reminded me of other classic paradoxes. If you draw the curve in a certain way and look at the increase in speed of, for instance, competitive cyclists over the last century, at some point we'll be pedaling past the sonic barrier. Absent defeaters of course :)

    PS: rather than Star Trek, I think the "leaking" he was referring to was more in the style of The Matrix.

  8. duboisist said:
    "The limited computing ability of machines is a direct result of human ability to program them. Therefore, it's highly improbable that a computer will ever be greater than the sum of the people who program it."

    That is probably why they insist on some kind of evolution being part of it. That seems to be the only mechanism we know of where complicated, purposeful things come into being without something else having constructed them.

    That would be an interesting question to ask of singularity proponents. Do they envision it arising from awesome computing power being at the fingertips of modeled human intelligence, or something we cannot conceive of evolving out of some digital soup?

    hmmm...

  9. Very interesting post. I find it fascinating to view the current “singularity” focus as a 21st century take on the Platonic dualist tradition, which I view as foundational to many of the values held by our Western culture. There's very little to differentiate people like Ray Kurzweil from Neoplatonic mystics (other than technology).

    I've recently published a post on this topic, called Infinition: Our Acceleration to the Infinite, which readers might find interesting...

    Here's the link:

    http://jeremylent.wordpress.com/2009/11/04/infinition-our-acceleration-to-the-infinite/

  10. In our own lab, which works on some aspects of machine intelligence, the focus is not on attempting to recreate any sense of human intelligence but on adaptive problem solving that mitigates human cognitive biases. It will doubtless never result in a machine that spouts poetry, but it does result in useful tools that can manage a wide range of problems.

  11. I have never understood the 'consciousness is non-algorithmic' argument. The very universe appears to be algorithmic. Well, maybe there is something different about intelligence. But that must be seen as a very strange claim that requires strong evidence.

    Penrose has made some suggestions about human intelligence being non-algorithmic. But he understands just how strange this claim is. And his arguments are pretty unconvincing.

  12. I have nothing intelligent to add to this thread...

    Sincerely,
    Computer 65435

  13. If Chalmers continues to believe that anything that is conceivable is possible, then he is on the right track, and he gets a pass on both the zombie and AI constructions. Here is a link to his talk...

    http://www.vimeo.com/7320820

    He does define AI, AI+, and AI++. I will take a stab and say AI+ is any software acknowledged as more intelligent than anything a human can come up with; such software is said not to exist today. AI+ software will then be tasked with creating super-intelligent AI software, getting us to AI++, and supposedly to this singularity?

    Jumping back to the zombies, I don't pretend to understand the definition of zombies as people without consciousness, or how they are different from robots, etc. I do understand some of the above AI arguments, and do not buy them, because the real issue isn't so much that machine intelligence holds the promise of further 'paradigm shifts' in technology, but rather machine integration into all aspects of our lives and our culture. I am not sure about the current state of AI, but it used to be about building a better:
    (1) chess player,
    (2) natural language processor,
    (3) robots,
    (4) learner, and representation of the things learned
    (5) network

    None of these things naturally points to a computer takeover; they point to better weather forecasters and quicker wars. But just imagine one thousand Twittering big brothers warning you not to eat that second hot dog: an environment where your moves are broadcast with or without the benefit of your keyboard. That is very much a big deal - to us, not to the hardware running Twitter.

    One can say that in this scenario it is still a human show, even though the computers provide the capability - not what Chalmers was getting at. He may think that somewhere in the advent of machine intelligence there arises a machine-specific agenda. If he does, he is correct, but it is no different from our own. The agenda is survival.

  14. ppnl,

    I guess it depends on what one means by "algorithmic." I don't actually buy Wolfram's argument that the universe is algorithmic, but for the purposes of my post, all one needs to agree to is that computational power in the sense of the sort of computers we build isn't what gets you intelligence.

  15. What can it mean for the universe not to be algorithmic? All being algorithmic means is that it is computable. If it isn't computable then in what sense can it be understood at all?

    And all computers are in some deep sense equal in that they can all compute the same set of problems.

    So what are the alternatives? If you want your AI to have eyes you will need an algorithm to extract edge, color and motion information from the raw visual input. You will need algorithms to identify specific objects from that. And you will need algorithms to connect those objects with your knowledge of the world.

    So at the very least you need to solve some interesting algorithmic problems to build an AI. What other kinds of problems need to be solved?
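
    For instance, here is a bare-bones sketch of the first of those subproblems, edge extraction, using the classic Sobel filter (plain Python over a made-up toy image; real vision systems are of course vastly more elaborate):

        import math

        def sobel_edges(img):
            # Edge magnitude of a grayscale image (2-D list of floats),
            # via the Sobel x/y kernels; border pixels are skipped.
            gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
            gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
            h, w = len(img), len(img[0])
            out = [[0.0] * w for _ in range(h)]
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                             for j in range(3) for i in range(3))
                    gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                             for j in range(3) for i in range(3))
                    out[y][x] = math.hypot(gx, gy)
            return out

        # Toy image: dark left half, bright right half -> one vertical edge.
        img = [[0.0] * 4 + [255.0] * 4 for _ in range(8)]
        edges = sobel_edges(img)
        print(max(edges[4]))  # the peak sits on the dark/bright boundary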

  16. I find it interesting that both the AI field and the modern era of molecular biology (after Watson & Crick's famous paper) began at about the same time. Yet the molecular biologists seem to have more to show for their efforts than AI researchers, even though AI has absorbed the lives of some very smart people who often had plenty of resources at their disposal (funding and technical support from corporations and governments).

  17. ppnl,

    there are plenty of things I understand but I'm not convinced are "computable," like my appreciation for some kinds of art but not others, or the fact that I'm attracted to some women and not others.

    Besides, who said that the universe in its totality is understandable? And who said that even if it is this can be done by the human mind?

    Yes, all (human-built) computers share common characteristics, but this still doesn't equate computational power with intelligence, otherwise Deep Blue would be vastly more intelligent than Kasparov. It isn't, it's just better at playing chess (a highly computable algorithmic activity).

  18. Hiya, first time here, subscribed, loving the discussion. Also, I'm a pet philosopher working in the AI field, and as such have spent many years fiddling with it (including writing software that mimics human behavior in virtual settings, then learns that behavior and discards the mimicking algorithms. Fun).

    First, AI is never about the I of it; that's just marketing, not reality, never has been, never in the human way of understanding intelligence.

    Second, I find indeed that Chalmers is in big-time speculation mode without any true knowledge of what that singularity would mean in technical terms. Let's check out that dreaded word "Intelligence" first:

    "Capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn."

    AI today can reason over first-order logic in perfect systems only (meaning closed databases with clean facts that are certified to be true), but it cannot deal at all with lies and falsehoods, which the world is and always will be full of (in fact, even the fuzziness of what truth is breaks most of the Semantic Web layers that deal with inferencing). Some are starting to believe (like me) that the true paradigm for intelligence is to go along with falsehood, and that such a notion needs to drive the movement forward. Funnily enough, basing your systems on truth seems to be a dud.
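
    To show what reasoning over such a perfect, closed system amounts to, here is a minimal forward-chaining sketch in Python (the facts and rules are invented for illustration). Notice it has no way to represent a lie: every stored fact is simply treated as true, which is exactly the limitation I mean:

        # Ground facts and Horn-clause rules; variables start with "?".
        facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}
        rules = [
            (("ancestor", "?x", "?y"), [("parent", "?x", "?y")]),
            (("ancestor", "?x", "?z"),
             [("parent", "?x", "?y"), ("ancestor", "?y", "?z")]),
        ]

        def unify(pattern, fact, env):
            # Match one atom against one ground fact, extending bindings.
            if len(pattern) != len(fact):
                return None
            env = dict(env)
            for p, f in zip(pattern, fact):
                if p.startswith("?"):
                    if env.get(p, f) != f:
                        return None
                    env[p] = f
                elif p != f:
                    return None
            return env

        def forward_chain(facts, rules):
            # Apply rules until no new facts appear (saturation).
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for head, body in rules:
                    envs = [{}]
                    for atom in body:
                        envs = [e2 for e in envs for f in facts
                                if (e2 := unify(atom, f, e)) is not None]
                    for env in envs:
                        new = tuple(env.get(t, t) for t in head)
                        if new not in facts:
                            facts.add(new)
                            changed = True
            return facts

        print(("ancestor", "alice", "carol") in forward_chain(facts, rules))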

    Oh, and AI certainly can learn (neural nets proving that reasonably early on), but all in a very mathematical way. For example, no system that I'm aware of can unlearn over fact-statements, where there's a big gap between knowledge as-is and as-inferred. The human brain does this with ease.

    But the rest of those things? Our brain does all of this with hardly any discernible effort, yet we cannot for the life of us make software that can tell the difference between smoke and steam (oh, I've written software that does it in terms of visual and behavioral cues, but knowing the difference? Hah!)

    Until machines understand *why*, this discussion is really moot. And to be honest, I'm not even sure we'll ever get there. How many millions of proteins exist in a single cell, out of the tens of trillions of cells in your body, again?

    People tend to think that the brain is getting better mapped all the time. People, there are brains within brains within brains, all not even knowing they are there. *That* is intelligence. What we're doing is toying with simplistic machines. I don't expect a singularity in anything but the movies, and even there I find them annoying. :)

  19. Deep Blue was not better at chess than Kasparov. It lost the first match, and most grandmasters agree that it only won the rematch because Kasparov tried an uncharacteristically risky strategy that he wouldn't normally have used.

    The best computers are now a lot better than Kasparov ever was, but I just wanted to correct Pigliucci's error.

  20. Chalmers has always struck me as one who has sided with positions so horribly wrong that he could only appeal to those who have at best read Aristotle, and then only shallowly. He gets credit for defining the "hard problem"; which is to say, he gets credit for relabelling as something "fundamental" what was already recognized as at best merely mistaken.

    We (experimental psychologists) have designed all sorts of effective, convincing ways to demonstrate the influence, effectiveness, and role of explicit (conscious) control in many tasks. The upshot of most of that work is that conscious awareness mostly *follows* action, and almost always consists of confabulated "explanations" for what transpired. That is, it is rarely (at least proximately) why we do what we do, despite our convinced proclamations otherwise.

    That is not to say we can't learn (develop biases for) other local behaviours; indeed, that is what we do. But to attribute a direct, functional, causal role to awareness for most behaviours is, well, just silly. That is not to say consciousness plays no role; indeed, it no doubt adjusts the set-points of the behavioural biases, as in Nicholas Humphrey's and Powers's musings.

  21. Joseph,

    thanks for correcting my inaccurate statement about Deep Blue. The point, however, remains: we can now build computers that are at least as good as humans at a particular task, and which certainly have hugely more computational power and speed than human brains. Yet, they ain't even close to anything like intelligence. Which to me strongly suggests that intelligence is not (just) a matter of computational power.

  22. No, the biggest computers have only a small fraction of a human's processing power. And a desktop computer has been estimated to have the processing power of an ant. Intel's i7 processor has about 730 million transistors. A human brain has a hundred billion neurons.
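
    Spelling out that back-of-the-envelope comparison (component counts as above; the transistors-per-neuron multiplier is a pure guess, since a neuron is far more than a simple switch):

        transistors_i7 = 730e6   # Intel i7 transistor count, as cited above
        neurons_brain = 100e9    # rough human neuron count
        per_neuron = 100         # pure guess at transistors per neuron
        print(neurons_brain * per_neuron / transistors_i7)
        # ~13,700 i7-class chips just to match the raw component count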

    And all computers are equal in a deep mathematical sense. The program that ran on Deep Blue could in principle be recompiled to run on the processor in my desktop and it would produce the same moves. It would just do it a million times slower.

    To say that any process is not computable comes close to saying that it is magic. It is a very strange claim that would require strong evidence.

    Maybe the universe isn't totally understandable. Maybe there really is magic. Maybe there is even a God. Can science survive taking these possibilities seriously?

    Again, look into Roger Penrose's views on the subject. He is wrong about many things, but he at least knows how strange it is to claim that something is noncomputable.

  23. ppnl,

    please do not accuse me of mysticism, invoking god and other such nonsense. If you read this blog even in a cursory fashion you will easily find out how strange that accusation is.

    As for computing power, it depends on what you mean by it. I don't think there is any question that modern computers can process more information, and much faster, than a human being can; just try beating a computer at almost anything it can do well.

    I find it interesting that you equate transistors with neurons. I know it's a common analogy, but that's all it is, an analogy. Neurons don't work anywhere near the same way as transistors do.

    About the computability of the universe: if by computability one simply means that there are ways to describe complex systems, then that meaning is far too broad and loses any interest.

    I should note that an algorithm is often defined as a step-wise procedure to solve a problem. What is the problem that the universe is supposed to solve, and who or what wrote the procedure? (Note: I do not actually believe the universe was written by anybody.) If, on the other hand, you are much more sensibly saying that the universe is a series of things and phenomena that occur because of a continuously spreading chain of causes and effects, ok we agree, but in what sense would that be an "algorithm"?

    Yes, Penrose gave us some good food for thought, but I still sense that the AI community, and certainly Chalmers, are much too quick to conclude by mere analogy that human intelligence is a matter of computer-like computation. They are making a huge claim, and with very little evidence to back it up...

  24. I have been thinking about the singularity since Eric Drexler started working on nanotechnology in the late 70s.

    Finally had to resort to fiction to capture the ambiguous features of what some might consider a good outcome.

    A couple of libertarian salesmen sell an AI and nanotech based clinic seed to an impoverished African village with unforeseen consequences.

    Google henson clinic seed to find it.

    Keith Henson

  25. Sir,

    Yes, I read your blog regularly and I understand how strange it is to accuse you of mysticism. No offense was intended. I was just trying to express how strange a claim noncomputability is.

    Yes, a computer can beat humans at just about anything it can do well. But then most animals can beat humans at what they do well. For example, a chimp can easily beat humans in a simple memory test. See here for example:

    http://abcnews.go.com/WN/story?id=3948256&page=1&page=1

    But this is a poor measure of computing power. It is just a measure of focus. Humans beat computers at most tasks we find important for survival. Computer programs are very poor at pattern recognition problems for example. They don't have the computational horsepower.

    And I don't really directly compare neurons to transistors. I just use them as a kind of back-of-the-envelope calculation. In reality it would probably take many transistors to give the functionality of a neuron. A transistor is a simple switch. A neuron's output is a complex and changeable function of many inputs.

    Yes, claiming that the universe is algorithmic is simply claiming that the universe is a continuous spreading chain of cause and effect and that the essence of that chain can be captured in a computer program. This is what we are left with when we dump the teleological language.

    For example, I can model a tornado with a computer. I can use that model to predict what damage the tornado will do. It is a tool for understanding tornadoes. This is the sense in which a tornado is algorithmic.
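
    Spelled out, the algorithmic character of such a model is just: next state = rule(current state), applied over and over. A minimal sketch (the "physics" here is an invented toy oscillator, nothing like a real tornado model):

        def step(x, v, dt=0.01, k=4.0, damping=0.5):
            a = -k * x - damping * v       # the "cause": a force law
            return x + v * dt, v + a * dt  # the "effect": updated state

        x, v = 1.0, 0.0                    # invented initial conditions
        for _ in range(500):
            x, v = step(x, v)
        print(round(x, 3), round(v, 3))    # state after 5 simulated seconds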

    In principle I should be able to model the causal chain of a brain the same way. I should be able to use it to explore and explain the causal chain of the brain. If consciousness or intelligence or any other supposed property of the brain arises from that causal chain then an accurate enough computer model should show how.

    The difference is that a computer model of a tornado cannot actually destroy my house. But a computer model of a brain if given control of a body should be able to walk and talk as a human would.

    When did the model become the thing? This is where zombie arguments come in.

    I'm not trying to defend Chalmers or any specific zombie argument. But in order to understand the pull of zombie arguments you need to understand the universality of computers and how they are used as a tool of reductionism. Any denial that the brain can be programmed into a computer begins to feel like a rejection of reductionism. It's exactly like claiming that there are some aspects of a tornado that cannot even in principle be modeled and predicted with a computer, and so are beyond any possibility of understanding.

    Try thinking about AI from an engineering point of view. What do you need beyond a computer's ability to model any possible causal chain?

  26. ppnl,

    no offense taken, don't worry. I actually agree with many of your points; I just don't see how they get us anywhere close to Chalmers' wild conclusions.

  27. I was not trying to defend Chalmers' position. I was only pointing out what I saw as weaknesses in yours: namely, that modern computers have nowhere near the processing power of our brain, and that understanding the brain will involve taking it apart and understanding it as a set of algorithmic processes. The computer will be an indispensable tool for doing this. And doing this will involve being able to run those algorithmic processes on computer hardware.

  28. I was subjected to a 'debate' about Ray Kurzweil's book concerning the same thing.

    The postmodernist doing cleanup simply explained that Kurzweil was engaged in solipsistic discourse.

    Which sadly, was all that needed to be said.

  29. G most likely is the g factor, the general factor of intelligence. Intelligence is not that difficult to measure. Look into psychometrics if you want to understand how IQ tests work, etc.

    Replies
    1. In the last three years, IBM created Watson:

      http://en.wikipedia.org/wiki/Watson_%28computer%29

      I wonder what a computer designed for IQ tests would score?

