This raises the question of whether Watson is really answering questions at all or is just noticing statistical correlations in vast amounts of data. But the mere act of building the machine has been a powerful exploration of just what we mean when we talk about knowing.
The above is a very good example of the problems that arise when one pretends that an ill-defined term can be definitively compared to a well-defined term (or to another ill-defined term, for that matter), and it is representative of the confusions Wittgenstein and Russell saw in philosophy.
Here is a breakdown of the problems with the paragraph:
- Watson is clearly answering questions. He has been given questions and has provided their answers, so there is no question there. There is a separate question about whether or not he is thinking, but thinking is by no means necessary for answering questions.
- By our understanding of neurobiology over the past few decades, thinking and knowing appear to be statistical. While human brains clearly lack the logical-reasoning capacity provided by the reasoning engine within Watson, the implied distinction between 'knowing' and 'just noticing statistical correlations' is quite probably imaginary.
- Noticing statistical correlations is enormously difficult, especially at the scale at which it occurs in this case. Saying that a machine is 'just noticing statistical correlations' is like saying that a turtle is 'just moving an eighteen-wheeler around with his mind'.
As the title of the post implies, I consider this kind of confusion to be the source of other 'controversies' like the Chinese Room problem.