Strictly speaking, it is about two hypotheses:
Weak AI hypothesis: AI simulates human intelligence. Such a machine "would be useful for testing hypotheses about minds, but would not actually be minds."
Strong AI hypothesis: AI actually has human intelligence. "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." In other words, a computer that behaves as intelligently as a person must also necessarily have a mind and consciousness.
One has to distinguish between simulating a mind and actually having one. Searle wrote that "according to Strong AI, the correct simulation really is a mind," while "according to Weak AI, the correct simulation is a model of the mind."
The strong AI position: the machine literally understands.
The weak AI position: the machine merely simulates the ability to understand.
So all of today's AI is, at best, not only weak AI but also narrow AI.
Weak AI is focused on a specific task, for example, recognizing faces in images or translating from English to French. It needs a human to provide relevant training data, tune its learning hyperparameters, and so on. Generally, weak AI cannot transfer knowledge from one domain to another on its own. If you have a weak AI model trained to translate from English to French, it won't apply the logic it has learned to another language by itself. A human researcher, however, could manually take some of the learned parameters and use them as a starting point for training another translation model, such as English to Spanish.
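The manual transfer described above can be sketched in a few lines. This is a hypothetical illustration, not a real translation system: the "models" are just dictionaries of randomly generated weight matrices, and the names (`encoder`, `decoder`) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are parameters learned by an English->French model:
# an encoder that captured general English representations,
# and a French-specific decoder.
en_fr_model = {
    "encoder": rng.standard_normal((8, 8)),   # "learned" English encoder
    "decoder": rng.standard_normal((8, 8)),   # French-specific decoder
}

# A human researcher can manually reuse the encoder as the starting
# point for a new English->Spanish model; only the decoder is new.
en_es_model = {
    "encoder": en_fr_model["encoder"].copy(),  # transferred parameters
    "decoder": rng.standard_normal((8, 8)),    # fresh Spanish decoder
}

# The weak-AI point: the model does not do this transfer by itself;
# a person decides which parameters carry over.
print(np.array_equal(en_fr_model["encoder"], en_es_model["encoder"]))
```

The key point is that the decision about what to reuse is made by the human researcher, outside the model, which is exactly why this kind of system counts as weak, narrow AI.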
Strong AI, also known as artificial general intelligence, is AI that can reason on its own, transfer knowledge across domains, and be conscious of the world around it. For example, a robot that can appropriately answer questions and act in situations it has not encountered before would be an example of strong AI. Ultimately, strong AI should be able to do everything a human can do.