>>1477727
Emotions are an intrinsic part of intelligence, though. What we experience as emotion is a flexible, goal-oriented decision-making process that makes us adaptable to new environments and allows us to pursue multiple goals.
Like... the classic AI motivations cliche is the three laws, right? I'm sure you know them. That's a hierarchical system - the first law always trumps the second, the second always trumps the third, et cetera. And as explored in many science fiction stories, it's inflexible and has negative consequences. The same system represented emotionally would have the first law as the thing we most want to do, but with the strength of the impulse varying with the situation - saving three humans would be a more emotionally powerful impulse than saving one, for example. The second and third laws couldn't outweigh it on their own, but if they stacked up enough (the orders of a hundred humans and the robot's own survival versus allowing some small harm to come to one human, for example) then the combined emotional impulse of those two lesser laws could overpower the impulse of the first law.
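To make the stacking idea concrete, here's a toy sketch - the impulse names and weights are completely made up for illustration, the only point is that summed lesser impulses can beat a single stronger one, which a strict priority hierarchy never allows:

```python
# Toy sketch of the "three laws as weighted impulses" idea.
# All names and numbers are invented for illustration.

def choose(actions):
    """Pick the action whose summed impulse weights are highest."""
    return max(actions, key=lambda a: sum(a["impulses"].values()))

actions = [
    {   # First-law impulse: prevent a small harm to one human.
        "name": "protect_one_human",
        "impulses": {"prevent_harm": 5.0},
    },
    {   # Second- and third-law impulses stacking against it:
        # the orders of a hundred humans plus self-preservation.
        "name": "follow_mass_orders",
        "impulses": {"obey_orders": 0.05 * 100, "self_preservation": 1.0},
    },
]

print(choose(actions)["name"])  # -> follow_mass_orders: the stacked impulses win
```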
To provide another example, consider a simple predator. It's sitting in its den when some large prey animal comes stomping by. The predator has three simple options - attack, flee, ignore. How does it make its decision? It has a selection of impulses; for demonstration, we'll reduce them to hunger, fear, and fatigue. What does the predator do? As an organic creature, it doesn't follow simple laws. Rather, it weighs these impulses against each other. It doesn't go "I haven't eaten in X days, therefore I attack" - if fear or fatigue is high enough, or fear and fatigue are both moderately high, it won't attack. When you move this system to an intelligent, tool-using creature - and even more so an intelligent, tool-using SOCIAL creature, who has to predict the actions of other intelligent, tool-using social creatures - the system becomes orders of magnitude more complex, but the basic mechanism remains.
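If you want the predator's decision in code, a minimal sketch might look like this - assuming each option gets scored as a weighted sum of the current impulses, with weights invented purely for demonstration:

```python
# Weigh impulses against each other instead of following fixed rules.
# The coefficients are arbitrary demonstration values.

def decide(hunger, fear, fatigue):
    scores = {
        "attack": 2.0 * hunger - 1.5 * fear - 1.0 * fatigue,
        "flee":   2.0 * fear   - 0.5 * fatigue,
        "ignore": 1.0 * fatigue,
    }
    return max(scores, key=scores.get)

# A very hungry, rested, unafraid predator attacks...
print(decide(hunger=0.9, fear=0.1, fatigue=0.2))   # -> attack
# ...but the same hunger with high fear and fatigue doesn't.
print(decide(hunger=0.9, fear=0.7, fatigue=0.8))   # -> flee
```

No single impulse decides anything on its own - it's the balance that picks the action, which is the whole point.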
Our emotional impulses can also change over time - habituation, reinforcement, trauma and so on can make certain impulses stronger or weaker, or introduce new ones. This allows us to adapt our behaviour to new situations: we learn to be "instinctively" wary of a new predator when we move to a new environment, or we learn to associate happiness with a new kind of food and so become more likely to seek it out.
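In code terms, that's just the impulse weights drifting with experience. Here's a sketch using a simple reinforcement-style nudge - not any specific biological model, just the general shape of it:

```python
# Impulse weights that strengthen or weaken based on outcomes.
# The update rule and numbers are illustrative only.

weights = {"seek_new_food": 0.1, "avoid_new_predator": 0.1}

def reinforce(impulse, outcome, rate=0.2):
    """Nudge an impulse up or down based on how the outcome felt."""
    weights[impulse] += rate * outcome           # positive outcome -> stronger impulse
    weights[impulse] = max(0.0, weights[impulse])

reinforce("seek_new_food", +1.0)       # the new food was good: seek it more
reinforce("avoid_new_predator", +1.0)  # a close call: wariness grows
print(weights)  # both impulses are now stronger than they started
```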
Now, the whole point of being intelligent is so that you can come up with solutions to unforeseen problems - if you were made for only certain known problems, then you could just be preprogrammed with everything you need and not really be intelligent at all. Therefore, if you're going to give something true intelligence, it needs a certain ability to learn, adapt, and incorporate new information, and to make its decisions through a "weighting" system rather than a strict hierarchical "laws" system. Any such system will essentially be a system of emotions.
Now, they might not be the same emotions we have. They might be strange, alien emotions, suited to non-human goals and priorities. But they'd be some kind of emotions. When we talk about "free will", we can leave aside questions of determinism and instead take it as a question of "can this change?" Are you stuck with certain behaviours, or can you eventually free yourself of them? For humans, the answer is that we can change - by training yourself, perhaps with help, you can learn to stop doing things you used to feel compelled to do, or to gain new compulsions. We can call this free will, of a sort. For artificial intelligences, I would say that any competently made true intelligence would, indeed must, have the same capability, because without that flexibility, what's the point of having so much intelligence in the first place?