
When science-fiction worlds introduce robots that look and behave like people, sooner or later those worlds' inhabitants confront the question of robot self-awareness. If a machine is built to truly mimic a human, its "brain" must be complex enough not only to process information as ours does, but also to achieve certain types of abstract thinking that make us human. This includes recognition of our "selves" and our place in the world, a state known as consciousness.
One example of a sci-fi struggle to define AI consciousness is AMC's "Humans" (Tues. 10/9c starting June 5). At this point in the series, human-like machines called Synths have become self-aware; as they band together in communities to live independent lives and define who they are, they must also battle for acceptance and survival against the hostile humans who created and used them.
But what exactly might "consciousness" mean for artificial intelligence (AI) in the real world, and how close is AI to reaching that goal?
Philosophers have described consciousness as having a unique sense of self coupled with an awareness of what's going on around you. And neuroscientists have offered their own perspective on how consciousness might be quantified, through analysis of a person's brain activity as it integrates and interprets sensory data.
However, applying those rules to AI is tricky. In some ways, the processing abilities of AI are not unlike those that take place in human brains. Sophisticated AI systems use a process called deep learning to solve computational tasks quickly, using networks of layered algorithms that communicate with each other to solve more and more complex problems.
It's a strategy very similar to that of our own brains, where information speeds across connections between neurons. In a neural network, deep learning enables AI to teach itself how to identify disease, win a strategy game against the best human player in the world, or write a pop song.
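The layered-network idea described above can be sketched in a few lines of code. This is a minimal illustration only, assuming nothing about any particular system from the article: the layer sizes, random weights, and inputs are all invented for the example, and a real deep-learning model would train its weights on data rather than leave them random.

```python
import numpy as np

# Illustrative feedforward network: layers of simple units pass signals
# forward, loosely analogous to neurons passing signals across
# connections. All sizes and weights are made up for this sketch.
rng = np.random.default_rng(0)

def relu(x):
    # Non-linearity that lets stacked layers represent complex functions
    return np.maximum(0.0, x)

# Three layers of weights: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
weights = [rng.standard_normal((4, 8)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((8, 2))]

def forward(x):
    # Each layer transforms the previous layer's output, which is how
    # later layers come to handle "more and more complex" problems
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]  # final linear layer: raw output scores

scores = forward(rng.standard_normal((3, 4)))  # a batch of 3 inputs
print(scores.shape)  # (3, 2): one pair of scores per input
```

Training (adjusting the weights from examples) is the part a human programmer still sets up, which is exactly the point the article makes next.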
But to accomplish these feats, any neural network still relies on a human programmer setting the tasks and selecting the data for it to learn from. Consciousness for AI would mean that neural networks could make those initial choices themselves, "deviating from the programmers' intentions and doing their own thing," Edith Elkind, a professor of computing science at the University of Oxford in the U.K., told Live Science in an email.