Sunday 21 August 2011

IBM Makes Brainy Breakthrough in Computing

IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips Thursday. The chips are designed to mimic the brain's abilities of perception, action and cognition. The development could lead to advances in computers that require much less power and space than current technology.

IBM's first neurosynaptic computing chips recreate the spiking neurons and synapses of biological systems such as the brain by using advanced algorithms and silicon circuitry. The first two prototype chips have been fabricated and are undergoing testing.

Systems built with these chips will be called "cognitive computers" and won't be programmed the same way as traditional computers. Cognitive computers are expected to learn through experiences, find correlations, create hypotheses and remember, mimicking the brain's structural and synaptic plasticity.

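To make the ideas of spiking neurons, synapses and synaptic plasticity a little more concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron with a simple Hebbian-style weight update. It is purely illustrative: the class, constants and update rule are invented for this example and do not reflect IBM's chip design.

```python
# Illustrative sketch only: a leaky integrate-and-fire neuron with a
# simple Hebbian-style synaptic update. All names and constants are
# invented for this example; this is not IBM's hardware or software.

class SpikingNeuron:
    def __init__(self, n_inputs, threshold=1.0, leak=0.9, learn_rate=0.05):
        self.weights = [0.2] * n_inputs   # synaptic strengths
        self.potential = 0.0              # membrane potential
        self.threshold = threshold        # firing threshold
        self.leak = leak                  # decay factor per time step
        self.learn_rate = learn_rate      # plasticity step size

    def step(self, input_spikes):
        """Integrate one time step of binary input spikes; return True if the neuron fires."""
        self.potential = self.potential * self.leak + sum(
            w * s for w, s in zip(self.weights, input_spikes)
        )
        if self.potential >= self.threshold:
            # Hebbian-style plasticity: strengthen the synapses whose inputs
            # were active when the neuron fired.
            self.weights = [
                w + self.learn_rate if s else w
                for w, s in zip(self.weights, input_spikes)
            ]
            self.potential = 0.0          # reset after the spike
            return True
        return False


neuron = SpikingNeuron(n_inputs=3)
pattern = [1, 1, 0]                       # a recurring input pattern
for t in range(20):
    if neuron.step(pattern):
        print(f"t={t}: spike, weights={neuron.weights}")
```

As the active synapses strengthen, the neuron fires more readily on the repeated pattern; in the chips the article describes, this kind of integrate-and-fire behavior is implemented directly in silicon circuitry rather than simulated in software.
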
IBM is combining principles from nanoscience, neuroscience and supercomputing to kick off a multi-year cognitive computing initiative. IBM and its university collaborators have been awarded approximately US$21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for phase two of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project. SyNAPSE's goal is to create a system that can analyze complex information from multiple sensory modalities at once and dynamically rewire itself to adapt to the environment.

Computerized Learning, My Dear Watson

Giving a computer the ability to learn changes the way it works. Today's computers react to human input and programming, while a learning computer would process information differently. The new chip is a step closer to actual learning than even the Watson computer used on "Jeopardy" to respond to questions.

"You can think of this like Watson, since both systems have the ability to learn," Kelly Sims, communications manager at IBM Research, told TechNewsWorld. "It has to be a learning system because you can't program a computer for every possible question. The computer has to be able to guess what the different data mean. Typical computer calculations are limited by programming. You can't program a computer to know everything."

IBM is applying its hardware expertise to fabricate a new type of computer, one that is more aware of its surroundings and ready to learn from them.

"What we're trying to do is create something that can take in different information from senses and come up with an understanding of what is going on," said Sims. "Watson was designed for deep questions and answers, to understand and find answers in natural language. The underlying hardware involved is the same hardware we've been using for 50 years."

Computers today are running up against the walls of physics, which could stymie further progress toward faster and deeper computation.

"Computers today face a number of problems. One is Moore's Law. We're coming to physical limitations. You have to make the computer go faster to get more out of them. We could build a computer that could know almost everything, but it would be the size of Manhattan and it would take just as much energy," Sims said.

Sims notes that the new chip is completely different from previous computers, even Watson.

"We found a separation in the road from the old computers to a new direction, and this chip is the new way," Sims said. "For some things, the regular computer is perfect. It's been left brain. Now we're adding right-brain capability."

AI for Real?

This new chip brings technology yet another step closer to replicating human-like intelligence. The adaptation of brain-like structures into computer chips could be a leap toward a much more complex artificial intelligence.

"This is a pretty cool development," Roger Kay, founder and principal of Endpoint Technologies, told TechNewsWorld. "It's complicated, but essentially, it arranges computing elements like processing, memory, and communications in a manner closer to the way our brains store and send information in neurons, axons and synapses."

Among the most important changes, the processors are smaller and less complex, and each is paired with its own memory, Kay noted.

"The difference is the way individual elements communicate in a many-to-many fashion," he said.

"This computing engine has to be hooked up to inputs such as sensors -- eyes and ears -- and outputs like actuators -- hands and mouths," said Kay. "When a sensor sees the color red, a 'red' neuron fires, alerting many others around it with the message 'I have red.' Another neuron associated with a fire engine, one with a Ferrari, and one with blood would all fire back, maybe with a query, 'Is it liquid? Does it make noise?' And others in those departments fire off input requests. An assessment comes back, 'makes a high-pitched whine,' and several neurons team up to make the output 'could be Ferrari.' Eventually, this sort of computing architecture could change many things, but not for many years. This is a long-term research project."

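Kay's example describes an associative, many-to-many style of message passing between simple elements. Below is a small, purely illustrative Python sketch of that idea; the concept names, features and scoring rule are invented for this example and have nothing to do with IBM's actual architecture.

```python
# Purely illustrative sketch of many-to-many "neuron" messaging, loosely
# following Kay's red/Ferrari example. Names, features and the scoring
# rule are invented for this example; this is not IBM's architecture.

from collections import defaultdict

class ConceptNeuron:
    def __init__(self, name, features):
        self.name = name
        self.features = set(features)   # features this concept expects
        self.evidence = 0               # how many observed features match

    def receive(self, feature):
        """Accumulate evidence when an observed feature matches this concept."""
        if feature in self.features:
            self.evidence += 1

class Network:
    def __init__(self):
        self.neurons = []
        self.listeners = defaultdict(list)   # feature -> neurons that care about it

    def add(self, neuron):
        self.neurons.append(neuron)
        for feature in neuron.features:
            self.listeners[feature].append(neuron)

    def broadcast(self, feature):
        """Many-to-many delivery: every neuron associated with the feature is notified."""
        for neuron in self.listeners[feature]:
            neuron.receive(feature)

    def best_guess(self):
        """The concept with the most matching evidence so far."""
        return max(self.neurons, key=lambda n: n.evidence)


net = Network()
net.add(ConceptNeuron("fire engine", ["red", "siren", "large"]))
net.add(ConceptNeuron("Ferrari", ["red", "high-pitched whine", "fast"]))
net.add(ConceptNeuron("blood", ["red", "liquid"]))

# A sensor "sees red", then another input reports a high-pitched whine.
net.broadcast("red")
net.broadcast("high-pitched whine")
print("could be", net.best_guess().name)   # -> could be Ferrari
```

The point of the sketch is the communication pattern Kay describes: one event fans out to every element associated with it, and a guess emerges from whichever element accumulates the most matching evidence.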
