Just wondering: we've reached 1 teraflop per PC, yet we are still not able to model an insect's brain. Has anyone seen a decent implementation of a self-learning, self-developing neural network?

Where did you hear that PCs have reached a teraflop? The fastest PC processor I know of is 70 gigaflops... and if you meant supercomputers, well, those breached the petaflop barrier last year. – DeadHead May 16 '09 at 23:09
Sorry, I meant with the GPU included. A CUDA-enabled GPU is able to run a neural-net propagation algorithm very efficiently. Not sure about learning and node weight adaptation - this should probably be handled by the main CPU. – Andy May 17 '09 at 07:19
I think this is less a question of the size/speed of a neural network and more a question of whether the extra complexities of cognition and learning in brains can be modelled using those techniques. Maybe some meeting of neural networks/perceptrons and cellular automata? – Aiden Bell May 17 '09 at 11:26
Similarly, anyone have any thoughts on the Singularity? – Aiden Bell May 17 '09 at 18:31
An update as of 2013: the world's 4th fastest supercomputer was able to simulate 1 second of activity on a network equivalent in size to 1% of the human brain. This took 40 minutes and 83,000 processors. So we're really still not there yet. However, this does suggest that exascale computing will enable such simulations. – seaotternerd Dec 10 '13 at 04:15
8 Answers
I saw an interesting experiment mapping the physical neural layout of a rat's brain onto a digital neural network, with weightings modelled on the neuron chemistry of each component, captured using MRI and other techniques. Quite interesting. (New Scientist or Focus, two issues ago?)
IBM's Blue Brain project also comes to mind: http://news.bbc.co.uk/1/hi/sci/tech/8012496.stm
The problem is computational power, as you rightly point out. But for a sequence of stimuli to a neural network, the amount of calculation tends to balloon as each stimulus fans out into deeper nested nodes, and any complex weighting algorithm means the time spent at each node gets expensive. Domain-specific neural maps tend to be quicker because they are specialized. Brains in mammals have many general-purpose paths, making it harder to teach them, and harder for a computer to model a real mammalian brain in a given amount of space/time. A rough sketch of the cost is below.
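To make the cost point concrete, here is a minimal sketch (layer widths, weights, and the regular feed-forward structure are all made up for illustration; real neural tissue is nothing like this tidy): the work per stimulus scales with the total number of connections, so it grows with both depth and fan-out.

```python
import numpy as np

# Hypothetical layer widths -- purely illustrative.
layer_sizes = [1000, 5000, 5000, 1000]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(stimulus):
    """Propagate one stimulus through every layer.

    Each layer costs roughly m * n multiply-adds, so deeper nesting
    and wider fan-out both make a single stimulus more expensive.
    """
    activation = stimulus
    for w in weights:
        activation = np.tanh(activation @ w)  # weighted sum + squashing
    return activation

total_connections = sum(w.size for w in weights)
print(f"multiply-adds per stimulus: ~{total_connections:,}")  # ~35,000,000
```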
Real brains also have tons of cross-talk, like static (some people think this is where creativity or original thought stems from). Brains also don't learn using 'direct' stimulus/reward ... they use past experience of unrelated matters to create their own learning. Recreating the neurons in a computational space is one thing; recreating accurate learning is another. Never mind the dopamine (octopamine in insects) and the other neurological chemicals.
Imagine giving a digital brain LSD or antidepressants, as a real simulation. Awesome. That would be a complex simulation, I suspect.

More complexity still: by the time the human brain is fully developed, *each individual neuron is a distinct entity*, genetically and functionally. ~20 years of asynchronous mutagen & neurotransmitter exposure, as well as developmental differences and plasticity behaviors (certain neurons filling pathway gaps like aluminum tape in a reactor core) between individual cells, make a canonical model of a human brain a nebulous construct, if not a useless one. And that's before you approach the question of simulating the recently-discovered quantum mechanical components of biological systems... – manglano Aug 24 '15 at 15:07
I think you're kind of making the assumption that our idea of how neural networks work is a good model for the brain at a large-scale level; I'm not sure that is a good assumption. Hell, not too many years ago we didn't think glial cells were important to mental functions, and for a long time it was believed that there is no neurogenesis after the brain matures.
On the other hand, neural networks do seem to handle some apparently complex functions pretty well.
So, here's a little puzzle question for you: how many teraflops or petaflops do you think a human brain's computation represents?

More than we have. We stand more chance of growing a human brain and giving it digital input/output. Maybe overclock it or specialize it a bit. Emotion-driven computing using artificial chemical stimuli. It could invoke pain receptors on wrong predictions. – Aiden Bell May 16 '09 at 22:47
:) if the brain has a problem with it, I will plug in my neuro-leads and have a virtual duel with our virtual bodies. Unless it's female. Then it's different. – Aiden Bell May 16 '09 at 22:53
Well, we can make an estimate. The brain has about 100 billion = 10^11 neurons, and each neuron solves a Hodgkin-Huxley equation roughly every 100 milliseconds. The HH equation takes on the order of 10,000 floating-point operations to solve once. So I get 10^16 FLOPS - that's what, 10 petaflops, no? Of course, then the question is *what values* to feed to each of those HH solutions. – Charlie Martin May 16 '09 at 23:08
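For concreteness, here is that back-of-the-envelope arithmetic as a minimal sketch (the constants are just the rough figures from the comment above, not measured values):

```python
# Rough figures from the comment above -- order-of-magnitude guesses only.
neurons = 1e11           # ~100 billion neurons
solve_interval_s = 0.1   # one Hodgkin-Huxley solve per neuron per 100 ms
flop_per_solve = 1e4     # ~10,000 floating-point ops per HH solve

solves_per_second = neurons / solve_interval_s  # 1e12 solves/s
flops = solves_per_second * flop_per_solve      # 1e16 FLOPS

print(f"~{flops:.0e} FLOPS (about {flops / 1e15:.0f} petaflops)")
```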
Haha, well, personally, I don't believe that computing power is going to be the issue here. The major problem is designing the AI and having it function properly, and learn as fast as humans (or some other animal) do. – DeadHead May 16 '09 at 23:24
Petaflops is not the right kind of benchmark here. It would be a game-changing event if we made something that could do the same kinds of things a human brain can, only 100x slower. – Albinofrenchy May 17 '09 at 06:56
Jeff Hawkins would say that a neural net is a poor approximation of a brain. His "On Intelligence" is a terrific read.

Yes! You may want to check www.numenta.com and their NuPIC software. This is based on Hierarchical Temporal Memory technology, itself based on concepts developed by Jeff Hawkins in that book. – mjv Sep 08 '09 at 06:49
It's the structure. Even if we had computers today with performance equal to or higher than a human brain's (there are different predictions of when we'll get there, but there are still a few years to go), we would still need to program it. And while we know a lot about the brain today, there are still many, many more things we do not know. And these aren't just details, but large areas that are not understood at all.
Focusing only on the tera-/peta-FLOPS is like looking only at megapixels with digital cameras: it focuses on a single value when there are many factors involved (and there are a few more of those in a brain than in a camera). I also believe that many of the estimates of just how many FLOPS would be needed to simulate a brain are way off - but that's a different discussion altogether.

Robert, that's exactly what I meant by the initial question - the processing power is sort of there, but we realise there is absolutely no understanding of how to use this power to model even a simple learning process. (An idea for a startup? :-) – Andy May 17 '09 at 07:23
You might be able to evaluate the potential for a startup if you are an AI expert. These things are usually university spinouts. – Aiden Bell May 17 '09 at 11:42
> Just wondering, we've reached 1 teraflop per PC, and we are still not able to model an insect's brain. Has anyone seen a decent implementation of a self-learning, self-developing neural network?
We can already model brains. The question these days is how fast, and how accurately.

In the beginning, effort was expended on finding the most abstract representation of neurons, with the fewest physical properties needed.
This led to the invention of the perceptron at Cornell University, which is a very simple model indeed. Perhaps too simple: the famous MIT AI professor Marvin Minsky (with Seymour Papert) published an analysis showing that a single-layer perceptron cannot even compute XOR (a basic logic gate that every computer we have today can emulate). The result was widely over-generalized to neural networks as a whole, and it helped plunge neural network research into the dark ages for at least 10 years. Adding just one hidden layer fixes the XOR limitation, as the sketch below shows.
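Here is a minimal sketch of that point (weights are picked by hand, not learned): no single threshold unit can separate XOR's inputs, but a two-layer network built from the same kind of units computes it directly.

```python
import numpy as np

def step(x):
    """Threshold unit: fire (1) when the weighted sum is positive."""
    return (x > 0).astype(int)

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer (hand-picked weights):
#   h1 = OR(x1, x2)  -- fires when x1 + x2 > 0.5
#   h2 = AND(x1, x2) -- fires when x1 + x2 > 1.5
h1 = step(x.sum(axis=1) - 0.5)
h2 = step(x.sum(axis=1) - 1.5)

# Output: fire for "OR but not AND" -- which is exactly XOR.
y = step(h1 - h2 - 0.5)

print(y)  # [0 1 1 0]
```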
While probably not as impressive as many would like, learning networks already exist that can do visual and speech learning and recognition.
And even though we have faster CPUs, a CPU is still not the same as a neuron. Neurons in our brain are, at the very least, parallel adder units. So imagine 100 billion simulated human neurons, adding in parallel, sending their outputs across 100 trillion connections with a "clock" of about 20 Hz. The amount of computation going on here far exceeds the petaflops of processing power we have, especially since our CPUs are mostly serial instead of parallel. A rough tally is sketched below.
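As a rough tally of that claim (using the round numbers from the answer, which are themselves order-of-magnitude estimates):

```python
# Round numbers from the answer above -- order-of-magnitude estimates only.
neurons = 1e11      # ~100 billion neurons
connections = 1e14  # ~100 trillion connections (synapses)
clock_hz = 20       # each connection carrying output ~20 times per second

synaptic_adds_per_second = connections * clock_hz  # 2e15

# Treating each synaptic update as one add, that is ~2 quadrillion
# perfectly parallel operations per second -- on the order of a
# petaflop supercomputer's peak, but without any serial bottleneck.
print(f"~{synaptic_adds_per_second:.0e} synaptic adds per second")
```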

It is for this reason, I think, that FPGAs are better suited to simulating a brain than a traditional CPU. – Earlz Feb 05 '11 at 04:34
In 2007, they simulated the equivalent of half a mouse brain for 10 seconds at half the actual speed: http://news.bbc.co.uk/1/hi/technology/6600965.stm

There is a worm called C. elegans, and its anatomy is completely known to us: every cell is mapped out and every neuron has been well studied. This worm has an interesting innate property: it follows, or grows towards, only those temperature regions in which it was born. Here is a link to the paper, which implements this property with a neuronal model. Some students have also built a robot that follows only dark regions in an area with different shades of light, using this neuronal model. The same behaviour could have been achieved with other methods, but this one is more noise-resilient, as shown in the paper linked above. The flavour of such a controller is sketched below.
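The paper's actual neuronal model isn't reproduced here; as a stand-in, here is a minimal Braitenberg-style sketch of the dark-following behaviour described above (the function name and constants are made up for illustration):

```python
def steer_toward_dark(left_light, right_light, gain=1.0):
    """Toy two-sensor controller: each light sensor drives the motor on
    its own side, so the brighter side's wheel spins faster and the
    robot turns away from light, toward darker regions.
    (Illustrative only -- not the neuronal model from the paper.)
    """
    left_motor = gain * left_light
    right_motor = gain * right_light
    return left_motor, right_motor

# Bright on the left, dim on the right -> left wheel faster -> the robot
# turns right, toward the darker region.
print(steer_toward_dark(left_light=0.9, right_light=0.2))  # (0.9, 0.2)
```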
