chickenhead
11-13-2006, 01:13 PM
Have been reading this book called "The Singularity is Near" http://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0670033847
I bought this book not expecting to buy into his overall conclusions, but more because it seemed like a decent primer on some very interesting developments in AI and human brain research.
My conclusion after the first chapter was that this guy is certainly a bit kooky. What he envisions may well be our destiny, but I don't see it happening any time soon. In other words, the human race may ultimately head toward his idea of utopia, but I think it will be a different wave of people who get there.
BUT, there are many things to be optimistic/scared about, I think, so long as we don't destroy ourselves in the very near future (we will likely try very hard). This book certainly has me more interested in AI research, as that seems to be the linchpin for almost all of his predictions.
The idea that we are quickly nearing the point where we will have the computational ability to fully simulate the human brain, coupled with the exponential increase in sophistication of our brain-modeling tools, could lead to some very, very interesting things.
The central idea is that once we have developed an AI system that is even slightly smarter than ourselves, we will quickly see an exponential increase in intelligence, because the AI will be able to modify itself, in essence building ever more intelligent successors.
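Just to play with the idea, here's a toy simulation of that feedback loop. None of this is from the book; the numbers (a flat 10% improvement per generation) are completely made up, just to show how a constant multiplicative gain compounds into exponential growth:

```python
# Toy model of recursive self-improvement: each generation of AI
# designs a successor that is a fixed factor smarter than itself.
# The improvement factor is an arbitrary, illustrative assumption.

def simulate(generations, start=1.0, improvement=1.1):
    """Return capability levels, one per generation, starting at `start`.

    Each step multiplies the previous level by `improvement`,
    so growth is exponential: start * improvement**n.
    """
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * improvement)
    return levels

levels = simulate(50)
print(f"After 50 generations: {levels[-1]:.1f}x the starting level")
```

Even a modest 10% gain per generation gives over a 100x increase after 50 generations, which is the whole intuition behind the "takeoff" argument: the scary part isn't the first slightly-smarter system, it's the compounding.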
It's interesting and frightening to think that all these sorts of things are possibly much, much closer than we generally think. According to this guy at least, who is taken seriously by some pretty well-respected people, we are looking at these sorts of systems within a short few decades.