Hans wrote:
>You're right about me making some wrong assumptions, I've been reading
>about neural networks a bit more and some of my original assumptions were
>way off base. What I need to do is experiment with them for a while to see
>what useful things can be done with neural nets. Then try and figure out
>what the most useful form of an opcode would be. I will review your earlier
>comments concerning this. I thought it might be useful to write some
>temporary opcodes for experimenting though to make things easier.
It is important to understand that neural networks don't do anything
magical. Their mystique comes in no small part from their evocative
name, which generally misrepresents what they actually do. That is not
to say the pattern-recognition literature surrounding ANNs is poor;
there is much good work there to learn from. But if they were called
by a more accurate and less flashy term such as "gradient-descent
function approximation heuristics through iterative feedback," I
suspect the interest in them would be much less.
The only thing ANNs do is provide a technique for approximating unknown
discrimination functions. At this level of abstraction, they aren't
doing anything different from myriad other methods such as Gaussian
Mixture Models, K-Nearest Neighbor, or various recursive-spatial-
partitioning schemes. Many of these other methods are easier to
train and use, computationally simpler, and for certain sorts of
problems give better results than ANNs do. ANNs are just one out of
a large world of formalized pattern-recognition techniques. There is
nothing "brainlike" or "neuralistic" about their operation.
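To make the point concrete, here is a minimal sketch of my own (not from
the discussion above): a 1-nearest-neighbor classifier in plain Python,
approximating an unknown discrimination function from labeled samples.
The data and labels are invented for illustration; the point is that this
handful of lines does the same abstract job an ANN does, with far less
machinery.

```python
import math

def nn_classify(train, point):
    """Return the label of the training sample closest to `point`.

    `train` is a list of ((x, y), label) pairs.  The classifier simply
    memorizes the samples and discriminates by proximity -- no training
    loop, no gradients, no "neurons".
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    _, label = min(train, key=lambda sample: dist(sample[0], point))
    return label

# Toy data: two classes separated along x.  The "unknown" discrimination
# function being approximated here is just x < 0 versus x >= 0.
train = [((-2.0,  0.5), "A"), ((-1.5, -0.3), "A"),
         (( 1.8,  0.2), "B"), (( 2.2, -0.6), "B")]

print(nn_classify(train, (-1.0, 0.0)))  # nearest samples are class A
print(nn_classify(train, ( 2.0, 0.1)))  # nearest samples are class B
```

Swap the body of `nn_classify` for a trained network and the interface,
and the role it plays, would be exactly the same.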
My rant for the day! :)
Best,
-- Eric