Why are neural networks always described as so complicated you can't even use it?

.

I don’t think they’re always described like that, but they are complex, yes.

What’s your source for that description?

.

Because a bunch of people say it's going to be a mess, and the most important thing about neural networks and the next generation of signal processing and software is that AI is going to be built on VR, and nobody knows that, and then they brag about it.

.

A2A: I don’t think that neural networks are “always” described that way. People can and do use various forms of artificial neural network technology for many kinds of useful applications.


The complexity in using these things comes from a lack of good theory to tell the user what size and shape of network will be best (or good enough) for a given problem. By that I mean the number of layers, number of units in each layer, activation functions, hyper-parameters such as the learning rate, when and where to use features such as convolutional or recurrent layers, and so on.
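
To make that concrete, here is a rough sketch (in PyTorch, which is just one option) of where each of those guesses shows up; every specific number below is an arbitrary choice of exactly the kind described above, not a recommendation.

# Each commented line marks one of the "guesses" mentioned above.
import torch
import torch.nn as nn

model = nn.Sequential(        # how many layers? guessed
    nn.Linear(784, 128),      # how many units per layer? guessed
    nn.ReLU(),                # which activation function? guessed
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate? guessed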


So generally you guess what kind of network will do best on a problem, and you usually end up trying a lot of different guesses before you find one that works. The guessing gets better (at least a little bit) as the user gains experience, but it’s still a black art.
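
In practice that trial and error often amounts to a brute-force loop over candidate configurations, something like the sketch below; the candidate values are arbitrary, and train_and_score is only a stand-in for whatever training-plus-validation routine you already have.

import itertools
import random

def train_and_score(width, lr):
    # Placeholder for a real "train the network, measure validation accuracy" step.
    return random.random()

layer_widths = [64, 128, 256]        # arbitrary candidate guesses
learning_rates = [1e-2, 1e-3, 1e-4]  # arbitrary candidate guesses

best = None
for width, lr in itertools.product(layer_widths, learning_rates):
    score = train_and_score(width, lr)
    if best is None or score > best[0]:
        best = (score, width, lr)

print("best guess so far:", best)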


It also is very hard to understand what is going on inside a network, so if it exhibits some anomalous behavior for certain inputs, it’s hard to figure out the reason or what the fix would be.
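
One partial remedy, sketched below with a toy PyTorch model, is to register forward hooks and log what each layer produced for the problematic input; it shows you the numbers, though not necessarily the reason.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))  # toy stand-in

def log_activations(name):
    def hook(module, inputs, output):
        print(name, output.detach().numpy())  # dump this layer's activations
    return hook

for name, module in model.named_modules():
    if name:  # skip the top-level container itself
        module.register_forward_hook(log_activations(name))

model(torch.randn(1, 4))  # feed the anomalous input and inspect what each layer did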

.

Because we are talking about using our neurological system to function as servers and workstations, plus hundreds of pieces of intermediary equipment, without any supporting neurological infrastructure. Experiments in using neurology as a means of communication were conducted more than ten years ago, but we still have not seen any application of it today, due to such challenges. There were, however, experiments along the same lines in the Soviet Union at the height of the Cold War. They were only distantly relevant to the idea, so the technology sometimes failed. Soviet spies were equipped with special methods of sending and receiving information through interaction with signs, objects, and verbal sentences (codes/scripted text). Each of them was fitted with a hidden sensor to decode hidden texts, which he could view inside his mind. They also used a technique of brain collaboration: exchanging electrical pulses that were decoded by a sensor and converted into images.

.

Well, people are using neural networks, so clearly there is some understanding.


Neural networks are only complicated, poorly understood, or hard to interpret if the underlying training data is poorly understood. NNs are useful because the geometry of the feature space is complex, multimodal, and confused. In fact, if there are separate sub-populations, with clear explanations that may be applied to particular groups, it is generally a good idea to train to those groups in subnets. It should be understood that if you have a handle on the geometry and complexity of the decision space, the required number of layers and nodes can be estimated.
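
Here is a hedged sketch of that "train to those groups in subnets" idea, with made-up data and a small scikit-learn network standing in for each subnet.

import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.random.randn(300, 5)                 # made-up features
groups = np.random.randint(0, 2, size=300)  # made-up, known sub-population label
y = (X[:, 0] > 0).astype(int)               # made-up target

# One small network per known sub-population, instead of one big confused net.
subnets = {}
for g in np.unique(groups):
    mask = groups == g
    subnets[g] = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X[mask], y[mask])

# At prediction time, route each example to the subnet for its group.
print(subnets[groups[0]].predict(X[:1]))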


The other part of the complexity is the training and convergence to optimum performance. This can get mind-bogglingly complex: depending on the problem’s complexity, the order of presentation, the initial conditions, and the character of the confusion boundaries, convergence can vary tremendously, and may be mathematically chaotic. Such is stochastic convergence.
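
The sketch below (toy data, arbitrary settings) shows that flavor of variability: the same tiny network, trained with different random seeds and presentation orders, can land at noticeably different final losses.

import torch
import torch.nn as nn

X = torch.randn(200, 3)  # fixed toy data shared by every run
y = torch.randn(200, 1)

for seed in range(3):
    torch.manual_seed(seed)  # different initial weights each run
    model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    for i in torch.randperm(len(X)).tolist():  # different order of presentation
        opt.zero_grad()
        loss = ((model(X[i:i+1]) - y[i:i+1]) ** 2).mean()
        loss.backward()
        opt.step()
    final = ((model(X) - y) ** 2).mean().item()
    print(f"seed {seed}: final mean squared error {final:.4f}")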

.

They are not so complicated, and it is still much easier to use them than to design them. One can use existing models even outside the deep learning frameworks; e.g., network models for vision can be used in OpenCV. Of course, for each model one has to know how to prepare suitable input to the network and how to interpret the output, and these processes are a bit complicated. See e.g. www.agentspace.org
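
For instance, OpenCV's dnn module can load a pretrained model exported to ONNX and run it without any deep learning framework installed. In this sketch the file names, input size, and scaling are placeholders, since each real model documents its own expected preprocessing and output format.

import cv2

net = cv2.dnn.readNetFromONNX("model.onnx")  # placeholder model file
image = cv2.imread("input.jpg")              # placeholder input image
blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0, size=(224, 224), swapRB=True)
net.setInput(blob)
output = net.forward()  # interpreting this output is the model-specific part
print(output.shape)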

.

I don’t think they are usually described as too complicated to use. From what I have heard, neural networks are described as so complicated largely because there’s still a lot we are learning about them theoretically in order to understand how best to design and use them.


There are still a lot of questions about how to construct the networks so that you can have a good generalizing model for whatever dataset you are investigating. What should the neural network architecture be? What activation function(s) should you use? What optimization scheme should you use? These questions and more ultimately influence the function space you will optimize over for a given problem, and there’s a lot we have yet to learn about these things.
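
To make that a bit more concrete, here is a small sketch (PyTorch, arbitrary values) where each of those open questions appears as a knob you simply have to pick; different picks give you different function spaces, and none of them is known in advance to be the right one.

import torch
import torch.nn as nn

def build(activation, optimizer_name, lr):
    # Architecture, activation, and optimizer are all explicit choices here.
    model = nn.Sequential(nn.Linear(10, 32), activation, nn.Linear(32, 1))
    opt_cls = {"sgd": torch.optim.SGD, "adam": torch.optim.Adam}[optimizer_name]
    return model, opt_cls(model.parameters(), lr=lr)

model_a, opt_a = build(nn.ReLU(), "sgd", 0.01)    # one set of answers to those questions
model_b, opt_b = build(nn.Tanh(), "adam", 0.001)  # another; neither is known to be better a priori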

.

Larger neural networks can have hundreds, thousands, even millions of neurons, and we have yet to fully understand what exactly is going on inside those networks. As a result we can’t really bug-check them: we can’t tell whether a neural network is working properly without testing it, which means bugs that didn’t appear in testing can show up during real use. For example: you could train a neural network to drive a car. During testing it might seem to be doing fine, so you let it out onto the real road. Then, out of nowhere, it drives straight into a Starbucks, and you have no idea why, because you don’t fully understand how the thing works.

.






https://www.quora.com/Why-are-neural-networks-always-described-as-so-complicated-you-cant-even-use-it/log
