Simple multi-layer neural network implementation in Python
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow.
Original source: http://stackoverflow.com/questions/15395835/
Simple multi-layer neural network implementation
Asked by Animattronic
Some time ago I started my adventure with machine learning (during the last 2 years of my studies). I have read a lot of books and written a lot of code with machine learning algorithms EXCEPT neural networks, which were out of my scope. I'm very interested in this topic, but I have a huge problem: all the books I have read have two main issues:
- They contain tons of math equations. After reading them I'm quite familiar with the math, and I can do the calculations by hand on paper.
- They contain big examples embedded in some complicated context (for example, investigating the sales rates of an internet shop), and to get to the neural network implementation itself, I have to write a lot of code just to reproduce the context. What is missing is a SIMPLE, straightforward implementation without a lot of context and equations.
Could you please advise me where I can find a SIMPLE implementation of a multi-layer perceptron (neural network)? I don't need theoretical knowledge, and I don't want context-embedded examples either. I prefer a scripting language to save time and effort; 99% of my previous work was done in Python.
Here is the list of books I have read before (in which I did not find what I wanted):
- Machine Learning in Action
- Programming Collective Intelligence
- Machine Learning: An Algorithmic Perspective
- Introduction to Neural Networks in Java
- Introduction to Neural Networks in C#
Answered by Salem
Have you tried PyBrain? It seems very well documented.
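For illustration, here is a minimal sketch in the spirit of PyBrain's documented quickstart (buildNetwork, SupervisedDataSet, and BackpropTrainer), training a tiny XOR network; the layer sizes and epoch count are just illustrative, not part of the original answer:

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

net = buildNetwork(2, 3, 1)   # 2 inputs, one hidden layer of 3 neurons, 1 output
ds = SupervisedDataSet(2, 1)  # dataset holding 2-d inputs and 1-d targets
for sample, target in [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))]:
    ds.addSample(sample, target)
trainer = BackpropTrainer(net, ds)
trainer.trainEpochs(100)      # 100 passes of backpropagation over the dataset
print(net.activate([0, 1]))   # should move towards 1 as training progresses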
Answered by Pedrom
Hmm, this is tricky. I had the same problem before, and I couldn't find anything between good but heavily math-loaded explanations and ready-to-use implementations.
The problem with ready-to-use implementations like PyBrain is that they hide the details, so people interested in learning how to implement ANNs need something else. Reading the code of such solutions can be challenging too, because they often use heuristics to improve performance, which makes the code harder for a beginner to follow.
However, there are a few resources you could use:
http://msdn.microsoft.com/en-us/magazine/jj658979.aspx
http://itee.uq.edu.au/~cogs2010/cmc/chapters/BackProp/
http://www.codeproject.com/Articles/19323/Image-Recognition-with-Neural-Networks
http://freedelta.free.fr/r/php-code-samples/artificial-intelligence-neural-network-backpropagation/
Answered by schreon
Here is an example of how you can implement a feedforward neural network using numpy. First import numpy and specify the dimensions of your inputs and your targets.
import numpy as np
input_dim = 1000
target_dim = 10
We will build the network structure now. As suggested in Bishop's great "Pattern Recognition and Machine Learning", you can simply treat the last row of each numpy weight matrix as the bias weights and the last column of each activation matrix as the bias neurons. The input/output dimensions of the first/last weight matrix then need to be 1 greater.
dimensions = [input_dim+1, 500, 500, target_dim+1]
weight_matrices = []
for i in range(len(dimensions)-1):
    # map layer i to layer i+1 (random initialization would be preferable in practice)
    weight_matrix = np.ones((dimensions[i], dimensions[i+1]))
    weight_matrices.append(weight_matrix)
If your inputs are stored in a 2d numpy matrix, where each row corresponds to one sample and the columns correspond to the attributes of your samples, you can propagate through the network like this (assuming the logistic sigmoid function as the activation function):
def activate_network(inputs):
    activations = []  # we store the activations for each layer here
    a = np.ones((inputs.shape[0], inputs.shape[1]+1))  # add the bias column to the inputs
    a[:,:-1] = inputs
    for w in weight_matrices:
        x = a.dot(w)  # sum of weighted inputs
        a = 1. / (1. + np.exp(-x))  # apply the logistic sigmoid activation
        a[:,-1] = 1.  # bias for the next layer
        activations.append(a)
    return activations
The last element in activations will be the output of your network, but be careful: you need to omit the additional column for the biases, so your output will be activations[-1][:,:-1].
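As a quick sanity check, continuing the variables defined above (the sample count of 5 is arbitrary):

inputs = np.random.rand(5, input_dim)          # 5 random samples
outputs = activate_network(inputs)[-1][:,:-1]  # drop the bias column
print(outputs.shape)                           # (5, target_dim)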
To train the network, you need to implement backpropagation, which takes a few additional lines of code. Basically, you need to loop from the last element of activations to the first. Make sure to set the bias column of the error signal to zero for each layer before each backpropagation step.
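For illustration, here is a minimal sketch of such a backpropagation step for the network above, assuming a sum-of-squares error and the same logistic sigmoid units; the function name train_step and the learning rate are placeholders, not part of the original answer:

def train_step(inputs, targets, learning_rate=0.01):
    # forward pass; keep the bias-augmented input around as "layer 0"
    activations = activate_network(inputs)
    a0 = np.ones((inputs.shape[0], inputs.shape[1]+1))
    a0[:,:-1] = inputs
    layer_inputs = [a0] + activations[:-1]
    # pad the targets with a dummy bias column so shapes match the output layer
    t = np.ones((targets.shape[0], targets.shape[1]+1))
    t[:,:-1] = targets
    # error signal at the output layer (sum-of-squares error, sigmoid units)
    delta = (activations[-1] - t) * activations[-1] * (1. - activations[-1])
    delta[:,-1] = 0.  # zero the bias column of the error signal
    for i in reversed(range(len(weight_matrices))):
        gradient = layer_inputs[i].T.dot(delta)
        # propagate the error signal backwards before updating this layer's weights
        delta = delta.dot(weight_matrices[i].T) * layer_inputs[i] * (1. - layer_inputs[i])
        delta[:,-1] = 0.  # zero the bias column again, as advised above
        weight_matrices[i] -= learning_rate * gradient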
Answered by jorgenkg
A simple implementation
Here is a readable implementation using classes in Python. This implementation trades efficiency for understandability:
import math
import random

BIAS = -1

"""
To view the structure of the Neural Network, type
print(network_name)
"""

class Neuron:
    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.set_weights([random.uniform(0, 1) for x in range(0, n_inputs+1)])  # +1 for bias weight

    def sum(self, inputs):
        # does not include the bias weight
        return sum(val*self.weights[i] for i, val in enumerate(inputs))

    def set_weights(self, weights):
        self.weights = weights

    def __str__(self):
        return 'Weights: %s, Bias: %s' % (str(self.weights[:-1]), str(self.weights[-1]))


class NeuronLayer:
    def __init__(self, n_neurons, n_inputs):
        self.n_neurons = n_neurons
        self.neurons = [Neuron(n_inputs) for _ in range(0, self.n_neurons)]

    def __str__(self):
        return 'Layer:\n\t' + '\n\t'.join([str(neuron) for neuron in self.neurons])


class NeuralNetwork:
    def __init__(self, n_inputs, n_outputs, n_neurons_to_hl, n_hidden_layers):
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.n_hidden_layers = n_hidden_layers
        self.n_neurons_to_hl = n_neurons_to_hl
        # do not touch
        self._create_network()
        self._n_weights = None
        # end

    def _create_network(self):
        if self.n_hidden_layers > 0:
            # create the first layer
            self.layers = [NeuronLayer(self.n_neurons_to_hl, self.n_inputs)]
            # create hidden layers
            self.layers += [NeuronLayer(self.n_neurons_to_hl, self.n_neurons_to_hl) for _ in range(0, self.n_hidden_layers)]
            # hidden-to-output layer
            self.layers += [NeuronLayer(self.n_outputs, self.n_neurons_to_hl)]
        else:
            # if we don't require hidden layers
            self.layers = [NeuronLayer(self.n_outputs, self.n_inputs)]

    def get_weights(self):
        weights = []
        for layer in self.layers:
            for neuron in layer.neurons:
                weights += neuron.weights
        return weights

    @property
    def n_weights(self):
        if not self._n_weights:
            self._n_weights = 0
            for layer in self.layers:
                for neuron in layer.neurons:
                    self._n_weights += neuron.n_inputs+1  # +1 for bias weight
        return self._n_weights

    def set_weights(self, weights):
        assert len(weights) == self.n_weights, "Incorrect amount of weights."
        stop = 0
        for layer in self.layers:
            for neuron in layer.neurons:
                start, stop = stop, stop+(neuron.n_inputs+1)
                neuron.set_weights(weights[start:stop])
        return self

    def update(self, inputs):
        assert len(inputs) == self.n_inputs, "Incorrect amount of inputs."
        for layer in self.layers:
            outputs = []
            for neuron in layer.neurons:
                tot = neuron.sum(inputs) + neuron.weights[-1]*BIAS
                outputs.append(self.sigmoid(tot))
            inputs = outputs
        return outputs

    def sigmoid(self, activation, response=1):
        # the activation function
        try:
            return 1/(1+math.e**(-activation/response))
        except OverflowError:
            return 0.  # a large negative activation saturates the sigmoid at 0

    def __str__(self):
        return '\n'.join([str(i+1)+' '+str(layer) for i, layer in enumerate(self.layers)])
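A hypothetical usage sketch (the sizes are arbitrary; this class only does forward passes, training lives in the repository linked below):

network = NeuralNetwork(n_inputs=2, n_outputs=1, n_neurons_to_hl=4, n_hidden_layers=1)
print(network)                   # shows the weights and bias of every neuron, layer by layer
output = network.update([1, 0])  # forward pass on a single 2-dimensional sample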
A more efficient implementation (with learning)
If you are looking for a more efficient example of a neural network with learning (backpropagation), take a look at my neural network Github repository here.

