
Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/1514573/

Date: 2020-11-03 22:26:57  Source: igfitidea

Neural Network Example Source-code (preferably Python)

Tags: python, neural-network

Asked by Fifth-Edition

I wonder if anyone has some example code of a neural network in Python. If someone knows of some sort of tutorial with a complete walkthrough, that would be awesome, but just example source would be great as well!


Thanks


Accepted answer by jfs

Answered by bayer

Here is a simple example by Armin Rigo: http://codespeak.net/pypy/dist/demo/bpnn.py. If you want to use more sophisticated stuff, there is also http://pybrain.org.


Edit: the link is broken. Anyway, the current way to go for neural nets in Python is probably Theano.

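Since the bpnn.py link is dead, here is a minimal backpropagation network of the same flavor trained on XOR. This is my own plain-Python illustration, not Armin Rigo's original code; the network size (3 hidden units), learning rate, and epoch count are arbitrary choices for the demo.

```python
import math
import random

random.seed(42)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 3  # number of hidden units (arbitrary choice for this sketch)
# Random small weights: input->hidden, hidden biases, hidden->output, output bias
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [random.uniform(-1, 1) for _ in range(H)]
w_ho = [random.uniform(-1, 1) for _ in range(H)]
b_o = random.uniform(-1, 1)

def forward(x):
    h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(H)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(H)) + b_o)
    return h, o

xor = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
rate = 0.5

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in xor)

err_before = total_error()
for _ in range(15000):
    for x, t in xor:
        h, o = forward(x)
        # Output delta: d(error)/d(pre-activation) for squared error + sigmoid
        d_o = (o - t) * o * (1 - o)
        # Hidden deltas, back-propagated through the hidden->output weights
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(H)]
        # Gradient-descent updates
        for j in range(H):
            w_ho[j] -= rate * d_o * h[j]
            b_h[j] -= rate * d_h[j]
            w_ih[j][0] -= rate * d_h[j] * x[0]
            w_ih[j][1] -= rate * d_h[j] * x[1]
        b_o -= rate * d_o
err_after = total_error()

print("total squared error: %.4f -> %.4f" % (err_before, err_after))
for x, t in xor:
    print(x, "->", round(forward(x)[1], 3), "(target", t, ")")
```

XOR is the classic test case here because it is not linearly separable, so a single neuron (like the one in the next answer) cannot learn it; the hidden layer is what makes it possible.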

Answered by maazza

Found this interesting discussion on the Ubuntu forums: http://ubuntuforums.org/showthread.php?t=320257


import time
import random

# Learning rate:
# Lower  = slower
# Higher = less precise
rate=.2

# Create random weights
inWeight=[random.uniform(0, 1), random.uniform(0, 1)]

# Start neuron with no stimuli
inNeuron=[0.0, 0.0]

# Learning table (or gate)
test =[[0.0, 0.0, 0.0]]
test+=[[0.0, 1.0, 1.0]]
test+=[[1.0, 0.0, 1.0]]
test+=[[1.0, 1.0, 1.0]]

# Calculate response from neural input
def outNeuron(midThresh):
    global inNeuron, inWeight
    s=inNeuron[0]*inWeight[0] + inNeuron[1]*inWeight[1]
    if s>midThresh:
        return 1.0
    else:
        return 0.0

# Display results of test
def display(out, real):
    if out == real:
        print(str(out) + " should be " + str(real) + " ***")
    else:
        print(str(out) + " should be " + str(real))

while 1:
    # Loop through each lesson in the learning table
    for i in range(len(test)):
        # Stimulate neurons with test input
        inNeuron[0]=test[i][0]
        inNeuron[1]=test[i][1]
        # Adjust weight of neuron #1
        # based on feedback, then display
        out = outNeuron(2)
        inWeight[0]+=rate*(test[i][2]-out)
        display(out, test[i][2])
        # Adjust weight of neuron #2
        # based on feedback, then display
        out = outNeuron(2)
        inWeight[1]+=rate*(test[i][2]-out)
        display(out, test[i][2])
        # Delay
        time.sleep(1)
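The weight update in the loop above (`weight += rate * (target - output)`) is essentially the perceptron learning rule, applied to one weight at a time against a fixed threshold of 2. The same idea is usually written as one pass that updates all weights plus a learned bias after each forward pass. Here is a sketch of that standard form (my own restatement, not from the forum thread):

```python
import random

random.seed(1)
rate = 0.2
# Two weights plus a bias, all learned together (the forum code instead
# uses a fixed threshold of 2 and no bias)
w = [random.uniform(0, 1), random.uniform(0, 1)]
b = 0.0

# OR-gate truth table: (x0, x1, expected)
table = [(0.0, 0.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 1.0), (1.0, 1.0, 1.0)]

def predict(x0, x1):
    return 1.0 if x0 * w[0] + x1 * w[1] + b > 0 else 0.0

# Perceptron rule: weight += rate * error * input, repeated until no errors
for _ in range(100):
    errors = 0
    for x0, x1, target in table:
        err = target - predict(x0, x1)
        if err:
            errors += 1
            w[0] += rate * err * x0
            w[1] += rate * err * x1
            b += rate * err
    if errors == 0:
        break

print([predict(x0, x1) for x0, x1, _ in table])  # -> [0.0, 1.0, 1.0, 1.0]
```

Because OR is linearly separable, the perceptron convergence theorem guarantees this loop terminates with all four rows classified correctly.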

EDIT: there is also a framework named Chainer: https://pypi.python.org/pypi/chainer/1.0.0


Answered by Akavall

Here is a probabilistic neural network tutorial: http://www.youtube.com/watch?v=uAKu4g7lBxU


And my Python implementation:


import math

data = {'o' : [(0.2, 0.5), (0.5, 0.7)],
        'x' : [(0.8, 0.8), (0.4, 0.5)],
        'i' : [(0.8, 0.5), (0.6, 0.3), (0.3, 0.2)]}

class Prob_Neural_Network(object):
    def __init__(self, data):
        self.data = data

    def predict(self, new_point, sigma):
        res_dict = {}
        np = new_point
        for k, v in self.data.items():
            res_dict[k] = sum(self.gaussian_func(np[0], np[1], p[0], p[1], sigma) for p in v)
        return max(res_dict.items(), key=lambda k: k[1])

    def gaussian_func(self, x, y, x_0, y_0, sigma):
        return  math.e ** (-1 *((x - x_0) ** 2 + (y - y_0) ** 2) / ((2 * (sigma ** 2))))

prob_nn = Prob_Neural_Network(data)
res = prob_nn.predict((0.2, 0.6), 0.1)

Result:


>>> res
('o', 0.6132686067117191)
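The raw score returned above is an unnormalized sum of Gaussian kernels. A small variation (my own extension, not part of the answer) divides each class's sum by the total, which turns the scores into normalized class probabilities:

```python
import math

# Same toy data as the implementation above
data = {'o': [(0.2, 0.5), (0.5, 0.7)],
        'x': [(0.8, 0.8), (0.4, 0.5)],
        'i': [(0.8, 0.5), (0.6, 0.3), (0.3, 0.2)]}

def gaussian(x, y, x0, y0, sigma):
    # Unnormalized Gaussian kernel centered at (x0, y0)
    return math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def predict_proba(point, sigma):
    # Sum kernel responses per class, then normalize to probabilities
    scores = {k: sum(gaussian(point[0], point[1], px, py, sigma) for px, py in v)
              for k, v in data.items()}
    total = sum(scores.values())
    return {k: s / total for k, s in scores.items()}

probs = predict_proba((0.2, 0.6), 0.1)
print(probs)  # 'o' dominates, consistent with the result above
```

The bandwidth `sigma` controls how far each training point's influence reaches: a small sigma makes the prediction depend almost entirely on the nearest point, a large sigma blends all points together.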

Answered by Jukka Matilainen

You might want to take a look at Monte:


Monte (python) is a Python framework for building gradient based learning machines, like neural networks, conditional random fields, logistic regression, etc. Monte contains modules (that hold parameters, a cost-function and a gradient-function) and trainers (that can adapt a module's parameters by minimizing its cost-function on training data).

Modules are usually composed of other modules, which can in turn contain other modules, etc. Gradients of decomposable systems like these can be computed with back-propagation.

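The module-plus-trainer pattern described above can be sketched generically. Note this is only an illustration of the concept (a module holding parameters, a cost function, and a gradient function, with a trainer that minimizes the cost), not Monte's actual API; the class and method names here are invented for the example, using logistic regression as the module.

```python
import math

class LogisticModule:
    """Holds parameters, a cost function, and a gradient function."""
    def __init__(self, n_inputs):
        self.w = [0.0] * n_inputs
        self.b = 0.0

    def _p(self, x):
        # Predicted probability via the logistic function
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def cost(self, xs, ys):
        # Mean negative log-likelihood
        eps = 1e-12
        return -sum(y * math.log(self._p(x) + eps) +
                    (1 - y) * math.log(1 - self._p(x) + eps)
                    for x, y in zip(xs, ys)) / len(xs)

    def grad(self, xs, ys):
        # Gradient of the cost w.r.t. weights and bias
        gw, gb = [0.0] * len(self.w), 0.0
        for x, y in zip(xs, ys):
            err = self._p(x) - y
            for i, xi in enumerate(x):
                gw[i] += err * xi / len(xs)
            gb += err / len(xs)
        return gw, gb

class GradientTrainer:
    """Adapts a module's parameters by minimizing its cost function."""
    def __init__(self, module, rate=0.5):
        self.module, self.rate = module, rate

    def step(self, xs, ys):
        gw, gb = self.module.grad(xs, ys)
        self.module.w = [w - self.rate * g for w, g in zip(self.module.w, gw)]
        self.module.b -= self.rate * gb

# Train on a tiny AND-gate dataset
xs = [[0, 0], [0, 1], [1, 0], [1, 1]]
ys = [0, 0, 0, 1]
m = LogisticModule(2)
t = GradientTrainer(m)
before = m.cost(xs, ys)
for _ in range(2000):
    t.step(xs, ys)
print("cost: %.4f -> %.4f" % (before, m.cost(xs, ys)))
```

The separation matters: because the trainer only ever touches `cost` and `grad`, the same trainer works for any module, and composite modules can expose a `grad` computed by back-propagation through their children, which is exactly the design the Monte description sketches.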