Writing Neural Network Code by Hand
### Implementing a Simple Neural Network
To help illustrate how neural networks work, below is a simple neural network implemented in Python. The code shows how to build a small feedforward network with a single hidden layer and train it with gradient descent to fit a given dataset.
```python
import numpy as np

class NeuralNetwork:
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
        # Initialize the weight matrices with scaled Gaussian noise
        self.weights_input_to_hidden = np.random.normal(
            0.0, self.hidden_nodes ** -0.5,
            (self.hidden_nodes, self.input_nodes))
        self.weights_hidden_to_output = np.random.normal(
            0.0, self.output_nodes ** -0.5,
            (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate
        # Sigmoid activation and its derivative
        # (the derivative is expressed in terms of the activated output y)
        self.activation_function = lambda x: 1 / (1 + np.exp(-x))
        self.activation_derivative = lambda y: y * (1 - y)

    def train(self, inputs_list, targets_list):
        """Run one forward/backward pass and update the weights."""
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T
        # Forward pass: sigmoid hidden layer, linear output layer
        hidden_outputs = self.activation_function(
            np.dot(self.weights_input_to_hidden, inputs))
        final_outputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
        # Backpropagate the error
        output_errors = targets - final_outputs
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
        hidden_grad = hidden_errors * self.activation_derivative(hidden_outputs)
        # Gradient-descent weight updates
        self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
        self.weights_input_to_hidden += self.lr * np.dot(hidden_grad, inputs.T)

def main(features, labels):
    _, n_features = features.shape
    network = NeuralNetwork(n_features, 3, 1, 0.01)
    epochs = 500
    for e in range(epochs):
        for i in range(len(features)):
            network.train([features[i]], [labels[i]])

if __name__ == '__main__':
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=1000, n_features=4, random_state=17)
    features = X[:10]
    labels = [[i] for i in y[:10]]
    main(features, labels)
```
The code above defines a `NeuralNetwork` class representing a small multilayer perceptron (MLP), including the initialization, the forward pass, and the backpropagation update rule[^1]. Note that the hidden layer uses a sigmoid activation while the output layer is linear (no activation), so the weight updates amount to plain gradient descent on a squared-error objective. The script also provides a complete training loop that adjusts the parameters on each iteration to reduce the loss.
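The class exposes no inference method, so there is no direct way to inspect predictions or the loss once training finishes. Below is a minimal sketch of two standalone helpers; the names `predict` and `mse` are hypothetical additions, not part of the original code. `predict` simply replays the forward pass from `train()`, and `mse` computes a mean squared error over the dataset, an assumed metric chosen to match the squared-error flavor of the update rule:

```python
import numpy as np

def predict(network, inputs_list):
    # Hypothetical helper: replays the same forward pass used in train()
    inputs = np.array(inputs_list, ndmin=2).T
    hidden_outputs = network.activation_function(
        np.dot(network.weights_input_to_hidden, inputs))
    # Linear output layer, matching the class above
    return np.dot(network.weights_hidden_to_output, hidden_outputs)

def mse(network, features, labels):
    # Mean squared error over the dataset (an assumed metric, not from the original)
    sq_errors = [(labels[i][0] - predict(network, [features[i]])).item() ** 2
                 for i in range(len(features))]
    return sum(sq_errors) / len(sq_errors)
```

With these helpers, the loop in `main()` could print `mse(network, features, labels)` every few epochs to confirm that the loss is actually decreasing.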