1. The Stacking Ensemble Algorithm
1.1 Principle
Let's walk through how Stacking combines base models in practice (reference: https://www.cnblogs.com/Christina-Notebook/p/10063146.html):
(Figure: Stacking workflow diagram, ./5.png)
- First, split the data into a training set and a test set, then split the training set into 5 folds: train1, ..., train5.
- Choose the base models. Run 5-fold cross-validation for each base model on the training set: in each fold the model is trained on the other 4 folds and predicts the held-out fold, yielding 5 out-of-fold prediction pieces on the training set plus one prediction B1 on the test set. Stacking the 5 out-of-fold pieces vertically gives A1. With three base models this produces A1, A2, A3 and B1, B2, B3.
- Once the three base models are trained, use their training-set predictions A1, A2, A3 as three new features and fit a logistic regression (LR) model on them.
- Use the trained LR model to predict on B1, B2, B3, obtaining the final class labels or probabilities.
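The steps above can be sketched manually with scikit-learn alone. This is a minimal illustration, not the mlxtend implementation used later: out-of-fold label predictions form the meta-features A1–A3, and the five fold-models' test-set predictions are averaged into B1–B3 (averaging predicted labels rather than probabilities is a simplification of the scheme described above).

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

base_models = [KNeighborsClassifier(n_neighbors=1),
               RandomForestClassifier(random_state=42),
               GaussianNB()]
kf = KFold(n_splits=5, shuffle=True, random_state=42)

# A: out-of-fold predictions on the training set (one column per base model)
# B: test-set predictions, averaged over the 5 fold-models
A = np.zeros((len(X_train), len(base_models)))
B = np.zeros((len(X_test), len(base_models)))
for i, model in enumerate(base_models):
    fold_test_preds = []
    for tr_idx, val_idx in kf.split(X_train):
        model.fit(X_train[tr_idx], y_train[tr_idx])
        A[val_idx, i] = model.predict(X_train[val_idx])  # held-out fold -> A_i
        fold_test_preds.append(model.predict(X_test))    # same fold-model -> test set
    B[:, i] = np.mean(fold_test_preds, axis=0)           # merge 5 test predictions into B_i

# Second layer: logistic regression on the stacked features
meta = LogisticRegression().fit(A, y_train)
print("stacking accuracy:", meta.score(B, y_test))
```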
1.2 Implementation
scikit-learn did not originally ship a Stacking estimator (newer versions provide `sklearn.ensemble.StackingClassifier`), so here we use the mlxtend package instead, installable with `pip install mlxtend`.
1.2.1 Simple stacked 3-fold CV classification
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from mlxtend.classifier import StackingCVClassifier

RANDOM_SEED = 42

iris = datasets.load_iris()
X, y = iris.data[:, 1:3], iris.target  # keep two features so decision regions can be plotted

# First layer: base models
clf1 = KNeighborsClassifier(n_neighbors=1)
clf2 = RandomForestClassifier(random_state=RANDOM_SEED)
clf3 = GaussianNB()

# Second layer: logistic-regression meta-model
lr = LogisticRegression()

sclf = StackingCVClassifier(classifiers=[clf1, clf2, clf3],  # first-layer classifiers
                            meta_classifier=lr,              # second-layer classifier
                            random_state=RANDOM_SEED)

print('3-fold cross validation:\n')
for clf, label in zip([clf1, clf2, clf3, sclf],
                      ['KNN', 'Random Forest', 'Naive Bayes', 'StackingClassifier']):
    scores = cross_val_score(clf, X, y, cv=3, scoring='accuracy')
    print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
3-fold cross validation:
Accuracy: 0.91 (+/- 0.01) [KNN]
Accuracy: 0.95 (+/- 0.01) [Random Forest]
Accuracy: 0.91 (+/- 0.02) [Naive Bayes]
Accuracy: 0.93 (+/- 0.02) [StackingClassifier]
# Plot the decision regions of each model
from mlxtend.plotting import plot_decision_regions
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import itertools

gs = gridspec.GridSpec(2, 2)
fig = plt.figure(figsize=(10, 8))
for clf, lab, grd in zip([clf1, clf2, clf3, sclf],
                         ['KNN', 'Random Forest', 'Naive Bayes', 'StackingCVClassifier'],
                         itertools.product([0, 1], repeat=2)):
    clf.fit(X, y)
    ax = plt.subplot(gs[grd[0], grd[1]])
    fig = plot_decision_regions(X=X, y=y, clf=clf)
    plt.title(lab)
plt.show()