Common Loss Functions in Modeling (Keras, Sklearn, R)

| No. | Overview | Expression | Use case | sklearn | keras | R |
|---|---|---|---|---|---|---|
| 1 | Cross entropy (log loss) | $H(p,q)=\sum_x p(x)\log\frac{1}{q(x)} = -\int_x p(x)\log q(x)\,\mathrm{d}x$; per sample (binary): $-\log p(y\mid\hat{y}) = -\bigl(y\log\hat{p} + (1-y)\log(1-\hat{p})\bigr)$ | Classification (binary) | `sklearn.metrics.log_loss` | `keras.losses.BinaryCrossentropy` (class), `keras.losses.binary_crossentropy` (function), `"binary_crossentropy"` (string name) | `LogLoss` |
| 2 | Multi-class, one-hot labels | Same as above | Classification | | `keras.losses.CategoricalCrossentropy` (class), `keras.losses.categorical_crossentropy` (function), `"categorical_crossentropy"` | `MultiLogLoss` |
| 3 | Multi-class, integer labels | Same as above | Classification | | `keras.losses.SparseCategoricalCrossentropy` (class), `keras.losses.sparse_categorical_crossentropy` (function), `"sparse_categorical_crossentropy"` | |
| 4 | Poisson loss | keras: $\text{loss}=\hat{y}-y\log\hat{y}$; sklearn: $\text{loss}=\overline{2\,\bigl(y\log\tfrac{y}{\hat{y}}+\hat{y}-y\bigr)}$ (the bar denotes the mean over samples) | | | | |
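
To make rows 1–3 concrete, here is a minimal sketch (assuming scikit-learn and TensorFlow's bundled Keras are installed; the labels and probabilities are made-up examples) checking that the hand-written formula, `sklearn.metrics.log_loss`, and `keras.losses.BinaryCrossentropy` all return the same binary log loss:

```python
import numpy as np
from sklearn.metrics import log_loss
from tensorflow import keras

y_true = np.array([1, 0, 1, 1])                            # binary labels
p_hat  = np.array([0.9, 0.2, 0.7, 0.6], dtype="float32")   # predicted P(y = 1)

# Hand-written formula from the table, averaged over the batch
manual = -np.mean(y_true * np.log(p_hat) + (1 - y_true) * np.log(1 - p_hat))

# sklearn averages the same per-sample quantity
skl = log_loss(y_true, p_hat)

# The Keras class API reduces to the batch mean by default; the function
# keras.losses.binary_crossentropy and the string "binary_crossentropy"
# passed to model.compile() use the same computation.
krs = keras.losses.BinaryCrossentropy()(y_true, p_hat).numpy()

print(manual, skl, krs)  # all three ≈ 0.299
```

The same check carries over to rows 2 and 3: `CategoricalCrossentropy` expects one-hot targets, while `SparseCategoricalCrossentropy` expects integer class indices.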
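For row 4, the two formulas in the table are not the same quantity: Keras uses $\hat{y}-y\log\hat{y}$, while sklearn's Poisson deviance is $2\,(y\log\frac{y}{\hat{y}}+\hat{y}-y)$, so the reported numbers differ even though both are minimized by the same predictions. A sketch of the comparison, assuming the corresponding APIs are `keras.losses.Poisson` and `sklearn.metrics.mean_poisson_deviance` (the truncated row does not name them):

```python
import numpy as np
from scipy.special import xlogy                 # xlogy(0, 0) = 0, avoids 0 * log(0)
from sklearn.metrics import mean_poisson_deviance
from tensorflow import keras

y_true = np.array([2.0, 0.0, 3.0, 1.0], dtype="float32")   # observed counts
y_pred = np.array([1.5, 0.4, 2.8, 1.2], dtype="float32")   # predicted rates (> 0)

# Keras: mean of y_hat - y * log(y_hat)
keras_poisson = keras.losses.Poisson()(y_true, y_pred).numpy()
manual_keras  = np.mean(y_pred - y_true * np.log(y_pred))

# sklearn: mean of 2 * (y * log(y / y_hat) + y_hat - y)
skl_deviance = mean_poisson_deviance(y_true, y_pred)
manual_skl   = np.mean(2 * (xlogy(y_true, y_true / y_pred) + y_pred - y_true))

print(keras_poisson, manual_keras)  # ≈ 0.454, matches the Keras formula
print(skl_deviance, manual_skl)     # ≈ 0.250, the deviance is a different scale
```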