There are also some mistakes in the calculation method. For recall@1, the result for S1 and S2 should be (0 + 0) / 2 = 0, since S1's highest top-1 score goes to label T4 rather than the ground-truth T1, and S2's highest top-1 score goes to T1 rather than the ground-truth T3.
import tensorflow as tf
import numpy as np

# Ground-truth class ids: sample S1 -> class 0 (T1), sample S2 -> class 2 (T3).
y_true = np.array([[0],
                   [2]]).astype(np.int64)
y_true = tf.identity(y_true)

# Prediction scores over the four classes T1..T4 for each sample.
y_pred = np.array([[1, 2, 1, 4],
                   [3, 2, 1, 0]]).astype(np.float32)
y_pred = tf.identity(y_pred)

k = 1

# recall_at_k returns (recall, update_op); running update_op updates the
# streaming true_positive_at_1 / false_negative_at_1 local variables.
_, update_recall = tf.metrics.recall_at_k(y_true, y_pred, k)

# top_k shows which class index actually has the highest score per sample.
tmp_rank = tf.nn.top_k(y_pred, k)

stream_vars = [i for i in tf.local_variables()]  # the metric's streaming (local) variables

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    print("update_recall: ", sess.run(update_recall))
    print("STREAM_VARS: ", sess.run(stream_vars))
    print("TMP_RANK: ", sess.run(tmp_rank))
# Printed output:
update_recall:  0.0
STREAM_VARS:  [0.0, 2.0]
TMP_RANK:  TopKV2(values=array([[4.], [3.]], dtype=float32),
                  indices=array([[3], [0]], dtype=int32))

The streaming variables hold true_positive_at_1 = 0.0 and false_negative_at_1 = 2.0: neither ground-truth label appears among the top-1 indices [3, 0], so recall@1 = 0 / (0 + 2) = 0, matching the hand calculation above.
From the tf.compat.v1.metrics.recall_at_k documentation (main alias: tf.metrics.recall_at_k):

Computes recall@k of the predictions with respect to sparse labels.

tf.compat.v1.metrics.recall_at_k(labels, predictions, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None)

If class_id is specified, recall is calculated by considering only the entries in the batch for which class_id is in the label, and computing the fraction of them for which class_id is in the top-k predictions. If class_id is not specified, recall is calculated as how often, on average, a class among the labels of a batch entry is in the top-k predictions.

For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns recall_at_<k>. Internally, a top_k operation computes a Tensor indicating the top k predictions. Set operations applied to top_k and labels calculate the true positives and false negatives, weighted by weights. Then update_op increments true_positive_at_<k> and false_negative_at_<k> using these values.

Returns a (recall, update_op) pair; update_op is an operation that increments the true_positives and false_negatives variables appropriately, and whose value matches recall.

Raises ValueError if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections is not a list or tuple.
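To make that description concrete, here is a plain NumPy sketch (an illustration only, not the library implementation; recall_at_k_manual is a hypothetical helper name) that reproduces recall@1 = 0 for the same toy batch:

import numpy as np

def recall_at_k_manual(labels, scores, k):
    # labels: (batch,) integer class ids; scores: (batch, num_classes) prediction scores.
    topk = np.argsort(-scores, axis=1)[:, :k]          # top-k class indices per example
    hits = [label in row for label, row in zip(labels, topk)]
    tp = sum(hits)                                     # labels found in the top-k
    fn = len(hits) - tp                                # labels missed
    return tp / (tp + fn)

labels = np.array([0, 2])
scores = np.array([[1, 2, 1, 4],
                   [3, 2, 1, 0]], dtype=np.float32)
print(recall_at_k_manual(labels, scores, k=1))         # 0.0, matching update_recall above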
In Keras, metrics are passed at the compile stage, as shown below. You can pass several metrics by separating them with commas in the metrics list.
from keras import metrics

model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=[metrics.mae,
                       metrics.categorical_accuracy])
binary_accuracy, for example, computes the mean accuracy rate across all predictions for binary classification problems.
keras.metrics.binary_accuracy(y_true, y_pred, threshold=0.5)
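For instance, a minimal sketch (made-up labels and probabilities, using the tf.keras namespace) showing how the threshold argument changes the result:

import tensorflow as tf

y_true = [1.0, 1.0, 0.0, 0.0]
y_pred = [0.9, 0.6, 0.4, 0.1]

# With the default threshold of 0.5, the thresholded predictions are [1, 1, 0, 0],
# so all four match y_true.
print(tf.keras.metrics.binary_accuracy(y_true, y_pred, threshold=0.5).numpy())  # 1.0

# Raising the threshold to 0.7 turns the second prediction into 0,
# so only three of the four match.
print(tf.keras.metrics.binary_accuracy(y_true, y_pred, threshold=0.7).numpy())  # 0.75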
The accuracy metric computes the accuracy rate across all predictions. y_true represents the true labels while y_pred represents the predicted ones.
keras.metrics.accuracy(y_true, y_pred)
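The snippet above uses the functional form; as a concrete check, the stateful tf.keras.metrics.Accuracy class (a short sketch with made-up labels) counts how often the predicted labels exactly match the true ones:

import tensorflow as tf

m = tf.keras.metrics.Accuracy()
# Three of the four predicted labels equal the true labels.
m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
print(m.result().numpy())  # 0.75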
The categorical_accuracy metric computes the mean accuracy rate across all predictions for multi-class problems with one-hot encoded targets.
keras.metrics.categorical_accuracy(y_true, y_pred)
sparse_categorical_accuracy is similar to categorical_accuracy but is used when the targets are given as integer class indices rather than one-hot vectors. A common example is working with text in deep learning problems such as word2vec-style next-word prediction, where there are thousands of classes: a one-hot y_true would be a huge matrix that is almost all zeros, so keeping sparse integer targets is the better fit.
keras.metrics.sparse_categorical_accuracy(y_true, y_pred)
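A short sketch (made-up scores, tf.keras namespace) contrasting the two: categorical_accuracy takes one-hot targets, while sparse_categorical_accuracy takes the integer class ids directly and so avoids the large, mostly-zero target matrix:

import numpy as np
import tensorflow as tf

probs = np.array([[0.1, 0.8, 0.1],
                  [0.7, 0.2, 0.1]], dtype=np.float32)

# One-hot targets: sample 1 -> class 1 (predicted correctly), sample 2 -> class 2 (missed).
one_hot = np.array([[0, 1, 0],
                    [0, 0, 1]], dtype=np.float32)
print(tf.keras.metrics.categorical_accuracy(one_hot, probs).numpy())        # [1. 0.]

# The same targets as integer class ids, no one-hot encoding needed.
sparse = np.array([1, 2])
print(tf.keras.metrics.sparse_categorical_accuracy(sparse, probs).numpy())  # [1. 0.]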
Sensitivity measures the proportion of actual positives that are correctly identified as such (tp / (tp + fn)). Specificity measures the proportion of actual negatives that are correctly identified as such (tn / (tn + fp)).

This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives, that are used to compute the sensitivity at the given specificity. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity.

Result computation is an idempotent operation that simply calculates the metric value using the state variables. This function is called between epochs/steps, when a metric is evaluated during training.
Main alias: tf.metrics.SensitivityAtSpecificity
tf.keras.metrics.SensitivityAtSpecificity(specificity, num_thresholds=200, name=None, dtype=None)
Standalone usage:
m = tf.keras.metrics.SensitivityAtSpecificity(0.5)
m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])
m.result().numpy()    # 0.5

m.reset_states()
m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],
               sample_weight=[1, 1, 2, 2, 1])
m.result().numpy()    # 0.333333
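To have the metric evaluated between epochs/steps during training, it can also be passed at the compile stage; a minimal sketch (the one-layer model, optimizer, and loss below are placeholder assumptions):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))
])
model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.SensitivityAtSpecificity(specificity=0.5)])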
Methods: reset_states(), result()