For people coming here from TensorFlow Serving or Estimator loading: this error occurs because the values in the feature dictionary need to be batched.
data = {
    "signature_name": "predict",
    "inputs": {k: [v] for k, v in inputs.items()}
}
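A minimal sketch of what that batching looks like end to end. The feature names here are made up for illustration; the "predict" signature name is taken from the snippet above.

```python
# Sketch: wrap each single feature value in a list so the request
# carries a batch of size 1. Feature names are hypothetical.
import json

inputs = {"age": 42.0, "income": 55000.0}  # one unbatched example

data = {
    "signature_name": "predict",
    # each value becomes a length-1 batch
    "inputs": {k: [v] for k, v in inputs.items()},
}

payload = json.dumps(data)
print(payload)
```

The resulting JSON body can then be posted to the model's REST predict endpoint.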
The source code generating this error reads as follows:
OP_REQUIRES(context, axis >= 0 && axis < input_dims,
errors::InvalidArgument("Expected dimension in the range [",
-input_dims, ", ", input_dims,
"), but got ", dim));
For code like
tf.equal(tf.argmax(y, 1), tf.argmax(labels, 1))
which is often used when calculating accuracy, you can change to
tf.equal(tf.argmax(y, -1), tf.argmax(labels, -1))
according to the source code (tensorflow/compiler/tf2xla/kernels/index_ops_cpu.cc:58), which contains the same OP_REQUIRES check shown above.
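To see why axis=1 fails while axis=-1 works, here is a sketch using NumPy, which follows the same axis-range rule as tf.argmax: a rank-1 tensor only accepts axes in [-1, 1), so axis=1 is out of range but axis=-1 always means "the last dimension".

```python
# NumPy stand-in for tf.argmax to illustrate the axis range rule.
import numpy as np

labels_2d = np.array([[0, 1], [1, 0]])   # one-hot labels, rank 2
labels_1d = np.array([1, 0])             # class indices, rank 1

print(np.argmax(labels_2d, axis=1))      # rank 2, so axis 1 is valid
print(np.argmax(labels_1d, axis=-1))     # -1 targets the last axis of any rank

try:
    np.argmax(labels_1d, axis=1)         # axis 1 is outside [-1, 1)
except ValueError as e:                  # np.AxisError subclasses ValueError
    print("error:", e)
```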
I solved this problem. Check the expression of batch_labels:
# if the labels are one-hot encoded, use
y_true_cls = tf.argmax(y_true, axis=1)
# if they are not one-hot encoded, use
y_true_cls = y_true
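A small sketch of the distinction above, using NumPy in place of TensorFlow (variable names are illustrative): one-hot labels need an argmax to recover class ids, while integer labels already are class ids.

```python
# One-hot vs. integer labels produce the same class ids.
import numpy as np

y_true_onehot = np.array([[1, 0, 0], [0, 0, 1]])  # one-hot encoded
y_true_idx = np.array([0, 2])                     # plain class indices

y_true_cls_from_onehot = np.argmax(y_true_onehot, axis=1)
y_true_cls_from_idx = y_true_idx                  # already class ids

print(y_true_cls_from_onehot)
```

Applying argmax along axis 1 to labels that are already rank-1 integer indices is exactly what triggers "Expected dimension in the range [-1, 1), but got 1".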
Fix for "Expected dimension in the range [-1, 1), but got 1":
In classification tasks, an evaluation function like the following is commonly used to compute accuracy:
def evaluation(logits, labels):
    with tf.variable_scope('accuracy') as scope:
        correct_prediction = tf.equal(tf.argmax(labels, 1), tf.argmax(logits, 1))
        correct = tf.cast(correct_prediction, tf.float32)
        accuracy = tf.reduce_mean(correct)
    return accuracy
The fix is as follows:
correct_prediction = tf.equal(tf.argmax(labels, -1), tf.argmax(logits, -1))
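The same accuracy computation can be sketched in NumPy (a stand-in for the TensorFlow ops above) to check that axis=-1 gives the expected result on one-hot labels:

```python
# NumPy sketch of the evaluation function above.
import numpy as np

def evaluation(logits, labels):
    # axis=-1 targets the last dimension regardless of rank
    correct = np.argmax(labels, axis=-1) == np.argmax(logits, axis=-1)
    return correct.astype(np.float32).mean()

logits = np.array([[0.1, 0.9], [0.8, 0.2]])
labels = np.array([[0, 1], [1, 0]])
print(evaluation(logits, labels))  # both predictions correct -> 1.0
```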
From the TensorFlow API documentation for tf.errors.InvalidArgumentError (last updated 2022-06-28 UTC):
tf.errors.InvalidArgumentError(node_def, op, message, *args)
Example:
tf.reshape([1, 2, 3], (2,))
Traceback (most recent call last):
InvalidArgumentError: ...
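The docs example above needs TensorFlow to run; the NumPy analogue below shows the same failure mode (3 elements cannot be reshaped into a shape holding 2), raising ValueError where TensorFlow raises InvalidArgumentError:

```python
# NumPy analogue of the tf.reshape docs example above.
import numpy as np

try:
    np.reshape([1, 2, 3], (2,))   # 3 elements do not fit shape (2,)
except ValueError as e:
    print("cannot reshape:", e)
```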
From r/python: Python TensorFlow tf.concat raising a strange error when concatenating 2 tensors. When attempting to run the implementation for my thesis, I get the following error:
Traceback (most recent call last):
File "D:\School\Last_Year\thesis_actual\thesis\thesis_prototype\implementation.py", line 314, in <module>
new_tensor = tf.concat([tensor,array_tensor],axis=2)
File "D:\School\Last_Year\thesis_actual\thesis\thesis_prototype\venv\lib\site-packages\tensorflow\python\util\dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "D:\School\Last_Year\thesis_actual\thesis\thesis_prototype\venv\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1271, in concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
File "D:\School\Last_Year\thesis_actual\thesis\thesis_prototype\venv\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1378, in concat_v2
_six.raise_from(_core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Expected concatenating dimensions in the range [0, 0), but got 2 [Op:ConcatV2] name: concat
The code that is causing this error is below.
new_validation = copy.deepcopy(validation)
float_validation = [[]]
for data in new_validation:
    new_ident = float(data[0])
    new_converted_values = []
    for values in data[1][0]:
        new_converted_values.append(float(values))
    new_converted_values = np.asarray(new_converted_values)
    float_validation.append((new_ident, new_converted_values))
float_validation.pop(0)
tensor = None
array_tensor = None
dummy_tensor = tf.convert_to_tensor(0.0, dtype=tf.float32)
dummy_tensor_2 = tf.convert_to_tensor(0.0, dtype=tf.float32)
dataset = tf.data.Dataset.from_tensors(tensors=(dummy_tensor, dummy_tensor_2))
for row in float_validation:
    print(row[0])
    tensor = tf.convert_to_tensor(row[0], dtype=tf.float32)
    array_tensor = tf.convert_to_tensor(row[1], dtype=tf.float32)
    new_tensor = tf.concat([tensor, array_tensor], axis=2)
    # dataset = dataset.concatenate(tf.data.Dataset.from_tensors(tensors=(tensor, array_tensor)))
    dataset = dataset.concatenate(tf.data.Dataset.from_tensors(tensors=(new_tensor)))
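The "[0, 0)" in that traceback says the first tensor being concatenated has rank 0: row[0] is a scalar, so no axis is valid. One plausible fix (an assumption, sketched here in NumPy rather than TensorFlow) is to reshape the scalar to rank 1 and concatenate along axis 0:

```python
# Sketch: concat fails on a rank-0 tensor (valid axis range is [0, 0)).
# Reshaping the scalar to rank 1 makes axis 0 valid.
import numpy as np

ident = np.float32(3.0)                          # rank-0 scalar, like row[0]
values = np.array([1.0, 2.0], dtype=np.float32)  # rank-1, like row[1]

combined = np.concatenate([np.reshape(ident, (1,)), values], axis=0)
print(combined)
```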