I'm trying to run a training job for an object detection model on Google Cloud ML Engine. It fails after logging the following from each ps-replica:
Check failed: DeviceNameUtils::ParseFullName(new_base, &parsed_name)
{
insertId: "1am4lt7g2ytgyip"
jsonPayload: {
created: 1532870862.316736
levelname: "CRITICAL"
lineno: 27
message: "Check failed: DeviceNameUtils::ParseFullName(new_base, &parsed_name) "
pathname: "tensorflow/core/common_runtime/renamed_device.cc"
}
labels: {
compute.googleapis.com/resource_id: "8188383009228980271"
compute.googleapis.com/resource_name: "cmle-training-ps-1d73aafb3a-0-7bjnw"
compute.googleapis.com/zone: "us-central1-a"
ml.googleapis.com/job_id: "object_detection_07_29_2018_14_17_36"
ml.googleapis.com/job_id/log_area: "root"
ml.googleapis.com/task_name: "ps-replica-0"
ml.googleapis.com/trial_id: ""
}
logName: "projects/object-detection-210310/logs/ps-replica-0"
receiveTimestamp: "2018-07-29T13:27:48.515404065Z"
resource: {
labels: {
job_id: "object_detection_07_29_2018_14_17_36"
project_id: "object-detection-210310"
task_name: "ps-replica-0"
}
type: "ml_job"
}
severity: "CRITICAL"
timestamp: "2018-07-29T13:27:42.316735982Z"
}
Followed by this from ps-replica-1:

Command '['python', '-m', u'object_detection.model_main', u'--model_dir=gs://aka_b1/train/', u'--pipeline_config_path=gs://aka_b1/data/ssd_mobilenet_v1_coco.config', '--job-dir', u'gs://aka_b1/train/']' returned non-zero exit status -6
{
insertId: "1d4klnfg3ihl2be"
jsonPayload: {
created: 1532870863.971174
levelname: "ERROR"
lineno: 879
message: "Command '['python', '-m', u'object_detection.model_main', u'--model_dir=gs://aka_b1/train/', u'--pipeline_config_path=gs://aka_b1/data/ssd_mobilenet_v1_coco.config', '--job-dir', u'gs://aka_b1/train/']' returned non-zero exit status -6"
pathname: "/runcloudml.py"
}
labels: {
compute.googleapis.com/resource_id: "7345648913232166992"
compute.googleapis.com/resource_name: "cmle-training-ps-1d73aafb3a-1-tjx4f"
compute.googleapis.com/zone: "us-central1-a"
ml.googleapis.com/job_id: "object_detection_07_29_2018_14_17_36"
ml.googleapis.com/job_id/log_area: "root"
ml.googleapis.com/task_name: "ps-replica-1"
ml.googleapis.com/trial_id: ""
}
logName: "projects/object-detection-210310/logs/ps-replica-1"
receiveTimestamp: "2018-07-29T13:27:47.591698250Z"
resource: {
labels: {
job_id: "object_detection_07_29_2018_14_17_36"
project_id: "object-detection-210310"
task_name: "ps-replica-1"
}
type: "ml_job"
}
severity: "ERROR"
timestamp: "2018-07-29T13:27:43.971174001Z"
}
I tried again using the TFRecords, config file, and checkpoint files from a successful training job I ran earlier, but the issue remains. The only difference is the bucket name, which I changed in the config file and in the training job submission command (rough sketch below).
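For reference, the job was submitted with a command along the lines of the sketch below. The module name, bucket paths, job name, and region come from the logs above; the runtime version, package archives, and cloud.yml scale-tier file are placeholders based on the standard Object Detection API instructions and may not match my exact setup.

# Rough reconstruction of the submission command; the runtime version,
# package paths, and cloud.yml values are assumed, not copied from the job.
gcloud ml-engine jobs submit training object_detection_07_29_2018_14_17_36 \
    --runtime-version 1.8 \
    --job-dir=gs://aka_b1/train/ \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
    --module-name object_detection.model_main \
    --region us-central1 \
    --config cloud.yml \
    -- \
    --model_dir=gs://aka_b1/train/ \
    --pipeline_config_path=gs://aka_b1/data/ssd_mobilenet_v1_coco.config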
Please help.