I am running out of memory when I use the following configuration (config.yaml):
trainingInput:
  scaleTier: CUSTOM
  masterType: large_model
  workerType: complex_model_m
  parameterServerType: large_model
  workerCount: 10
  parameterServerCount: 10
I followed Google's criteo_tft guide: https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/criteo_tft/config-large.yaml
That page says they were able to train on 1 TB of data! I was inspired to try it!
My dataset is categorical, so it produces a fairly large matrix after one-hot encoding (a two-dimensional NumPy array of size 520000 x 4000). I can train this dataset on my local machine with 32 GB of RAM, but I cannot do the same in the cloud!
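For context, here is a back-of-the-envelope estimate of what that dense one-hot matrix costs in RAM, assuming NumPy's default float64 dtype (the exact figure depends on the dtype the trainer actually uses):

```python
import numpy as np

# Rough memory footprint of a dense 520000 x 4000 one-hot matrix,
# assuming float64 (NumPy's default floating-point dtype).
rows, cols = 520_000, 4_000
itemsize = np.dtype(np.float64).itemsize  # 8 bytes per element

dense_gib = rows * cols * itemsize / 1024**3
print(f"Dense float64 matrix: {dense_gib:.1f} GiB")  # ~15.5 GiB
```

So a single dense copy of the matrix is already around 15.5 GiB, and any intermediate copy made during preprocessing or batching can double that, which would explain why a 32 GB local machine just barely copes while individual cloud workers with less RAM get OOM-killed.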
Here are my errors:
ERROR 2017-12-18 12:57:37 +1100 worker-replica-1 Using TensorFlow
backend.
ERROR 2017-12-18 12:57:37 +1100 worker-replica-4 Using TensorFlow
backend.
INFO 2017-12-18 12:57:37 +1100 worker-replica-0 Running command:
python -m trainer.task --train-file gs://my_bucket/my_training_file.csv --
job-dir gs://my_bucket/my_bucket_20171218_125645
ERROR 2017-12-18 12:57:38 +1100 worker-replica-2 Using TensorFlow
backend.
ERROR 2017-12-18 12:57:40 +1100 worker-replica-0 Using TensorFlow
backend.
ERROR 2017-12-18 12:57:53 +1100 worker-replica-3 Command
'['python', '-m', u'trainer.task', u'--train-file',
u'gs://my_bucket/my_training_file.csv', '--job-dir',
u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:57:53 +1100 worker-replica-3 Module
completed; cleaning up.
INFO 2017-12-18 12:57:53 +1100 worker-replica-3 Clean up
finished.
ERROR 2017-12-18 12:57:56 +1100 worker-replica-4 Command
'['python', '-m', u'trainer.task', u'--train-file',
u'gs://my_bucket/my_training_file.csv', '--job-dir',
u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:57:56 +1100 worker-replica-4 Module
completed; cleaning up.
INFO 2017-12-18 12:57:56 +1100 worker-replica-4 Clean up
finished.
ERROR 2017-12-18 12:57:58 +1100 worker-replica-2 Command
'['python', '-m', u'trainer.task', u'--train-file',
u'gs://my_bucket/my_training_file.csv', '--job-dir',
u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:57:58 +1100 worker-replica-2 Module
completed; cleaning up.
INFO 2017-12-18 12:57:58 +1100 worker-replica-2 Clean up
finished.
ERROR 2017-12-18 12:57:59 +1100 worker-replica-1 Command
'['python', '-m', u'trainer.task', u'--train-file',
u'gs://my_bucket/my_training_file.csv', '--job-dir',
u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:57:59 +1100 worker-replica-1 Module
completed; cleaning up.
INFO 2017-12-18 12:57:59 +1100 worker-replica-1 Clean up finished.
ERROR 2017-12-18 12:58:01 +1100 worker-replica-0 Command
'['python', '-m', u'trainer.task', u'--train-file',
u'gs://my_bucket/my_training_file.csv', '--job-dir',
u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:58:01 +1100 worker-replica-0 Module
completed; cleaning up.
INFO 2017-12-18 12:58:01 +1100 worker-replica-0 Clean up finished.
ERROR 2017-12-18 12:58:43 +1100 service The replica worker 0 ran
out-of-memory and exited with a non-zero status of 247. The replica worker 1
ran out-of-memory and exited with a non-zero status of 247. The replica
worker 2 ran out-of-memory and exited with a non-zero status of 247. The
replica worker 3 ran out-of-memory and exited with a non-zero status of 247.
The replica worker 4 ran out-of-memory and exited with a non-zero status of
247. To find out more about why your job exited please check the logs:
https://console.cloud.google.com/logs/viewer?project=a_project_id........(link to my cloud log)
INFO 2017-12-18 12:58:44 +1100 ps-replica-0 Signal 15 (SIGTERM)
was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-1 Signal 15 (SIGTERM)
was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-0 Module completed;
cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-0 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-1 Module completed;
cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-1 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-2 Signal 15
(SIGTERM) was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-2 Module completed;
cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-2 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-3 Signal 15 (SIGTERM)
was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-5 Signal 15 (SIGTERM)
was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-3 Module completed;
cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-3 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-5 Module completed;
cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-5 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-4 Signal 15 (SIGTERM)
was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-4 Module completed;
cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-4 Clean up finished.
INFO 2017-12-18 12:59:28 +1100 service Finished tearing down
TensorFlow.
INFO 2017-12-18 13:00:17 +1100 service Job failed.
Please don't worry about the "Using TensorFlow backend." error lines: I get them even when a training job succeeds on a smaller dataset.
Can anyone explain what is causing the out-of-memory condition (exit status 247), and how I should write my config.yaml to avoid such problems and train my data in the cloud?