python /home/admin/mtr/script_for_cron.py -j datou_current3 -m 10 -a ' -a 4892 ' -s datou_current_4892 -M 0 -S 0 -U 95,95,80
import MySQLdb succeeded
Import error (python version) ['/Users/moilerat/Documents/Fotonower/install/caffe/distribute/python', '/home/admin/workarea/git/Velours/python/prod', '/home/admin/workarea/install/caffe_cuda8_python3/python', '/home/admin/workarea/install/darknet', '/home/admin/workarea/git/Velours/python', '/home/admin/workarea/install/caffe_frcnn_python3/py-faster-rcnn/caffe-fast-rcnn/python', '/home/admin/mtr/.credentials', '/home/admin/workarea/install/caffe/python', '/home/admin/workarea/install/caffe_frcnn/py-faster-rcnn/tools', '/home/admin/workarea/git/fotonowerpip', '/home/admin/workarea/install/segment-anything', '/home/admin/workarea/git/pyfvs', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/admin/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages']
process id : 3667359
load datou : 4892
# VR 17-11-17 : to create in DB !
Here we check the datou graph and we reorder the steps !
Tree built and cycles checked; now we need to re-order the steps !
We currently have an error because there is no dependency between the last steps in the tile - detect - glue case.
We could keep the dependency, but it is better to keep an order compatible with the step ids when there are no sons, i.e. a lexical order on (number_son, step_id).
All sons are already in the current list !
DONE and to test : checkNoCycle !
Here we check that the number of inputs/outputs matches between the given ones and the DB !
eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering !
Number of inputs / outputs for each step checked !
Here we check the consistency of output/input types across step connections.
eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput !
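The graph checks described above (checkNoCycle, then a re-ordering that falls back to a lexical order on (number_son, step_id) when steps have no dependency between them) can be sketched as follows. This is a minimal illustration assuming a parent -> son edge list; the function names mirror the log but are not the real datou API.

```python
# Hypothetical sketch of the cycle check and step re-ordering the log
# describes. `edges` is an assumed (parent_step_id, son_step_id) list.
from collections import defaultdict

def check_no_cycle(edges):
    """Return True when the step graph has no cycle (iterative DFS)."""
    graph = defaultdict(list)
    for parent, son in edges:
        graph[parent].append(son)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    for start in list(graph):
        if color[start] != WHITE:
            continue
        stack = [(start, iter(graph[start]))]
        color[start] = GRAY
        while stack:
            node, it = stack[-1]
            for son in it:
                if color[son] == GRAY:
                    return False  # back edge: a cycle exists
                if color[son] == WHITE:
                    color[son] = GRAY
                    stack.append((son, iter(graph[son])))
                    break
            else:
                color[node] = BLACK
                stack.pop()
    return True

def reorder_steps(step_ids, edges):
    """Order steps by (number_of_sons, step_id), as the log suggests,
    so steps without sons keep an order compatible with their ids."""
    n_sons = defaultdict(int)
    for parent, _son in edges:
        n_sons[parent] += 1
    return sorted(step_ids, key=lambda s: (n_sons[s], s))
```

Sorting by the (number_son, step_id) pair gives a deterministic order even when the dependency tree alone does not constrain two steps.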
DataTypes for each output/input checked !
Unexpected type for variable list_input_json
ERROR or WARNING : can't parse json string Expecting value: line 1 column 1 (char 0)
Tried to parse : "chemin de la photo" (photo path) was removed; should we ?
[ (photo_id, hashtag_id_0, score_0), (photo_id, hashtag_id_1, score_1), ...] was removed; should we ?
[ (photo_id, hashtag_id_0, score_0), (photo_id, hashtag_id_1, score_1), ...] was removed; should we ?
(photo_id, hashtag_id, score_max) was removed; should we ?
"chemin de la photo" (photo path) was removed; should we ?
load thcls
load pdts
Running datou job : batch_current
TODO : datou_current to load; maybe move this outside batchDatouExec
no input labels
no input values
updating current state to 1
we have a portfolio with more photos than the limit : 7456 > 1000
please execute split_portfolio.py -i 20244820 -l 1000
size over the limit : we load only up to the limit; the remaining photos are not treated
list_input_json: {}
Current got : datou_id : 4892, datou_cur_ids : ['2562432'] with mtr_portfolio_ids : ['20244820'] and first list_photo_ids : []
new path : /proc/3667359/
Inside batchDatouExec : verbose : 0
# VR 17-11-17 : to create in DB !
Here we check the datou graph and we reorder the steps !
Tree built and cycles checked; now we need to re-order the steps !
We currently have an error because there is no dependency between the last steps in the tile - detect - glue case.
We could keep the dependency, but it is better to keep an order compatible with the step ids when there are no sons, i.e. a lexical order on (number_son, step_id).
All sons are already in the current list !
DONE and to test : checkNoCycle !
Here we check that the number of inputs/outputs matches between the given ones and the DB !
eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering !
Number of inputs / outputs for each step checked !
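The "can't parse json string Expecting value: line 1 column 1 (char 0)" warning above is what `json.loads` raises on a non-JSON string such as a plain description. A minimal sketch of the guard the log implies (the helper name and default are illustrative, not the real datou code):

```python
import json

def parse_input_json(raw, default=None):
    """Tolerate empty or non-JSON input for list_input_json instead of
    crashing: log a warning and fall back to an empty dict (assumed default)."""
    fallback = {} if default is None else default
    if not raw:
        return fallback
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        # Mirrors the log's behaviour: warn, show what was parsed, continue.
        print(f"WARNING: can't parse json string {exc}; tried to parse: {raw!r}")
        return fallback
```

With this guard, a payload like "chemin de la photo" produces only the warning seen in the log and an empty `list_input_json`.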
Here we check the consistency of output/input types across step connections.
eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput !
DataTypes for each output/input checked !
List Step Type Loaded in datou : tfhub_classification2, argmax
over limit max, limiting to limit_max 300
list_input_json : {}
origin : we have 1
WARNING: data may be incomplete, need to offset and complete !
0 photos missing in the step downloads : photos missing : []
try to delete the photos missing in DB
length of list_filenames : 300 ; length of list_pids : 300 ; length of list_args : 300
time to download the photos : 56.64026188850403
About to test input to load
we should then remove the video here; this would fix the datou_current bug !
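The "limiting to limit_max 300 ... need to offset and complete" warning above describes capped queries that must be paged. A sketch of that offset loop, assuming a `fetch_page(offset, limit)` callable (illustrative, not the real datou query API):

```python
def fetch_all(fetch_page, limit_max=300):
    """Page through a source capped at limit_max rows per query:
    keep offsetting until a short page signals the end of the data."""
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, limit_max)
        rows.extend(page)
        if len(page) < limit_max:
            return rows
        offset += limit_max
```

This avoids the "data may be incomplete" situation: instead of stopping at the first 300 rows, the caller accumulates every page.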
Calling datou_exec
Inside datou_exec : verbose : 0
number of steps : 2
step1:tfhub_classification2 Fri Feb 7 12:50:25 2025
VR 17-11-17 : for now, only for a linear exec dependencies tree; some output goes to fill the input of the next step
VR 22-3-18 : we now test the dependencies tree, but keep two separate code paths for datou_prepare_output_input until the code is correctly tested, clean, and works in both cases
VR 22-3-18 : but we use the first code path for the first step id = -1, built in the code of datou_exec
VR 22-3-18 : we should handle here the case when we are at the first step, instead of building this step before datou_exec
Beginning of datou_step TFHub with tf2 !
number of thcls : 2
we are using the classification for multi_thcl [3513, 3890]
begin to check gpu status
inside check gpu memory
2025-02-07 12:50:29.095405: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2025-02-07 12:50:29.128531: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 12:50:29.128841: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 12:50:29.132471: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 12:50:29.143890: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 12:50:29.145228: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 12:50:29.165147: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 12:50:29.168359: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 12:50:29.215054: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 12:50:29.216661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 12:50:29.218422: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2025-02-07 12:50:29.247139: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3493065000 Hz
2025-02-07 12:50:29.248735: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ff0c8000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2025-02-07 12:50:29.248755: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2025-02-07 12:50:29.485450: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x293ea9d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2025-02-07 12:50:29.485518: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
2025-02-07 12:50:29.486476: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5
coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 12:50:29.486566: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 12:50:29.486585: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 12:50:29.486602: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 12:50:29.486619: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 12:50:29.486635: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 12:50:29.486650: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 12:50:29.486667: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 12:50:29.487502: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 12:50:29.487552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 12:50:29.488102: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2025-02-07 12:50:29.488117: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2025-02-07 12:50:29.488125: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2025-02-07 12:50:29.488961: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5)
l 3637
free memory gpu now : 3358
max_wait_temp : 1
max_wait : 5
1 Physical GPUs, 1 Logical GPUs
tagging for thcl : 3513
To do loadFromThcl(), then load ParamDescType : thcl3513
thcls : [{'id': 3513, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2_tf', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 4557, 'photo_desc_type': 5767, 'type_classification': 'tf_classification2', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}]
thcl {'id': 3513, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2_tf', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 4557, 'photo_desc_type': 5767, 'type_classification': 'tf_classification2', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}
Update svm_hashtag_type_desc : 5767
FOUND : 1
Here is data_from_sql_as_vec to set the ParamDescriptorType : (5767, 'Rungis_amount_dechets_fall_2018_v2_tf', 2048, 2048, 'Rungis_amount_dechets_fall_2018_v2_tf', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 3, datetime.datetime(2023, 3, 16, 15, 52, 10), datetime.datetime(2023, 3, 16, 15, 52, 10))
model_name : Rungis_amount_dechets_fall_2018_v2_tf
model_param file didn't exist
model_name : Rungis_amount_dechets_fall_2018_v2_tf
model_type : tf_classification2
list of files needed : ['Confusion_Matrix.png', 'Precision_Recall_05102018_Papier_non_papier_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg', 'Result_Summary.txt', 'checkpoint', 'model_checkpoint.ckpt.data-00000-of-00001', 'model_checkpoint.ckpt.data-00000-of-00002', 'model_checkpoint.ckpt.data-00001-of-00002', 'model_checkpoint.ckpt.index', 'model_weights.h5']
files existing in s3 : ['Confusion_Matrix.png', 'Precision_Recall_05102018_Papier_non_papier_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg', 'Result_Summary.txt', 'checkpoint', 'model_checkpoint.ckpt.data-00000-of-00001', 'model_checkpoint.ckpt.data-00000-of-00002', 'model_checkpoint.ckpt.data-00001-of-00002', 'model_checkpoint.ckpt.index', 'model_weights.h5']
files missing in s3 : []
2025-02-07 12:50:36.332903: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 3.02G (3246391296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
local folder : /data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Confusion_Matrix.png
size_local : 67810 ; size in s3 : 67810
create time local : 2023-10-30 16:21:37 ; create time in s3 : 2023-10-30 14:09:29
Confusion_Matrix.png already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_dense.jpg
size_local : 73949 ; size in s3 : 73949
create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:30
Precision_Recall_05102018_Papier_non_papier_dense.jpg already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg
size_local : 85572 ; size in s3 : 85572
create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:39
Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg
size_local : 72361 ; size in s3 : 72361
create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:37
Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg
size_local : 83567 ; size in s3 : 83567
create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:48
Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg
size_local : 71611 ; size in s3 : 71611
create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:29
Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Result_Summary.txt
size_local : 1058 ; size in s3 : 1058
create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:30
Result_Summary.txt already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/checkpoint
size_local : 99 ; size in s3 : 99
create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:36
checkpoint already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00000-of-00001
size_local : 188538519 ; size in s3 : 188538519
create time local : 2023-10-30 16:21:41 ; create time in s3 : 2023-10-30 14:09:40
model_checkpoint.ckpt.data-00000-of-00001 already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00000-of-00002
size_local : 216572 ; size in s3 : 216572
create time local : 2023-10-30 16:21:41 ; create time in s3 : 2023-03-16 14:52:09
model_checkpoint.ckpt.data-00000-of-00002 already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00001-of-00002
size_local : 32279708 ; size in s3 : 32279708
create time local : 2023-10-30 16:21:42 ; create time in s3 : 2023-03-16 14:52:07
model_checkpoint.ckpt.data-00001-of-00002 already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.index
size_local : 28001 ; size in s3 : 28001
create time local : 2023-10-30 16:21:42 ; create time in s3 : 2023-10-30 14:09:30
model_checkpoint.ckpt.index already exists and doesn't need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_weights.h5
size_local : 94501976 ; size in s3 : 94501976
create time local : 2023-10-30 16:21:44 ; create time in s3 : 2023-10-30 14:09:37
model_weights.h5 already exists and doesn't need updating
ERROR in datou_step_exec, will save and exit !
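The per-file sync messages above follow a simple cache rule: a model file is re-downloaded only when it is absent locally or its size differs from the S3 object. A minimal sketch of that decision (the real code also prints creation times; here sizes alone decide, and the helper name is illustrative):

```python
import os

def needs_download(local_path, s3_size):
    """Return True when the local cached copy is missing or its size
    differs from the S3 object's size; otherwise the file is kept."""
    if not os.path.exists(local_path):
        return True
    return os.path.getsize(local_path) != s3_size
```

Comparing sizes is cheap but cannot detect same-size corruption; a checksum (e.g. the S3 ETag) would be stricter, at the cost of reading the file.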
assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse
  File "/home/admin/workarea/git/Velours/python/mtr/datou/datou_lib.py", line 2329, in datou_exec
    output = datou_step_exec(sNext, args, cache, context, map_info, verbose, mtr_user_id)
  File "/home/admin/workarea/git/Velours/python/mtr/datou/datou_lib.py", line 2523, in datou_step_exec
    return lib_process.datou_step_tfhub2(param, json_param, args, cache, context, map_info, verbose)
  File "/home/admin/workarea/git/Velours/python/mtr/datou/lib_step_exec/lib_step_process.py", line 3138, in datou_step_tfhub2
    this_model = model_evaluator(model_name, model_type=model_type, fc_size=fc_size,use_multi_inputs=use_multi_inputs)
  File "/home/admin/workarea/git/Velours/python/mtr/tfhub2/evaluate.py", line 156, in __init__
    self.model, _, _ = create_tfhub_model(module_handle=self.tfhub_module,
  File "/home/admin/workarea/git/Velours/python/mtr/tfhub2/evaluate.py", line 77, in create_tfhub_model
    hub.KerasLayer(module_handle, trainable=do_fine_tuning, name="module"),
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/keras_layer.py", line 152, in __init__
    self._func = load_module(handle, tags, self._load_options)
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/keras_layer.py", line 421, in load_module
    return module_v2.load(handle, tags=tags, options=set_load_options)
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/module_v2.py", line 106, in load
    obj = tf.compat.v1.saved_model.load_v2(module_path, tags=tags)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 578, in load
    return load_internal(export_dir, tags)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 602, in load_internal
    loader = loader_cls(object_graph_proto,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 123, in __init__
    self._load_all()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 134, in _load_all
    self._load_nodes()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 264, in _load_nodes
    node, setter = self._recreate(proto, node_id)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 370, in _recreate
    return factory[kind]()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 363, in
    "variable": lambda: self._recreate_variable(proto.variable),
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 426, in _recreate_variable
    return variables.Variable(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 261, in __call__
    return cls._variable_v2_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 243, in _variable_v2_call
    return previous_getter(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 66, in getter
    return captured_getter(captured_previous, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 418, in uninitialized_variable_creator
    return resource_variable_ops.UninitializedVariable(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 263, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1795, in __init__
    handle = _variable_handle_from_shape_and_dtype(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 174, in _variable_handle_from_shape_and_dtype
    gen_logging_ops._assert(  # pylint: disable=protected-access
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_logging_ops.py", line 55, in _assert
_ops.raise_from_not_ok_status(e, name) File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status six.raise_from(core._status_to_exception(e.code, message), None) File "", line 3, in raise_from [1334982433, 1334982429, 1334982426, 1334982424, 1334982423, 1334982421, 1334982392, 1334982391, 1334982390, 1334982387, 1334982384, 1334982376, 1334982100, 1334982099, 1334982095, 1334982087, 1334982065, 1334982062, 1334982022, 1334981955, 1334981918, 1334981915, 1334981908, 1334981906, 1334981276, 1334981205, 1334981142, 1334981077, 1334981062, 1334981057, 1334981030, 1334981027, 1334981024, 1334981021, 1334981018, 1334981014, 1334980964, 1334980961, 1334980957, 1334980953, 1334980939, 1334980930, 1334980878, 1334980835, 1334980826, 1334980825, 1334980824, 1334980822, 1334980766, 1334980762, 1334980759, 1334980756, 1334980753, 1334980750, 1334980522, 1334980454, 1334980366, 1334980249, 1334980227, 1334980221, 1334980170, 1334980167, 1334980165, 1334980161, 1334980157, 1334980152, 1334980097, 1334980095, 1334980083, 1334980081, 1334980077, 1334980075, 1334980066, 1334980042, 1334980030, 1334980028, 1334980024, 1334980023, 1334979907, 1334979902, 1334979897, 1334979892, 1334979888, 1334979885, 1334979849, 1334979845, 1334979840, 1334979835, 1334979830, 1334979826, 1334979576, 1334979572, 1334979569, 1334979565, 1334979562, 1334979557, 1334979356, 1334979352, 1334979348, 1334979344, 1334979342, 1334979340, 1334979307, 1334979300, 1334979296, 1334979291, 1334979287, 1334979285, 1334979194, 1334979187, 1334979170, 1334979168, 1334979151, 1334979149, 1334979100, 1334979097, 1334979096, 1334979094, 1334979061, 1334979058, 1334978992, 1334978989, 1334978974, 1334978934, 1334978929, 1334978784, 1334978624, 1334978622, 1334978621, 1334978617, 1334978615, 1334978607, 1334978365, 1334978361, 1334978219, 1334978176, 1334978138, 1334978105, 1334977633, 1334977448, 1334977363, 1334977317, 1334977315, 1334977312, 1334976798, 
1334976797, 1334976795, 1334976792, 1334976770, 1334976765, 1334976715, 1334976712, 1334976708, 1334976707, 1334976697, 1334976696, 1334976684, 1334976683, 1334976680, 1334976678, 1334976677, 1334976675, 1334976650, 1334976647, 1334976646, 1334976630, 1334976558, 1334976503, 1334976478, 1334976473, 1334976471, 1334976468, 1334976466, 1334976460, 1334976393, 1334976390, 1334976370, 1334976328, 1334976324, 1334976273, 1334976153, 1334976147, 1334976145, 1334976144, 1334976143, 1334976141, 1334976120, 1334976118, 1334976117, 1334976116, 1334976114, 1334976111, 1334975999, 1334975976, 1334975974, 1334975961, 1334975959, 1334975941, 1334975517, 1334975378, 1334975375, 1334975374, 1334975371, 1334975348, 1334975320, 1334975318, 1334975316, 1334975314, 1334975312, 1334975311, 1334975267, 1334975262, 1334975260, 1334975258, 1334975256, 1334975254, 1334975242, 1334975240, 1334975238, 1334975235, 1334975232, 1334975229, 1334975209, 1334975206, 1334975056, 1334975045, 1334975043, 1334975041, 1334974977, 1334974973, 1334974972, 1334974963, 1334974906, 1334974902, 1334974716, 1334974714, 1334974712, 1334974710, 1334974707, 1334974705, 1334974657, 1334974645, 1334974549, 1334974469, 1334974467, 1334974465, 1334974226, 1334974214, 1334974210, 1334974208, 1334974163, 1334974155, 1334973669, 1334973645, 1334973602, 1334973547, 1334973497, 1334973489, 1334973175, 1334973169, 1334973160, 1334973158, 1334973156, 1334973151, 1334973095, 1334973082, 1334973079, 1334973077, 1334973075, 1334973074, 1334973072, 1334973070, 1334973065, 1334973060, 1334973056, 1334973054, 1334972989, 1334972905, 1334972903, 1334972900, 1334972896, 1334972891, 1334972714, 1334972713, 1334972706, 1334972612, 1334972515, 1334972418, 1334972341, 1334972339, 1334972337, 1334972334, 1334972332, 1334972331, 1334972304, 1334972298, 1334972292, 1334972289, 1334972286, 1334972284] begin to insert list_values into mtr_datou_result : length of list_values in save_final : 300 time used for this insertion : 
0.10424041748046875
save_final
ERROR in last step tfhub_classification2, assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse
time spent for datou_step_exec : 11.475635528564453
time spent to save output : 0.11935925483703613
total time spent for step 0 : 11.59499478340149
need to delete datou_research and reload, so keep current state 1
caffe_path_current :
About to save ! 2
After save, about to update current !
11.44user 4.31system 1:13.46elapsed 21%CPU (0avgtext+0avgdata 1212000maxresident)k 291392inputs+565536outputs (1606major+407703minor)pagefaults 0swaps
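The run above failed after a CUDA_ERROR_OUT_OF_MEMORY during model load (the EagerVariableNameReuse assertion fires while recreating SavedModel variables), and the log's "max_wait : 5" hints at a retry loop that polls free GPU memory before loading. A sketch of such a wait loop, assuming a `probe_free_mb` callable (e.g. wrapping nvidia-smi); the names and the retry policy are illustrative, not the real datou check:

```python
import time

def wait_for_free_gpu(probe_free_mb, needed_mb, max_wait=5, sleep_s=0):
    """Poll free GPU memory up to max_wait times; return the attempt
    number that succeeded, or None when memory never became available."""
    for attempt in range(1, max_wait + 1):
        if probe_free_mb() >= needed_mb:
            return attempt
        time.sleep(sleep_s)
    return None
```

Separately, enabling TensorFlow's per-GPU memory growth (rather than letting the process grab a fixed 3 GB block up front) is a common way to reduce this kind of allocation failure when several jobs share one GPU.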