python /home/admin/mtr/script_for_cron.py -j datou_current3 -m 10 -a ' -a 4892 ' -s datou_current_4892 -M 0 -S 0 -U 95,95,80
import MySQLdb succeeded
Import error (python version) ['/Users/moilerat/Documents/Fotonower/install/caffe/distribute/python', '/home/admin/workarea/git/Velours/python/prod', '/home/admin/workarea/install/caffe_cuda8_python3/python', '/home/admin/workarea/install/darknet', '/home/admin/workarea/git/Velours/python', '/home/admin/workarea/install/caffe_frcnn_python3/py-faster-rcnn/caffe-fast-rcnn/python', '/home/admin/mtr/.credentials', '/home/admin/workarea/install/caffe/python', '/home/admin/workarea/install/caffe_frcnn/py-faster-rcnn/tools', '/home/admin/workarea/git/fotonowerpip', '/home/admin/workarea/install/segment-anything', '/home/admin/workarea/git/pyfvs', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/admin/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages']
process id : 3706198
load datou : 4892
# VR 17-11-17 : to create in DB !
Here we check the datou graph and we reorder the steps !
Tree built and cycles checked; now we need to re-order the steps !
We currently have an error because there is no dependence with the last step for the tile - detect - glue case.
It is better to keep an order compatible with the step ids when a step has no sons, hence a lexical order : (number_son, step_id)
All sons are already in the current list !
DONE and to test : checkNoCycle !
Here we check the consistency of the number of inputs/outputs between the given ones and the DB !
eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering !
Number of inputs / outputs for each step checked !
Here we check the consistency of output/input types across step connections
eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput !
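The reordering described above (cycle check, then a lexical tie-break on (number_son, step_id)) can be sketched as a topological sort with that ordering. This is a minimal sketch with hypothetical names, not the actual datou_lib implementation:

```python
from collections import defaultdict

def reorder_steps(steps, edges):
    """Topologically order step ids, breaking ties lexically by
    (number_of_sons, step_id), as the log comments describe.

    steps : iterable of step ids
    edges : list of (parent, child) dependency pairs
    Raises ValueError on a cycle (cf. checkNoCycle).
    """
    sons = defaultdict(set)          # parent id -> set of child ids
    indeg = {s: 0 for s in steps}    # unmet-dependency count per step
    for p, c in edges:
        if c not in sons[p]:
            sons[p].add(c)
            indeg[c] += 1
    order = []
    ready = [s for s in steps if indeg[s] == 0]
    while ready:
        # lexical tie-break: fewest sons first, then smallest step id
        ready.sort(key=lambda s: (len(sons[s]), s))
        cur = ready.pop(0)
        order.append(cur)
        for child in sons[cur]:
            indeg[child] -= 1
            if indeg[child] == 0:
                ready.append(child)
    if len(order) != len(indeg):
        raise ValueError("cycle detected in the datou step graph")
    return order
```

With no dependencies at all, the order degenerates to plain sorting by (0, step_id), which matches the "order compatible with the id of steps if we do not have sons" remark.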
DataTypes for each output/input checked !
Unexpected type for variable list_input_json
ERROR or WARNING : can't parse json string: Expecting value: line 1 column 1 (char 0)
Tried to parse :
photo path was removed; should we ?
[ (photo_id, hashtag_id_0, score_0), (photo_id, hashtag_id_1, score_1), ...] was removed; should we ?
[ (photo_id, hashtag_id_0, score_0), (photo_id, hashtag_id_1, score_1), ...] was removed; should we ?
(photo_id, hashtag_id, score_max) was removed; should we ?
photo path was removed; should we ?
load thcls
load pdts
Running datou job : batch_current
TODO : datou_current to load; maybe move outside batchDatouExec
no input labels
no input values
updating current state to 1
we have a portfolio with more photos than the limit : 2996 > 1000; please execute split_portfolio.py -i 20313919 -l 1000
size over the limit : we load only the limit; remaining photos not treated
list_input_json: {}
Current got : datou_id : 4892, datou_cur_ids : ['2560853'] with mtr_portfolio_ids : ['20313919'] and first list_photo_ids : []
new path : /proc/3706198/
Inside batchDatouExec : verbose : 0
# VR 17-11-17 : to create in DB !
Here we check the datou graph and we reorder the steps !
Tree built and cycles checked; now we need to re-order the steps !
We currently have an error because there is no dependence with the last step for the tile - detect - glue case.
It is better to keep an order compatible with the step ids when a step has no sons, hence a lexical order : (number_son, step_id)
All sons are already in the current list !
DONE and to test : checkNoCycle !
Here we check the consistency of the number of inputs/outputs between the given ones and the DB !
eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering !
Number of inputs / outputs for each step checked !
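The split the log asks for (split_portfolio.py -i 20313919 -l 1000) amounts to chunking a portfolio's photo ids into groups of at most the limit. A minimal sketch, with a hypothetical helper name (the real script presumably also writes the chunks back to the DB):

```python
def split_portfolio(photo_ids, limit=1000):
    """Split an oversized portfolio into chunks of at most `limit` photos,
    mirroring what the log's split_portfolio.py -l 1000 call is asked to do.
    Hypothetical sketch; the actual script's behavior may differ."""
    if limit <= 0:
        raise ValueError("limit must be positive")
    # slice the id list into consecutive windows of size `limit`
    return [photo_ids[i:i + limit] for i in range(0, len(photo_ids), limit)]
```

For the 2996-photo portfolio in the log, this would yield three sub-portfolios of 1000, 1000, and 996 photos instead of silently truncating to the first 1000.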
Here we check the consistency of output/input types across step connections
eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput !
DataTypes for each output/input checked !
List Step Type loaded in datou : tfhub_classification2, argmax
over limit max, limiting to limit_max 300
list_input_json : {}
origin : we have 1, WARNING: data may be incomplete, need to offset and complete !
[repeated 'BF' progress markers elided]
0 photos missing in the step downloads : photos missing : []
try to delete the photos missing in the DB
length of list_filenames : 205 ; length of list_pids : 205 ; length of list_args : 205
time to download the photos : 34.23596549034119
About to test input to load
we should then remove the video here, and this would fix the bug of datou_current !
Calling datou_exec
Inside datou_exec : verbose : 0
number of steps : 2
step 1 : tfhub_classification2 Fri Feb 7 13:01:04 2025
VR 17-11-17 : for now, only for a linear exec dependencies tree; some output goes to fill the input of the next step
VR 22-3-18 : we now test the dependencies tree, but keep two separate code paths for datou_prepare_output_input until the code is correctly tested, clean, and works in both cases
VR 22-3-18 : but we use the first code path for the first step, id = -1, built in the code of datou_exec
VR 22-3-18 : we should manage here the case when we are at the first step, instead of building this step before datou_exec
Beginning of datou_step TFHub with tf2 !
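The "need to offset and complete" warning above refers to offset-based pagination: re-query with a growing offset until a short page signals the end of the data. A hedged sketch, where fetch_page is a hypothetical data-source callable and not a function from the actual codebase:

```python
def fetch_all(fetch_page, page_size=200):
    """Offset-and-complete loop hinted at by the log's warning.

    fetch_page(offset, limit) -> list of rows (hypothetical callable,
    e.g. a LIMIT/OFFSET SQL query). Keeps fetching until a page shorter
    than page_size arrives, which means the data set is exhausted.
    """
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        rows.extend(page)
        if len(page) < page_size:   # short page -> data is complete
            return rows
        offset += page_size
```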
number of thcls : 2
we are using the classification for multi_thcl [3513, 3890]
begin to check gpu status
inside check gpu memory
inside check gpu memory
inside check gpu memory
haven't enough GPU memory, need 3096 (l 3632) ; free GPU memory now : 2338 ; wait 20 seconds
inside check gpu memory
haven't enough GPU memory, need 3096 (l 3632) ; free GPU memory now : 1944 ; wait 20 seconds
inside check gpu memory
haven't enough GPU memory, need 3096 (l 3632) ; free GPU memory now : 1725 ; wait 20 seconds
inside check gpu memory
2025-02-07 13:02:12.297685: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2025-02-07 13:02:12.327467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 13:02:12.327811: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 13:02:12.329692: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 13:02:12.332190: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 13:02:12.332454: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 13:02:12.336105: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 13:02:12.337701: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 13:02:12.344383: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 13:02:12.345588: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 13:02:12.345982: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2025-02-07 13:02:12.353381: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3493065000 Hz
2025-02-07 13:02:12.354675: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f646c000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2025-02-07 13:02:12.354716: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2025-02-07 13:02:12.580111: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1c627870 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2025-02-07 13:02:12.580153: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
2025-02-07 13:02:12.580982: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 13:02:12.581057: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 13:02:12.581086: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 13:02:12.581113: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 13:02:12.581139: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 13:02:12.581166: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 13:02:12.581191: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 13:02:12.581218: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 13:02:12.582537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 13:02:12.582605: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 13:02:12.583464: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2025-02-07 13:02:12.583491: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2025-02-07 13:02:12.583505: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2025-02-07 13:02:12.584896: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5)
l 3637 : free GPU memory now : 2916
max_wait_temp : 6 ; max_wait : 5
1 Physical GPUs, 1 Logical GPUs
tagging for thcl : 3513
To do loadFromThcl(), then load ParamDescType : thcl3513
thcls : [{'id': 3513, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2_tf', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 4557, 'photo_desc_type': 5767, 'type_classification': 'tf_classification2', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}]
thcl : {'id': 3513, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2_tf', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 4557, 'photo_desc_type': 5767, 'type_classification': 'tf_classification2', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}
Update svm_hashtag_type_desc : 5767
FOUND : 1
Here is data_from_sql_as_vec to set the ParamDescriptorType : (5767, 'Rungis_amount_dechets_fall_2018_v2_tf', 2048, 2048, 'Rungis_amount_dechets_fall_2018_v2_tf', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 3, datetime.datetime(2023, 3, 16, 15, 52, 10), datetime.datetime(2023, 3, 16, 15, 52, 10))
model_name : Rungis_amount_dechets_fall_2018_v2_tf
model_param file did not exist
model_name : Rungis_amount_dechets_fall_2018_v2_tf
model_type : tf_classification2
list of files needed : ['Confusion_Matrix.png', 'Precision_Recall_05102018_Papier_non_papier_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg', 'Result_Summary.txt', 'checkpoint', 'model_checkpoint.ckpt.data-00000-of-00001', 'model_checkpoint.ckpt.data-00000-of-00002', 'model_checkpoint.ckpt.data-00001-of-00002', 'model_checkpoint.ckpt.index', 'model_weights.h5']
files existing in s3 : ['Confusion_Matrix.png', 'Precision_Recall_05102018_Papier_non_papier_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg', 'Result_Summary.txt', 'checkpoint', 'model_checkpoint.ckpt.data-00000-of-00001', 'model_checkpoint.ckpt.data-00000-of-00002', 'model_checkpoint.ckpt.data-00001-of-00002', 'model_checkpoint.ckpt.index', 'model_weights.h5']
files missing in s3 : []
2025-02-07 13:02:19.945341: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 3.02G (3246391296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2025-02-07 13:02:19.946259: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.72G (2921752064 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
local folder : /data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Confusion_Matrix.png : size_local : 67810 ; size in s3 : 67810 ; create time local : 2023-10-30 16:21:37 ; create time in s3 : 2023-10-30 14:09:29
Confusion_Matrix.png already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_dense.jpg : size_local : 73949 ; size in s3 : 73949 ; create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:30
Precision_Recall_05102018_Papier_non_papier_dense.jpg already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg : size_local : 85572 ; size in s3 : 85572 ; create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:39
Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg : size_local : 72361 ; size in s3 : 72361 ; create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:37
Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg : size_local : 83567 ; size in s3 : 83567 ; create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:48
Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg : size_local : 71611 ; size in s3 : 71611 ; create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:29
Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Result_Summary.txt : size_local : 1058 ; size in s3 : 1058 ; create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:30
Result_Summary.txt already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/checkpoint : size_local : 99 ; size in s3 : 99 ; create time local : 2023-10-30 16:21:38 ; create time in s3 : 2023-10-30 14:09:36
checkpoint already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00000-of-00001 : size_local : 188538519 ; size in s3 : 188538519 ; create time local : 2023-10-30 16:21:41 ; create time in s3 : 2023-10-30 14:09:40
model_checkpoint.ckpt.data-00000-of-00001 already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00000-of-00002 : size_local : 216572 ; size in s3 : 216572 ; create time local : 2023-10-30 16:21:41 ; create time in s3 : 2023-03-16 14:52:09
model_checkpoint.ckpt.data-00000-of-00002 already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00001-of-00002 : size_local : 32279708 ; size in s3 : 32279708 ; create time local : 2023-10-30 16:21:42 ; create time in s3 : 2023-03-16 14:52:07
model_checkpoint.ckpt.data-00001-of-00002 already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.index : size_local : 28001 ; size in s3 : 28001 ; create time local : 2023-10-30 16:21:42 ; create time in s3 : 2023-10-30 14:09:30
model_checkpoint.ckpt.index already exists and does not need updating
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_weights.h5 : size_local : 94501976 ; size in s3 : 94501976 ; create time local : 2023-10-30 16:21:44 ; create time in s3 : 2023-10-30 14:09:37
model_weights.h5 already exists and does not need updating
ERROR in datou_step_exec, will save and exit !
assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse
  File "/home/admin/workarea/git/Velours/python/mtr/datou/datou_lib.py", line 2329, in datou_exec
    output = datou_step_exec(sNext, args, cache, context, map_info, verbose, mtr_user_id)
  File "/home/admin/workarea/git/Velours/python/mtr/datou/datou_lib.py", line 2523, in datou_step_exec
    return lib_process.datou_step_tfhub2(param, json_param, args, cache, context, map_info, verbose)
  File "/home/admin/workarea/git/Velours/python/mtr/datou/lib_step_exec/lib_step_process.py", line 3138, in datou_step_tfhub2
    this_model = model_evaluator(model_name, model_type=model_type, fc_size=fc_size, use_multi_inputs=use_multi_inputs)
  File "/home/admin/workarea/git/Velours/python/mtr/tfhub2/evaluate.py", line 156, in __init__
    self.model, _, _ = create_tfhub_model(module_handle=self.tfhub_module,
  File "/home/admin/workarea/git/Velours/python/mtr/tfhub2/evaluate.py", line 77, in create_tfhub_model
    hub.KerasLayer(module_handle, trainable=do_fine_tuning, name="module"),
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/keras_layer.py", line 152, in __init__
    self._func = load_module(handle, tags, self._load_options)
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/keras_layer.py", line 421, in load_module
    return module_v2.load(handle, tags=tags, options=set_load_options)
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/module_v2.py", line 106, in load
    obj = tf.compat.v1.saved_model.load_v2(module_path, tags=tags)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 578, in load
    return load_internal(export_dir, tags)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 602, in load_internal
    loader = loader_cls(object_graph_proto,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 123, in __init__
    self._load_all()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 134, in _load_all
    self._load_nodes()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 264, in _load_nodes
    node, setter = self._recreate(proto, node_id)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 370, in _recreate
    return factory[kind]()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 363, in <lambda>
    "variable": lambda: self._recreate_variable(proto.variable),
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 426, in _recreate_variable
    return variables.Variable(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 261, in __call__
    return cls._variable_v2_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 243, in _variable_v2_call
    return previous_getter(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 66, in getter
    return captured_getter(captured_previous, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py",
line 418, in uninitialized_variable_creator
    return resource_variable_ops.UninitializedVariable(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 263, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1795, in __init__
    handle = _variable_handle_from_shape_and_dtype(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 174, in _variable_handle_from_shape_and_dtype
    gen_logging_ops._assert(  # pylint: disable=protected-access
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_logging_ops.py", line 55, in _assert
    _ops.raise_from_not_ok_status(e, name)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "", line 3, in raise_from
[1335512119, 1335512098, 1335512064, 1335512013, 1335512011, 1335512007, 1335511735, 1335511664, 1335511656, 1335511650, 1335511645, 1335511621, 1335511449, 1335511351, 1335511262, 1335511173, 1335511065, 1335510971, 1335510800, 1335510795, 1335510788, 1335510785, 1335510782, 1335510747, 1335510715, 1335510707, 1335510695, 1335510691, 1335510687, 1335510685, 1335510661, 1335510654, 1335510648, 1335510640, 1335510637, 1335510636, 1335510586, 1335510581, 1335510576, 1335510570, 1335510537, 1335510498, 1335510082, 1335509969, 1335509961, 1335509956, 1335509953, 1335509951, 1335509829, 1335509783, 1335509779, 1335509775, 1335509760, 1335509461, 1335508593, 1335508531, 1335508457, 1335508376, 1335508273, 1335508162, 1335495820, 1335495817, 1335495815, 1335495813, 1335495731, 1335495725, 1335495720, 1335495712, 1335495702, 1335495675, 1335495651, 1335495648, 1335495644, 1335495641, 1335495593, 1335495588, 1335495412, 1335495403, 1335495377, 1335495316, 1335495265, 1335495238, 1335494906, 1335494824, 1335494798, 1335494795, 1335494792, 1335494788, 1335494764, 1335494760, 1335494756, 1335494753, 1335494748, 1335494736, 1335494221, 1335494169, 1335494088, 1335494052, 1335493952, 1335493885, 1335493441, 1335493438, 1335493431, 1335493426, 1335493414, 1335493411, 1335493363, 1335493299, 1335493281, 1335493145, 1335492955, 1335492762, 1335492184, 1335492094, 1335492055, 1335492000, 1335491955, 1335491912, 1335386220, 1335386214, 1335385933, 1335385840, 1335385741, 1335385561, 1335384676, 1335384595, 1335384506, 1335384504, 1335384497, 1335384494, 1335384492, 1335384489, 1335384486, 1335384483, 1335384431, 1335384427, 1335384425, 1335384353, 1335384345, 1335384342, 1335384323, 1335384321, 1335384314, 1335384308, 1335384307, 1335384302, 1335384266, 1335384261, 1335384257, 1335384253, 1335384249, 1335384228, 1335384224, 1335384201, 1335384197, 1335384194, 1335384190, 1335384119, 1335384115, 1335384100, 1335384081, 1335384078, 1335384069, 1335384064, 1335384058, 1335384051, 1335383992, 1335383989, 1335383986, 1335383710, 1335383707, 1335383704, 1335383701, 1335383698, 1335383672, 1335383501, 1335383498, 1335383495, 1335383492, 1335383489, 1335383486, 1335383391, 1335383361, 1335383302, 1335383244, 1335383189, 1335383152, 1335383054, 1335382976, 1335382900, 1335382822, 1335382156, 1335382068, 1335382062, 1335382059, 1335382057, 1335382054, 1335382040, 1335382035, 1335382033, 1335382030, 1335382028, 1335381996, 1335381963, 1335381961]
begin to insert list_values into mtr_datou_result : length of list_values in save_final : 205
time used for this insertion : 0.44737696647644043
save_final
ERROR in last step tfhub_classification2, assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse
time spent in datou_step_exec : 75.7302520275116
time spent to save output : 0.4564838409423828
total time spent for step 0 : 76.18673586845398
need to delete datou_research and reload, so keep current state 1
caffe_path_current :
About to save !
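The per-file checks logged earlier ("size_local" vs "size in s3", the two create times, "already exist and didn't need to update") suggest a freshness test like the following sketch. Here s3_size and s3_mtime stand in for values from an S3 HEAD request; the names and exact rule are assumptions, not the actual sync code:

```python
import os
from datetime import datetime

def needs_update(local_path, s3_size, s3_mtime):
    """Decide whether a model file must be re-downloaded from S3.

    Sketch of the freshness check implied by the log: re-download only if
    the local file is missing, its size differs from the S3 copy, or the
    S3 copy is newer than the local one. s3_size (bytes) and s3_mtime
    (datetime) are assumed to come from an S3 HEAD request.
    """
    if not os.path.exists(local_path):
        return True                              # missing locally
    if os.path.getsize(local_path) != s3_size:
        return True                              # size mismatch
    local_mtime = datetime.fromtimestamp(os.path.getmtime(local_path))
    return s3_mtime > local_mtime                # S3 copy is newer
```

This matches the log's behavior for every file above: equal sizes and an older S3 timestamp mean no update is needed.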
2
After save, about to update current !
11.41user 3.72system 1:55.80elapsed 13%CPU (0avgtext+0avgdata 1029240maxresident)k
114728inputs+379056outputs (522major+479445minor)pagefaults 0swaps
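The "wait 20 seconds" retry loop seen during the GPU check (together with the max_wait : 5 setting) can be sketched as polling free GPU memory with bounded retries. get_free_mb is a hypothetical probe (e.g. parsing nvidia-smi --query-gpu=memory.free) and not the script's actual function:

```python
import time

def wait_for_gpu_memory(get_free_mb, need_mb=3096, max_wait=5, sleep_s=20):
    """Poll free GPU memory until at least need_mb MB is available or
    max_wait attempts are exhausted. Sketch of the retry behavior shown
    in the log ('wait 20 seconds' between probes); get_free_mb is a
    hypothetical callable returning free GPU memory in MB.
    """
    for _ in range(max_wait):
        if get_free_mb() >= need_mb:
            return True                # enough memory, proceed
        time.sleep(sleep_s)            # back off before re-probing
    return False                       # give up after max_wait attempts
```

Note that even when the probe eventually reports enough free memory, the allocation can still fail afterwards (as the CUDA_ERROR_OUT_OF_MEMORY lines above show), since other processes may grab the memory between the check and the allocation.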