python /home/admin/mtr/script_for_cron.py -j datou_current3 -m 10 -a ' -a 4892 ' -s datou_current_4892 -M 0 -S 0 -U 95,95,80
import MySQLdb succeeded
Import error (python version) ['/Users/moilerat/Documents/Fotonower/install/caffe/distribute/python', '/home/admin/workarea/git/Velours/python/prod', '/home/admin/workarea/install/caffe_cuda8_python3/python', '/home/admin/workarea/install/darknet', '/home/admin/workarea/git/Velours/python', '/home/admin/workarea/install/caffe_frcnn_python3/py-faster-rcnn/caffe-fast-rcnn/python', '/home/admin/mtr/.credentials', '/home/admin/workarea/install/caffe/python', '/home/admin/workarea/install/caffe_frcnn/py-faster-rcnn/tools', '/home/admin/workarea/git/fotonowerpip', '/home/admin/workarea/install/segment-anything', '/home/admin/workarea/git/pyfvs', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/admin/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages']
process id : 3477630
load datou : 4892
# VR 17-11-17 : to create in DB !
Here we check the datou graph and we reorder the steps.
Tree built and cycles checked; now we need to re-order the steps.
We currently have an error because there is no dependence between the last steps in the tile - detect - glue case.
We could keep that dependence, but it is better to keep an order compatible with the step ids when steps have no sons, so a lexical order : (number_son, step_id).
All sons are already in the current list.
DONE and to test : checkNoCycle
Here we check that the number of inputs/outputs is consistent between the given ones and the DB.
eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering.
Number of inputs / outputs for each step checked.
Here we check the consistency of output/input types across step connections.
eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput.
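The re-ordering described above (a lexical order on (number_son, step_id) for steps whose dependencies are satisfied) together with the checkNoCycle test can be sketched as a Kahn-style topological sort. This is a minimal illustration, not the actual datou_lib code; the step/edge representation is an assumption.

```python
import heapq

def reorder_steps(steps, deps):
    """Order steps so that dependencies come first, breaking ties by the
    lexical order (number_son, step_id) mentioned in the log.

    steps: {step_id: number_son} (number_son is assumed, not from real code)
    deps:  list of (parent_step_id, child_step_id) edges
    Raises ValueError on a cycle (the checkNoCycle role)."""
    indegree = {s: 0 for s in steps}
    children = {s: [] for s in steps}
    for parent, child in deps:
        indegree[child] += 1
        children[parent].append(child)
    # steps with no unmet dependency, keyed by (number_son, step_id)
    ready = [(steps[s], s) for s in steps if indegree[s] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, s = heapq.heappop(ready)
        order.append(s)
        for c in children[s]:
            indegree[c] -= 1
            if indegree[c] == 0:
                heapq.heappush(ready, (steps[c], c))
    if len(order) != len(steps):
        raise ValueError("cycle detected in datou graph")
    return order
```

For example, with three steps where step 1 feeds step 2 and step 3 is independent, the ordering is [1, 3, 2]: step 3 (no sons) is scheduled before step 2 because of the lexical tie-break.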
DataTypes for each output/input checked.
Unexpected type for variable list_input_json
ERROR or WARNING : can't parse json string : Expecting value: line 1 column 1 (char 0)
Tried to parse : photo path
was removed, should we ? [ (photo_id, hashtag_id_0, score_0), (photo_id, hashtag_id_1, score_1), ...]
was removed, should we ? [ (photo_id, hashtag_id_0, score_0), (photo_id, hashtag_id_1, score_1), ...]
was removed, should we ? (photo_id, hashtag_id, score_max)
was removed, should we ? photo path
load thcls
load pdts
Running datou job : batch_current
TODO : datou_current to load; maybe move it outside batchDatouExec
no input labels
no input values
updating current state to 1
we have a portfolio with more photos than the limit : 1623 > 1000
please execute split_portfolio.py -i 20322867 -l 1000
size over the limit : we load only up to the limit; the remaining photos are not treated
list_input_json: {}
Current got : datou_id : 4892, datou_cur_ids : ['2561624'] with mtr_portfolio_ids : ['20322867'] and first list_photo_ids : []
new path : /proc/3477630/
Inside batchDatouExec : verbose : 0
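The portfolio-over-limit warning above (1623 > 1000, with a suggested split_portfolio.py invocation) amounts to splitting a photo-id list into chunks of at most `limit` items. A minimal sketch of that idea, under the assumption that this is what the (not shown) split_portfolio.py does; the function name mirrors the script but is otherwise invented:

```python
def split_portfolio(photo_ids, limit=1000):
    """Split a photo-id list into chunks of at most `limit` items,
    mirroring the split suggested by `split_portfolio.py -l 1000`.
    (Hypothetical helper; the real script is not shown in the log.)"""
    if limit <= 0:
        raise ValueError("limit must be positive")
    return [photo_ids[i:i + limit] for i in range(0, len(photo_ids), limit)]
```

With 1623 photos and a limit of 1000 this yields two sub-portfolios of 1000 and 623 photos, instead of silently dropping the 623 photos beyond the limit as the current run does.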
Here we check the consistency of output/input types across step connections.
eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput.
DataTypes for each output/input checked.
List Step Type Loaded in datou : tfhub_classification2, argmax
over limit max, limiting to limit_max 300
list_input_json : {}
origin
We have 1 , WARNING: data may be incomplete, need to offset and complete !
we have 0 photos missing in the download step : photos missing : []
try to delete the photos missing in DB
length of list_filenames : 123 ; length of list_pids : 123 ; length of list_args : 123
time to download the photos : 21.554373502731323
About to test input to load
we should then remove the video here, and this would fix the bug of datou_current !
Calling datou_exec
Inside datou_exec : verbose : 0
number of steps : 2
step1:tfhub_classification2
Fri Feb 7 12:07:48 2025
VR 17-11-17 : for now, only linear exec; in the dependencies tree, some output goes to fill the input of the next step
VR 22-3-18 : now we test the dependencies tree, but keep two separate code paths for datou_prepare_output_input until the code is correctly tested, clean, and works in both cases
VR 22-3-18 : but we use the first code path for the first step id = -1, built in the code of datou_exec
VR 22-3-18 : we should manage here the case where we are at the first step, instead of building this step before datou_exec
Beginning of datou_step TFHub with tf2 !
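The "data may be incomplete, need to offset and complete" warning above, next to the limit_max of 300, suggests a query capped at limit_max rows that must be repeated with an increasing offset until everything is retrieved. A generic sketch of that loop, with the page-fetching function injected since the real query code is not visible in this log:

```python
def fetch_all(fetch_page, limit_max=300):
    """Repeatedly call fetch_page(offset, limit) until a short page signals
    the end, i.e. the 'offset and complete' pattern hinted at in the log.
    fetch_page is an assumed callable, not a real function from the codebase."""
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, limit_max)
        rows.extend(page)
        if len(page) < limit_max:  # short page: nothing left to complete
            return rows
        offset += limit_max
```

The termination test relies on the backend returning fewer than limit_max rows only on the last page, which holds for standard SQL LIMIT/OFFSET pagination over a stable result set.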
number of thcls : 2
we are using the classification for multi_thcl [3513, 3890]
begin to check gpu status
inside check gpu memory
not enough gpu memory : need 3096
l 3632 free gpu memory now : 1944
wait 20 seconds
inside check gpu memory
2025-02-07 12:08:19.928196: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2025-02-07 12:08:19.972355: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 12:08:19.975659: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 12:08:20.012035: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 12:08:20.035959: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 12:08:20.040648: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 12:08:20.084780: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 12:08:20.094478: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 12:08:20.200641: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 12:08:20.202065: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 12:08:20.204950: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2025-02-07 12:08:20.247150: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3493065000 Hz
2025-02-07 12:08:20.249538: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f0a20000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2025-02-07 12:08:20.249594: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2025-02-07 12:08:20.580417: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1d0f4f50 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2025-02-07 12:08:20.580501: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
2025-02-07 12:08:20.582465: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 12:08:20.582587: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 12:08:20.582620: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 12:08:20.582652: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 12:08:20.582682: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 12:08:20.582713: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 12:08:20.582742: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 12:08:20.582773: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 12:08:20.584105: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 12:08:20.585973: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 12:08:20.587485: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2025-02-07 12:08:20.587513: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2025-02-07 12:08:20.587528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2025-02-07 12:08:20.589226: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5)
l 3637 free gpu memory now : 3358
max_wait_temp : 2 max_wait : 5
1 Physical GPUs, 1 Logical GPUs
tagging for thcl : 3513
To do loadFromThcl(), then load ParamDescType : thcl3513
thcls : [{'id': 3513, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2_tf', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 4557, 'photo_desc_type': 5767, 'type_classification': 'tf_classification2', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}]
thcl {'id': 3513, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2_tf', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 4557, 'photo_desc_type': 5767, 'type_classification': 'tf_classification2', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}
Update svm_hashtag_type_desc : 5767
FOUND : 1
Here is data_from_sql_as_vec to set the ParamDescriptorType : (5767, 'Rungis_amount_dechets_fall_2018_v2_tf', 2048, 2048, 'Rungis_amount_dechets_fall_2018_v2_tf', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 3, datetime.datetime(2023, 3, 16, 15, 52, 10), datetime.datetime(2023, 3, 16, 15, 52, 10))
model_name : Rungis_amount_dechets_fall_2018_v2_tf
model_param file doesn't exist
model_name : Rungis_amount_dechets_fall_2018_v2_tf
model_type : tf_classification2
list of files needed : ['Confusion_Matrix.png', 'Precision_Recall_05102018_Papier_non_papier_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg', 'Result_Summary.txt', 'checkpoint', 'model_checkpoint.ckpt.data-00000-of-00001', 'model_checkpoint.ckpt.data-00000-of-00002', 'model_checkpoint.ckpt.data-00001-of-00002', 'model_checkpoint.ckpt.index', 'model_weights.h5']
files existing in s3 : ['Confusion_Matrix.png', 'Precision_Recall_05102018_Papier_non_papier_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg', 'Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg', 'Result_Summary.txt', 'checkpoint', 'model_checkpoint.ckpt.data-00000-of-00001', 'model_checkpoint.ckpt.data-00000-of-00002', 'model_checkpoint.ckpt.data-00001-of-00002', 'model_checkpoint.ckpt.index', 'model_weights.h5']
files missing in s3 : []
local folder : /data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Confusion_Matrix.png size_local : 67810 size in s3 : 67810 create time local : 2023-10-30 16:21:37 create time in s3 : 2023-10-30 14:09:29
Confusion_Matrix.png already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_dense.jpg size_local : 73949 size in s3 : 73949 create time local : 2023-10-30 16:21:38 create time in s3 : 2023-10-30 14:09:30
Precision_Recall_05102018_Papier_non_papier_dense.jpg already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg size_local : 85572 size in s3 : 85572 create time local : 2023-10-30 16:21:38 create time in s3 : 2023-10-30 14:09:39
Precision_Recall_05102018_Papier_non_papier_peu_dense.jpg already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg size_local : 72361 size in s3 : 72361 create time local : 2023-10-30 16:21:38 create time in s3 : 2023-10-30 14:09:37
Precision_Recall_05102018_Papier_non_papier_presque_vide.jpg already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg size_local : 83567 size in s3 : 83567 create time local : 2023-10-30 16:21:38 create time in s3 : 2023-10-30 14:09:48
Precision_Recall_05102018_Papier_non_papier_tres_dense.jpg already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg size_local : 71611 size in s3 : 71611 create time local : 2023-10-30 16:21:38 create time in s3 : 2023-10-30 14:09:29
Precision_Recall_05102018_Papier_non_papier_tres_peu_dense.jpg already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/Result_Summary.txt size_local : 1058 size in s3 : 1058 create time local : 2023-10-30 16:21:38 create time in s3 : 2023-10-30 14:09:30
Result_Summary.txt already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/checkpoint size_local : 99 size in s3 : 99 create time local : 2023-10-30 16:21:38 create time in s3 : 2023-10-30 14:09:36
checkpoint already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00000-of-00001 size_local : 188538519 size in s3 : 188538519 create time local : 2023-10-30 16:21:41 create time in s3 : 2023-10-30 14:09:40
model_checkpoint.ckpt.data-00000-of-00001 already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00000-of-00002 size_local : 216572 size in s3 : 216572 create time local : 2023-10-30 16:21:41 create time in s3 : 2023-03-16 14:52:09
model_checkpoint.ckpt.data-00000-of-00002 already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.data-00001-of-00002 size_local : 32279708 size in s3 : 32279708 create time local : 2023-10-30 16:21:42 create time in s3 : 2023-03-16 14:52:07
model_checkpoint.ckpt.data-00001-of-00002 already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_checkpoint.ckpt.index size_local : 28001 size in s3 : 28001 create time local : 2023-10-30 16:21:42 create time in s3 : 2023-10-30 14:09:30
model_checkpoint.ckpt.index already exists; no update needed
/data/models_weight/Rungis_amount_dechets_fall_2018_v2_tf/model_weights.h5 size_local : 94501976 create time local : 2023-10-30 16:21:44
create time in s3 : 2023-10-30 14:09:37
model_weights.h5 already exists; no update needed
ERROR in datou_step_exec, will save and exit !
assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse
  File "/home/admin/workarea/git/Velours/python/mtr/datou/datou_lib.py", line 2329, in datou_exec
    output = datou_step_exec(sNext, args, cache, context, map_info, verbose, mtr_user_id)
  File "/home/admin/workarea/git/Velours/python/mtr/datou/datou_lib.py", line 2523, in datou_step_exec
    return lib_process.datou_step_tfhub2(param, json_param, args, cache, context, map_info, verbose)
  File "/home/admin/workarea/git/Velours/python/mtr/datou/lib_step_exec/lib_step_process.py", line 3138, in datou_step_tfhub2
    this_model = model_evaluator(model_name, model_type=model_type, fc_size=fc_size, use_multi_inputs=use_multi_inputs)
  File "/home/admin/workarea/git/Velours/python/mtr/tfhub2/evaluate.py", line 156, in __init__
    self.model, _, _ = create_tfhub_model(module_handle=self.tfhub_module,
  File "/home/admin/workarea/git/Velours/python/mtr/tfhub2/evaluate.py", line 77, in create_tfhub_model
    hub.KerasLayer(module_handle, trainable=do_fine_tuning, name="module"),
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/keras_layer.py", line 152, in __init__
    self._func = load_module(handle, tags, self._load_options)
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/keras_layer.py", line 421, in load_module
    return module_v2.load(handle, tags=tags, options=set_load_options)
  File "/home/admin/.local/lib/python3.8/site-packages/tensorflow_hub/module_v2.py", line 106, in load
    obj = tf.compat.v1.saved_model.load_v2(module_path, tags=tags)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 578, in load
    return load_internal(export_dir, tags)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 602, in load_internal
    loader = loader_cls(object_graph_proto,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 123, in __init__
    self._load_all()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 134, in _load_all
    self._load_nodes()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 264, in _load_nodes
    node, setter = self._recreate(proto, node_id)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 370, in _recreate
    return factory[kind]()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 363, in <lambda>
    "variable": lambda: self._recreate_variable(proto.variable),
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 426, in _recreate_variable
    return variables.Variable(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 261, in __call__
    return cls._variable_v2_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 243, in _variable_v2_call
    return previous_getter(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 66, in getter
    return captured_getter(captured_previous, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/load.py", line 418, in uninitialized_variable_creator
    return resource_variable_ops.UninitializedVariable(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/variables.py", line 263, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1795, in __init__
    handle = _variable_handle_from_shape_and_dtype(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 174, in _variable_handle_from_shape_and_dtype
    gen_logging_ops._assert(  # pylint: disable=protected-access
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_logging_ops.py", line 55, in _assert
    _ops.raise_from_not_ok_status(e, name)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
[1335496314, 1335496309, 1335496305, 1335496302, 1335496298, 1335496295, 1335496277, 1335496273, 1335496267, 1335496262, 1335496258, 1335496256, 1335496236, 1335496231, 1335496227, 1335496221, 1335496218, 1335496215, 1335496188, 1335496182, 1335496177, 1335496174, 1335496170, 1335496167, 1335496114, 1335496110, 1335496107, 1335496104, 1335496100, 1335496096, 1335496066, 1335496063, 1335496061, 1335496059, 1335496055, 1335496052, 1335496013, 1335496006, 1335496005, 1335496001, 1335495999, 1335495997, 1335495992, 1335495991, 1335495980, 1335495979, 1335495973, 1335495972, 1335495863, 1335495861, 1335495829, 1335495825, 1335495821, 1335495818, 1335495738, 1335495733, 1335495727, 1335495723, 1335495719, 1335495711, 1335495657, 1335495654, 1335495650, 1335495646, 1335495643, 1335495640, 1335445475, 1335445472, 1335445471, 1335445433, 1335445431, 1335445428, 1335445421, 1335445406, 1335445397, 1335445376, 1335445373, 1335445370, 1335445365, 1335445360, 1335445355, 1335445330, 1335445320, 1335445316, 1335445311, 1335445308, 1335445305, 1335445274, 1335445266, 1335445262, 1335445252, 1335445249, 1335445246, 1335445245, 1335445242, 1335445241, 1335445237, 1335445235, 1335445232, 1335444398, 1335444110, 1335443982, 1335443904, 1335443817, 1335443759, 1335443084, 1335443080, 1335443074, 1335443055, 1335443043, 1335443040, 1335443009, 1335443002, 1335442999, 1335442976, 1335442974, 1335442964, 1335417428, 1335417425, 1335417421, 1335417418, 1335417416, 1335417414]
begin to insert list_values into mtr_datou_result : length of list_values in save_final : 123
time used for this insertion : 0.05286359786987305
save_final
ERROR in last step tfhub_classification2, assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse
time spent for datou_step_exec : 41.800642013549805
time spent to save output : 0.07554745674133301
total time spent for step 0 : 41.87618947029114
need to delete datou_research and reload, so keep current state 1
caffe_path_current :
About to save ! 2
After save, about to update current !
11.07user 6.41system 1:07.69elapsed 25%CPU (0avgtext+0avgdata 1231988maxresident)k 1437560inputs+236296outputs (14155major+456276minor)pagefaults 0swaps
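The run fails inside hub.KerasLayer with `assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse` while the SavedModel's variables are being re-created. This assertion is commonly associated with the same variable names being created twice in eager mode within one process; whether that is the root cause here would need verification against the tfhub2/evaluate.py code. One hedged mitigation is to cache the constructed evaluator per (model_name, model_type) so each process builds it at most once. The sketch below uses an injected constructor, since the real `model_evaluator` class is not reproduced in this log:

```python
import functools

def make_cached_loader(build_model):
    """Wrap a model constructor so each (model_name, model_type) pair is
    built at most once per process; repeated datou steps then reuse the
    instance instead of re-creating the SavedModel's variables in eager
    mode. build_model is an assumed callable standing in for the real
    model_evaluator constructor."""
    @functools.lru_cache(maxsize=None)
    def get_model(model_name, model_type):
        return build_model(model_name, model_type)
    return get_model
```

If the double construction cannot be avoided, an alternative often used with TF2 is to run each step in a fresh subprocess so the eager variable namespace starts clean; that matches the "will save and exit" recovery the log already performs.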