python /home/admin/mtr/script_for_cron.py -j default -m 20 -a 'python3 ~/workarea/git/Velours/python/prod/datou.py -j batch_current -C 2497679' -s traitement_3459 -M 0 -S 0 -U 100,80,95
import MySQLdb succeeded
Import error (python version) ['/Users/moilerat/Documents/Fotonower/install/caffe/distribute/python', '/home/admin/workarea/git/Velours/python/prod', '/home/admin/workarea/install/darknet', '/home/admin/workarea/git/Velours/python', '/home/admin/workarea/install/caffe_frcnn_python3/py-faster-rcnn/caffe-fast-rcnn/python', '/home/admin/mtr/.credentials', '/home/admin/workarea/install/caffe/python', '/home/admin/workarea/install/caffe_frcnn/py-faster-rcnn/tools', '/home/admin/workarea/git/fotonowerpip', '/home/admin/workarea/install/segment-anything', '/home/admin/workarea/git/pyfvs', '/home/admin/workarea/git/apy', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/admin/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages']
process id : 3077394
load datou : 0
# VR 17-11-17 : to create in DB !
Here we check the datou graph and we reorder the steps !
Tree built and cycles checked; now we need to re-order the steps !
We currently have an error because there is no dependence between the last steps in the tile - detect - glue case.
We could keep that dependence, but it is better to keep an order compatible with the step ids when a step has no sons, i.e. a lexical order on (number_son, step_id).
DONE and to test : checkNoCycle !
Here we check that the number of inputs/outputs matches between the given ones and the db !
eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering !
WARNING : step 0 init_dummy_multi_datou is not linked in the step_by_step architecture !
WARNING : step 1294 init_dummy_multi_datou is not linked in the step_by_step architecture !
Number of inputs / outputs for each step checked !
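The graph check above verifies the datou step tree has no cycle and then re-orders steps, preferring a lexical order on (number_son, step_id). A minimal sketch of such a checkNoCycle-plus-reorder pass, using a Kahn-style topological sort; all names and data shapes below are assumptions for illustration, not the actual Velours implementation:

```python
from collections import defaultdict

def check_no_cycle_and_reorder(steps, edges):
    """Topologically sort step ids, raising on cycles.

    steps: iterable of step ids; edges: (parent_id, son_id) pairs.
    Ties are broken lexically by (number_of_sons, step_id), mirroring
    the (number_son, step_id) order mentioned in the log.
    """
    sons = defaultdict(list)            # parent id -> list of son ids
    indegree = {s: 0 for s in steps}
    for parent, son in edges:
        sons[parent].append(son)
        indegree[son] += 1

    key = lambda s: (len(sons[s]), s)   # lexical tie-break
    ready = sorted((s for s in indegree if indegree[s] == 0), key=key)
    order = []
    while ready:
        step = ready.pop(0)
        order.append(step)
        for son in sons[step]:          # a finished step frees its sons
            indegree[son] -= 1
            if indegree[son] == 0:
                ready.append(son)
        ready.sort(key=key)
    if len(order) != len(indegree):     # leftover steps => a cycle exists
        raise ValueError("cycle detected in datou graph")
    return order
```

With a diamond graph 1 -> {2, 3} -> 4 this yields [1, 2, 3, 4]; a 1 <-> 2 cycle raises.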
Here we check the consistency of output/input types across step connections.
eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput !
DataTypes for each output/input checked !
Unexpected type for variable list_input_json
ERROR or WARNING : can't parse json string Expecting value: line 1 column 1 (char 0)
Tried to parse :
(photo_id, hashtag_id, score_max) was removed, should we ?
(x0, y0, x1, y1) was removed, should we ?
photo path was removed, should we ?
(photo_id, hashtag_id, score_max) was removed, should we ?
(x0, y0, x1, y1) was removed, should we ?
photo path was removed, should we ?
load thcls
load pdts
Running datou job : batch_current
TODO datou_current to load
to do : maybe take this outside batchDatouExec
updating current state to 1
list_input_json : []
Current got : datou_id : 3459, datou_cur_ids : ['2497679'] with mtr_portfolio_ids : ['19767304'] and first list_photo_ids : []
new path : /proc/3077394/
Inside batchDatouExec : verbose : 0
# VR 17-11-17 : to create in DB !
Here we check the datou graph and we reorder the steps !
Tree built and cycles checked; now we need to re-order the steps !
We currently have an error because there is no dependence between the last steps in the tile - detect - glue case.
We could keep that dependence, but it is better to keep an order compatible with the step ids when a step has no sons, i.e. a lexical order on (number_son, step_id).
All sons are already in the current list !
All sons are already in the current list !
All sons are already in the current list !
All sons are already in the current list !
All sons are already in the current list !
All sons are already in the current list !
All sons are already in the current list !
All sons are already in the current list !
All sons are already in the current list !
All sons are already in the current list !
DONE and to test : checkNoCycle !
Here we check that the number of inputs/outputs matches between the given ones and the db !
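The "can't parse json string" line above is the signature of a tolerant JSON parse: an empty or malformed list_input_json is logged and replaced by a default instead of aborting the job. A minimal sketch of that pattern (parse_input_json is a hypothetical name, not taken from the Velours code):

```python
import json

def parse_input_json(raw, default=None):
    """Parse a JSON string, logging and falling back to a default on failure.

    json.loads('') raises "Expecting value: line 1 column 1 (char 0)",
    exactly the message seen in the log above.
    """
    try:
        return json.loads(raw)
    except (TypeError, ValueError) as exc:
        print("ERROR or WARNING : can't parse json string", exc)
        print("Tried to parse :", repr(raw))
        return [] if default is None else default
```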
eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering !
WARNING : number of outputs for step 11449 mask_detect is not consistent : 3 used against 2 in the step definition !
Step 11452 crop_condition has fewer inputs used (1) than in the step definition (2) : maybe we manage optional inputs !
Step 11452 crop_condition has fewer outputs used (2) than in the step definition (3) : some outputs may not be used !
Step 11453 merge_mask_thcl_custom has fewer inputs used (2) than in the step definition (3) : maybe we manage optional inputs !
WARNING : number of outputs for step 11453 merge_mask_thcl_custom is not consistent : 4 used against 2 in the step definition !
WARNING : number of inputs for step 11454 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition !
WARNING : number of outputs for step 11454 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition !
Step 11478 crop_condition has fewer inputs used (1) than in the step definition (2) : maybe we manage optional inputs !
WARNING : number of outputs for step 11478 crop_condition is not consistent : 4 used against 3 in the step definition !
WARNING : number of inputs for step 11456 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition !
WARNING : number of outputs for step 11456 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition !
Step 11455 final has fewer inputs used (2) than in the step definition (3) : maybe we manage optional inputs !
Step 11455 final has fewer outputs used (1) than in the step definition (2) : some outputs may not be used !
Step 11458 send_mail_cod has fewer inputs used (3) than in the step definition (5) : maybe we manage optional inputs !
Number of inputs / outputs for each step checked !
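The warnings above compare, per step, the number of inputs/outputs actually wired against the counts in the step definition from the db: more than defined is a hard WARNING, fewer is tolerated (optional inputs, unused outputs). A sketch of such a checkConsistencyNbInputNbOutput-style comparison; the data shapes are hypothetical, only the message wording comes from the log:

```python
def check_nb_input_output(step_name, step_id, used, definition):
    """Return warning strings comparing used vs defined input/output counts.

    used / definition: dicts like {"inputs": int, "outputs": int}.
    """
    msgs = []
    for kind in ("inputs", "outputs"):
        n_used, n_def = used[kind], definition[kind]
        if n_used > n_def:
            # using more connections than the definition allows is suspicious
            msgs.append(f"WARNING : number of {kind} for step {step_id} {step_name} "
                        f"is not consistent : {n_used} used against {n_def} in the step definition !")
        elif n_used < n_def:
            # fewer is fine: optional inputs, or outputs nobody consumes
            reason = ("maybe we manage optional inputs" if kind == "inputs"
                      else "some outputs may not be used")
            msgs.append(f"Step {step_id} {step_name} has fewer {kind} used ({n_used}) "
                        f"than in the step definition ({n_def}) : {reason} !")
    return msgs
```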
Here we check the consistency of output/input types across step connections.
eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput !
WARNING : type of output 2 of step 11449 doesn't seem to be defined in the database
WARNING : type of input 2 of step 11452 doesn't seem to be defined in the database
WARNING : output 1 of step 11449 has datatype=2 whereas input 1 of step 11453 has datatype=7
WARNING : type of output 2 of step 11453 doesn't seem to be defined in the database
WARNING : type of input 1 of step 11454 doesn't seem to be defined in the database
We ignore checkConsistencyTypeOutputInput for datou_step final !
WARNING : type of output 3 of step 11453 doesn't seem to be defined in the database
WARNING : type of input 1 of step 11456 doesn't seem to be defined in the database
WARNING : type of output 1 of step 11456 doesn't seem to be defined in the database
WARNING : type of input 3 of step 11455 doesn't seem to be defined in the database
We ignore checkConsistencyTypeOutputInput for datou_step final !
We ignore checkConsistencyTypeOutputInput for datou_step final !
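The type check above walks every output-to-input connection and warns when a datatype is missing from the database or when the two ends disagree. A sketch of such a checkConsistencyTypeOutputInput-style pass (hypothetical data shapes; only the message wording is taken from the log):

```python
def check_type_output_input(connections, datatypes):
    """Return type-consistency warnings for step connections.

    connections: (out_step, out_idx, in_step, in_idx) tuples.
    datatypes: {(step_id, 'output'|'input', idx): datatype id or None}.
    """
    msgs = []
    for out_step, out_idx, in_step, in_idx in connections:
        dt_out = datatypes.get((out_step, "output", out_idx))
        dt_in = datatypes.get((in_step, "input", in_idx))
        if dt_out is None:  # datatype row missing in the db
            msgs.append(f"WARNING : type of output {out_idx} of step {out_step} "
                        "doesn't seem to be defined in the database")
        if dt_in is None:
            msgs.append(f"WARNING : type of input {in_idx} of step {in_step} "
                        "doesn't seem to be defined in the database")
        if dt_out is not None and dt_in is not None and dt_out != dt_in:
            msgs.append(f"WARNING : output {out_idx} of step {out_step} has "
                        f"datatype={dt_out} whereas input {in_idx} of step "
                        f"{in_step} has datatype={dt_in}")
    return msgs
```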
WARNING : output 0 of step 11456 has datatype=10 whereas input 3 of step 11458 has datatype=6
WARNING : type of input 5 of step 11458 doesn't seem to be defined in the database
WARNING : output 0 of step 11477 has datatype=11 whereas input 5 of step 11458 has datatype=None
WARNING : output 0 of step 11456 has datatype=10 whereas input 0 of step 11477 has datatype=18
WARNING : type of input 2 of step 11478 doesn't seem to be defined in the database
WARNING : output 1 of step 11454 has datatype=7 whereas input 2 of step 11478 has datatype=None
WARNING : type of output 3 of step 11478 doesn't seem to be defined in the database
WARNING : type of input 2 of step 11456 doesn't seem to be defined in the database
WARNING : output 0 of step 11453 has datatype=1 whereas input 0 of step 11454 has datatype=2
DataTypes for each output/input checked !
List of Step Types loaded in datou : mask_detect, crop_condition, thcl, merge_mask_thcl_custom, rle_unique_nms_with_priority, crop_condition, ventilate_hashtags_in_portfolio, final, velours_tree, send_mail_cod, split_time_score
over limit max, limiting to limit_max 20
list_input_json : []
origin
We have 1 , WARNING : data may be incomplete, need to offset and complete !
BFBFBFBFBFBFBFBFBFBFBFBFBFBFBFBFBFBFBFBF
we are missing 0 photos in the download step : missing photos : []
try to delete the photos missing in DB
length of list_filenames : 20 ; length of list_pids : 20 ; length of list_args : 20
time to download the photos : 3.3764758110046387
About to test input to load
we should then remove the video here; this would fix the bug of datou_current !
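The download step above fetches 20 photos in about 3.4 s and reports the ones that went missing. One plausible way to get that throughput is to fetch concurrently and collect failures; a sketch under stated assumptions (fetch_one is a hypothetical callable standing in for the real photo fetcher, which the log does not show):

```python
import concurrent.futures
import time

def download_photos(list_pids, list_filenames, fetch_one, max_workers=8):
    """Download photos concurrently, returning the ids that failed.

    fetch_one(photo_id, filename) is assumed to raise OSError on failure.
    """
    t0 = time.time()
    missing = []

    def task(pid, filename):
        try:
            fetch_one(pid, filename)
        except OSError:
            missing.append(pid)   # list.append is thread-safe under the GIL

    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(task, list_pids, list_filenames))
    print("we are missing %d photos in the download step : %s" % (len(missing), missing))
    print("time to download the photos : %s" % (time.time() - t0))
    return missing
```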
Calling datou_exec
Inside datou_exec : verbose : 0
number of steps : 11
step1:mask_detect Fri Feb 7 10:42:44 2025
VR 17-11-17 : for now, only for linear exec dependency trees; some outputs go to fill the inputs of the next step
VR 22-3-18 : now we test the dependency tree, but keep two separate code paths for datou_prepare_output_input until the code is correctly tested, clean, and works in both cases
VR 22-3-18 : but we use the first code path for the first step id = -1, built in the code of datou_exec
VR 22-3-18 : we should manage here the case where we are at the first step, instead of building this step before datou_exec
Beginning of datou step mask_detect !
save_polygon : True
begin detect
begin to check gpu status
inside check gpu memory l 3637
free memory gpu now : 5485
max_wait_temp : 1 max_wait : 0 gpu_flag : 0
2025-02-07 10:42:47.181094: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2025-02-07 10:42:47.207039: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3493065000 Hz
2025-02-07 10:42:47.209250: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fde50000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2025-02-07 10:42:47.209303: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2025-02-07 10:42:47.213482: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2025-02-07 10:42:47.357241: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x200de1d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2025-02-07 10:42:47.357287: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5
2025-02-07 10:42:47.358786: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 10:42:47.359233: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 10:42:47.362760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 10:42:47.365950: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 10:42:47.366504: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 10:42:47.369291: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 10:42:47.370374: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 10:42:47.374841: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 10:42:47.376362: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 10:42:47.376446: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 10:42:47.377243: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2025-02-07 10:42:47.377259: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2025-02-07 10:42:47.377285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2025-02-07 10:42:47.378695: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9985 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5)
WARNING:tensorflow:From /home/admin/workarea/git/Velours/python/mtr/mask_rcnn/mask_detection.py:69: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.
2025-02-07 10:42:47.662762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 10:42:47.662872: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 10:42:47.662888: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 10:42:47.662916: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 10:42:47.662933: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 10:42:47.662946: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 10:42:47.662960: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 10:42:47.662975: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 10:42:47.664640: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 10:42:47.665760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s
2025-02-07 10:42:47.665794: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2025-02-07 10:42:47.665810: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 10:42:47.665825: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2025-02-07 10:42:47.665840: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2025-02-07 10:42:47.665855: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2025-02-07 10:42:47.665870: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2025-02-07 10:42:47.665885: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2025-02-07 10:42:47.667083: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2025-02-07 10:42:47.667120: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2025-02-07 10:42:47.667129: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2025-02-07 10:42:47.667137: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2025-02-07 10:42:47.668448: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9985 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5)
Using TensorFlow backend.
WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:396: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version. Instructions for updating: box_ind is deprecated, use box_indices instead
WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:703: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead.
WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:729: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead.
Inside mask_sub_process
Inside mask_detect
About to load cache.load_thcl_param
To do loadFromThcl(), then load ParamDescType : thcl2896
thcls : [{'id': 2896, 'mtr_user_id': 31, 'name': 'learn_convoyeur_qualipapia_nantes_poly_100521_1', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,carton_brun,carton_gris,cartonnette,kraft,autre_refus,metal,plastique,teint_dans_la_masse,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3663, 'photo_desc_type': 5309, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0,0'}]
thcl {'id': 2896, 'mtr_user_id': 31, 'name': 'learn_convoyeur_qualipapia_nantes_poly_100521_1', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,carton_brun,carton_gris,cartonnette,kraft,autre_refus,metal,plastique,teint_dans_la_masse,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3663, 'photo_desc_type': 5309, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0,0'}
Update svm_hashtag_type_desc : 5309
FOUND : 1
Here is data_from_sql_as_vec to set the ParamDescriptorType : (5309, 'learn_convoyeur_qualipapia_nantes_poly_100521_1', 16384, 25088, 'learn_convoyeur_qualipapia_nantes_poly_100521_1', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2021, 5, 10, 19, 20, 46), datetime.datetime(2021, 5, 10, 19, 20, 46))
{'thcl': {'id': 2896, 'mtr_user_id': 31, 'name': 'learn_convoyeur_qualipapia_nantes_poly_100521_1', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,carton_brun,carton_gris,cartonnette,kraft,autre_refus,metal,plastique,teint_dans_la_masse,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3663, 'photo_desc_type': 5309, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0,0'}, 'list_hashtags': ['background', 'carton_brun', 'carton_gris', 'cartonnette', 'kraft', 'autre_refus', 'metal', 'plastique', 'teint_dans_la_masse', 'environnement'], 'list_hashtags_csv': 'background,carton_brun,carton_gris,cartonnette,kraft,autre_refus,metal,plastique,teint_dans_la_masse,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3663, 'svm_hashtag_type_desc': 5309, 'photo_desc_type': 5309, 'pb_hashtag_id_or_classifier': 0}
list_class_names : ['background', 'carton_brun', 'carton_gris', 'cartonnette', 'kraft', 'autre_refus', 'metal', 'plastique', 'teint_dans_la_masse', 'environnement']
Configurations:
BACKBONE resnet101
BACKBONE_SHAPES [[160 160] [ 80 80] [ 40 40] [ 20 20] [ 10 10]]
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 1
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.3
DETECTION_NMS_THRESHOLD 0.3
GPU_COUNT 1
IMAGES_PER_GPU 1
IMAGE_MAX_DIM 640
IMAGE_MIN_DIM 640
IMAGE_PADDING True
IMAGE_SHAPE [640 640 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.001
LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 100
MEAN_PIXEL [123.7 116.8 103.9]
MINI_MASK_SHAPE (56, 56)
NAME learn_convoyeur_qualipapia_nantes_poly_100521_1
NUM_CLASSES 10
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (16, 32, 64, 128, 256)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 1000
TRAIN_ROIS_PER_IMAGE 200
USE_MINI_MASK True
USE_RPN_ROIS True
VALIDATION_STEPS 50
WEIGHT_DECAY 0.0001
model_param file didn't exist
model_name : learn_convoyeur_qualipapia_nantes_poly_100521_1
model_type : mask_rcnn
list of files needed : ['mask_model.h5']
files existing in s3 : ['mask_model.h5']
files missing in s3 : []
2025-02-07 10:42:54.817838: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2025-02-07 10:42:55.001554: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
local folder : /data/models_weight/learn_convoyeur_qualipapia_nantes_poly_100521_1
/data/models_weight/learn_convoyeur_qualipapia_nantes_poly_100521_1/mask_model.h5
size_local : 256031040 size in s3 : 256031040
create time local : 2021-08-09 05:45:48 create time in s3 : 2021-08-06 18:59:51
mask_model.h5 already exists and doesn't need an update
list_images length : 20
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 5
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 3.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 2
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 4.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 5
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 5
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 7
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 2.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 4
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 2.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 4
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 1
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 3
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 6
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 2.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 1
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 4.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 3
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 8.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 1
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 5
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 4
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 5
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 4
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 3.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 5
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 2
NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 18) min: 0.00000 max: 1920.00000 number of objects found : 1
Detection mask done !
Trying to reset tf kernel 3077624
begin to check gpu status
inside check gpu memory l 3610
free memory gpu now : 4465
tf kernel not reset
sub process len(results) : 20 len(list_Values) 0
None
max_time_sub_proc : 3600
parent process len(results) : 20 len(list_Values) 0
process is alive
finished correctly or not : True
after detect
begin to check gpu status
inside check gpu memory l 3610
free memory gpu now : 9754
list_Values should be empty []
To do loadFromThcl(), then load ParamDescType : thcl2896
Caught exception ! Connect or reconnect !
thcls : [{'id': 2896, 'mtr_user_id': 31, 'name': 'learn_convoyeur_qualipapia_nantes_poly_100521_1', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,carton_brun,carton_gris,cartonnette,kraft,autre_refus,metal,plastique,teint_dans_la_masse,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3663, 'photo_desc_type': 5309, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0,0'}]
thcl {'id': 2896, 'mtr_user_id': 31, 'name': 'learn_convoyeur_qualipapia_nantes_poly_100521_1', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,carton_brun,carton_gris,cartonnette,kraft,autre_refus,metal,plastique,teint_dans_la_masse,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3663, 'photo_desc_type': 5309, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0,0'}
Update svm_hashtag_type_desc : 5309
['background', 'carton_brun', 'carton_gris', 'cartonnette', 'kraft', 'autre_refus', 'metal', 'plastique', 'teint_dans_la_masse', 'environnement']
time to compute the mask position with numpy : 0.030394792556762695 nb_pixel_total : 1328680 time to create 1 rle with new method : 0.07037663459777832 length of segment : 1301
time to compute the mask position with numpy : 0.002240896224975586 nb_pixel_total : 114549 time to create 1 rle with old method : 0.13058900833129883 length of segment : 816
time to compute the mask position with numpy : 0.0011241436004638672 nb_pixel_total : 40582 time to create 1 rle with old method : 0.07158470153808594 length of segment : 277
time to compute the mask position with numpy : 0.004251956939697266 nb_pixel_total : 241396 time to create 1 rle with new method : 0.010438680648803711 length of segment : 807
time to compute the mask position with numpy : 0.0013515949249267578 nb_pixel_total : 54749 time to create 1 rle with old method : 0.06235694885253906 length of segment : 670
time to compute the mask position with numpy : 0.016869306564331055 nb_pixel_total : 996234 time to create 1 rle with new method : 0.19122719764709473 length of segment : 1106
time to compute the mask position with numpy : 0.024427175521850586 nb_pixel_total : 914676 time to create 1 rle with new method : 0.21059298515319824 length of segment : 2481
time to compute the mask position with numpy : 0.00016427040100097656 nb_pixel_total : 1846 time to create 1 rle with old method : 0.0024552345275878906 length of segment : 59
time to compute the mask position with numpy : 0.002457141876220703 nb_pixel_total : 129231 time to create 1 rle with old method : 0.14878273010253906 length of segment : 581
time to compute the mask position with numpy : 0.002149343490600586 nb_pixel_total : 103040 time to create 1 rle with old method : 0.11832547187805176 length of segment : 506
time to compute the mask position with numpy : 0.0005981922149658203 nb_pixel_total : 40794 time to create 1 rle with old method : 0.046262502670288086 length of segment : 221
time to compute the mask position with numpy : 0.0002486705780029297 nb_pixel_total : 5303 time to create 1 rle with old method : 0.006857872009277344 length of segment : 91
time to compute the mask position with numpy : 0.022367000579833984 nb_pixel_total : 1348459 time to create 1 rle with new method : 0.0604557991027832 length of segment : 1642
time to compute the mask position with numpy : 0.0002148151397705078 nb_pixel_total : 11288 time to create 1 rle with old method : 0.013799190521240234 length of segment : 161
time to compute the mask position with numpy : 0.004303693771362305 nb_pixel_total : 304415 time to create 1 rle with new method : 0.010342121124267578 length of segment : 948
time to compute the mask position with numpy : 0.0005846023559570312 nb_pixel_total : 25767 time to create 1 rle with old method : 0.03232216835021973 length of segment : 204
time to compute the mask position with numpy : 0.018598318099975586 nb_pixel_total : 1128073 time to create 1 rle with new method : 0.2535734176635742 length of segment : 1859
time to compute the mask position with numpy : 0.0022475719451904297 nb_pixel_total : 91202 time to create 1 rle with old method : 0.10347270965576172 length of segment : 544
time to compute the mask position with numpy : 0.0015687942504882812 nb_pixel_total : 105068 time to create 1 rle with old method : 0.11802005767822266 length of segment : 332
time to compute the mask position with numpy : 0.0013427734375 nb_pixel_total : 112473 time to create 1 rle with old method : 0.13543987274169922 length of segment : 332
time to compute the mask position with numpy : 0.0065631866455078125 nb_pixel_total : 333942 time to create 1 rle with new method : 0.014404535293579102 length of segment : 996
time to compute the mask position with numpy : 0.0009152889251708984 nb_pixel_total : 36321 time to create 1 rle with old method : 0.056749582290649414 length of segment : 181
time to compute the mask position with numpy : 0.0028209686279296875 nb_pixel_total : 92492 time to create 1 rle with old method : 0.10706257820129395 length of segment : 559
time to compute the mask position with numpy : 0.007549762725830078 nb_pixel_total : 369726 time to create 1 rle with new method : 0.021056652069091797 length of segment : 984
time to compute the mask position with numpy : 0.0013954639434814453 nb_pixel_total : 75187 time to create 1 rle with old method : 0.08607268333435059 length of segment : 437
time to compute the mask position with numpy : 0.019481897354125977 nb_pixel_total : 1260075 time to create 1 rle with new method : 0.03743147850036621 length of segment : 1517
time to compute the mask position with numpy : 0.0002608299255371094 nb_pixel_total : 13554 time to create 1 rle with old method : 0.01566028594970703 length of segment : 120
time to compute the mask position with numpy : 0.0009429454803466797 nb_pixel_total : 33689 time to create 1 rle with old method : 0.03900790214538574 length of segment : 552
time to compute the mask position with numpy : 0.011691570281982422 nb_pixel_total : 814797 time to create 1 rle with new method : 0.02635788917541504 length of segment : 1490
time to compute the mask position with numpy : 0.015876293182373047 nb_pixel_total : 986239 time to create 1 rle with new method : 0.038654327392578125 length of segment : 2197
time to compute the mask position with numpy : 0.0004189014434814453 nb_pixel_total : 20609 time to create 1 rle with old method : 0.02488398551940918 length of segment : 274
time to compute the mask position with numpy : 0.0009348392486572266 nb_pixel_total : 25712 time to create 1 rle with old method : 0.02944016456604004 length of segment : 281
time to compute the mask position with numpy : 0.0011780261993408203 nb_pixel_total : 49790 time to create 1 rle with old method : 0.058319807052612305 length of segment : 254
time to compute the mask position with numpy : 0.014787435531616211 nb_pixel_total : 758356 time to create 1 rle with new method : 0.029445648193359375 length of segment : 1720
time to compute the mask position with numpy : 0.0007398128509521484 nb_pixel_total : 50571 time to create 1 rle with old method : 0.0575098991394043 length of segment : 318
time to compute the mask position with numpy : 0.00021338462829589844 nb_pixel_total : 10538
time to create 1 rle with old method : 0.012516975402832031 length of segment : 138 time for calcul the mask position with numpy : 0.0023376941680908203 nb_pixel_total : 198269 time to create 1 rle with new method : 0.004541635513305664 length of segment : 585 time for calcul the mask position with numpy : 0.014313220977783203 nb_pixel_total : 1092580 time to create 1 rle with new method : 0.03125596046447754 length of segment : 1052 time for calcul the mask position with numpy : 0.01868271827697754 nb_pixel_total : 1097801 time to create 1 rle with new method : 0.039649248123168945 length of segment : 2059 time for calcul the mask position with numpy : 0.00457763671875 nb_pixel_total : 308687 time to create 1 rle with new method : 0.011960268020629883 length of segment : 1024 time for calcul the mask position with numpy : 0.001588582992553711 nb_pixel_total : 116759 time to create 1 rle with old method : 0.13239717483520508 length of segment : 500 time for calcul the mask position with numpy : 0.023815155029296875 nb_pixel_total : 1267822 time to create 1 rle with new method : 0.08617901802062988 length of segment : 1220 time for calcul the mask position with numpy : 0.013322591781616211 nb_pixel_total : 613246 time to create 1 rle with new method : 0.06745505332946777 length of segment : 1613 time for calcul the mask position with numpy : 0.0005841255187988281 nb_pixel_total : 28847 time to create 1 rle with old method : 0.03430962562561035 length of segment : 158 time for calcul the mask position with numpy : 0.0006964206695556641 nb_pixel_total : 29398 time to create 1 rle with old method : 0.04833245277404785 length of segment : 247 time for calcul the mask position with numpy : 0.003056764602661133 nb_pixel_total : 210187 time to create 1 rle with new method : 0.007489681243896484 length of segment : 394 time for calcul the mask position with numpy : 0.02093505859375 nb_pixel_total : 1141740 time to create 1 rle with new method : 0.07485318183898926 length of 
segment : 2289 time for calcul the mask position with numpy : 0.0012671947479248047 nb_pixel_total : 80399 time to create 1 rle with old method : 0.09800076484680176 length of segment : 281 time for calcul the mask position with numpy : 0.0032134056091308594 nb_pixel_total : 244961 time to create 1 rle with new method : 0.00884389877319336 length of segment : 940 time for calcul the mask position with numpy : 0.0010423660278320312 nb_pixel_total : 36284 time to create 1 rle with old method : 0.049687862396240234 length of segment : 203 time for calcul the mask position with numpy : 0.01767873764038086 nb_pixel_total : 982306 time to create 1 rle with new method : 0.12168407440185547 length of segment : 1989 time for calcul the mask position with numpy : 0.0046732425689697266 nb_pixel_total : 129979 time to create 1 rle with old method : 0.14685726165771484 length of segment : 943 time for calcul the mask position with numpy : 0.0005812644958496094 nb_pixel_total : 17147 time to create 1 rle with old method : 0.020107030868530273 length of segment : 278 time for calcul the mask position with numpy : 0.0001723766326904297 nb_pixel_total : 4671 time to create 1 rle with old method : 0.005871772766113281 length of segment : 75 time for calcul the mask position with numpy : 0.020101547241210938 nb_pixel_total : 1077956 time to create 1 rle with new method : 0.07291388511657715 length of segment : 2392 time for calcul the mask position with numpy : 0.004908084869384766 nb_pixel_total : 182442 time to create 1 rle with new method : 0.011692285537719727 length of segment : 975 time for calcul the mask position with numpy : 0.00046634674072265625 nb_pixel_total : 26782 time to create 1 rle with old method : 0.03130817413330078 length of segment : 225 time for calcul the mask position with numpy : 0.0012543201446533203 nb_pixel_total : 56278 time to create 1 rle with old method : 0.06392455101013184 length of segment : 374 time for calcul the mask position with numpy : 
0.0003364086151123047 nb_pixel_total : 7268 time to create 1 rle with old method : 0.009113788604736328 length of segment : 140 time for calcul the mask position with numpy : 0.0010585784912109375 nb_pixel_total : 30135 time to create 1 rle with old method : 0.03529763221740723 length of segment : 299 time for calcul the mask position with numpy : 0.0004982948303222656 nb_pixel_total : 19574 time to create 1 rle with old method : 0.023152589797973633 length of segment : 119 time for calcul the mask position with numpy : 0.015894174575805664 nb_pixel_total : 869297 time to create 1 rle with new method : 0.21685385704040527 length of segment : 1992 time for calcul the mask position with numpy : 0.0004432201385498047 nb_pixel_total : 24218 time to create 1 rle with old method : 0.028698205947875977 length of segment : 116 time for calcul the mask position with numpy : 0.015413999557495117 nb_pixel_total : 1156574 time to create 1 rle with new method : 0.3830897808074951 length of segment : 1463 time for calcul the mask position with numpy : 0.0023975372314453125 nb_pixel_total : 137155 time to create 1 rle with old method : 0.1642777919769287 length of segment : 460 time for calcul the mask position with numpy : 0.1172490119934082 nb_pixel_total : 1524669 time to create 1 rle with new method : 0.07438135147094727 length of segment : 1381 time spent for convertir_results : 15.08190369606018 Inside saveOutput : final : False verbose : 0 eke 12-6-18 : saveMask need to be cleaned for new output ! 
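The "old method" vs "new method" timings above suggest a per-pixel loop being replaced by a vectorized numpy pass. A minimal sketch of what such a vectorized run-length encoder could look like; the function name and the (start, length) output format are assumptions for illustration, not the pipeline's actual code:

```python
import numpy as np

def rle_encode(mask):
    """Run-length encode a binary mask over its flattened pixels.

    Hypothetical sketch of the vectorized 'new method': instead of
    scanning pixel by pixel, find every 0->1 and 1->0 transition in
    one numpy pass and pair them up into (start, length) runs.
    """
    flat = np.asarray(mask, dtype=np.uint8).ravel()
    # Pad with zeros so every run has a detectable start and end edge.
    padded = np.concatenate(([0], flat, [0]))
    edges = np.flatnonzero(np.diff(padded))  # indices where the value flips
    starts, ends = edges[::2], edges[1::2]
    return list(zip(starts.tolist(), (ends - starts).tolist()))

mask = np.array([[0, 1, 1, 0],
                 [1, 1, 0, 0]])
print(rle_encode(mask))  # [(1, 2), (4, 2)]
```

The padding trick guarantees an even number of edges, so the runs always pair up cleanly even when the mask starts or ends with foreground pixels.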
Number saved : None
batch 1 Loaded 66 chid ids of type : 3663
Number RLEs to save : 52742
save missing photos in datou_result :
time spent for datou_step_exec : 39.778956174850464
time spent to save output : 3.416008472442627
total time spent for step 1 : 43.19496464729309
step2:crop_condition Fri Feb 7 10:43:27 2025
VR 17-11-17 : for now, only for linear exec dependency trees; some output goes to fill the input of the next step
VR 22-3-18 : now we test the dependency tree, but keep two separate code paths for datou_prepare_output_input until the code is correctly tested, cleaned, and working in both cases
VR 22-3-18 : but we use the first code path for the first step (id = -1), built in the code of datou_exec
VR 22-3-18 : we should manage the first-step case here instead of building this step before datou_exec
Currently we do not manage missing dependency information, which could perhaps be interpreted correctly with a default behavior
Some of the work done when a step executes could be done earlier, when the execution tree is built and the dependencies of the different steps are analysed
We should have FATAL ERROR but same_nb_input_output==True : this should be an optional input ! (printed 2 times)
VR 22-3-18 : for now we do not clean the datou structure correctly
Loading chi in step crop with photo_hashtag_type : 3663
Loading chi in step crop for list_pids : 20 !
batch 1 Loaded 66 chid ids of type : 3663
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
begin to crop the class : teint_dans_la_masse
param for this class : {'min_score': 0.7}
filter for class : teint_dans_la_masse
hashtag_id of this class : 2107752385
we have both polygon and rles, next one ! (printed 9 times)
map_result returned by crop_photo_return_map_crop : length : 9
About to insert : list_path_to_insert length 9 new photos from crops !
we have finished the crop for the class : teint_dans_la_masse
begin to crop the class : autre_refus
param for this class : {'min_score': 0.5}
filter for class : autre_refus
hashtag_id of this class : 2107752406
we have both polygon and rles, next one ! (printed 6 times)
map_result returned by crop_photo_return_map_crop : length : 6
About to insert : list_path_to_insert length 6 new photos from crops !
we have finished the crop for the class : autre_refus
begin to crop the class : carton_gris
param for this class : {'min_score': 0.5}
filter for class : carton_gris
hashtag_id of this class : 2107753020
begin to crop the class : cartonnette
param for this class : {'min_score': 0.5}
filter for class : cartonnette
hashtag_id of this class : 702398920
we have both polygon and rles, next one ! (printed 9 times)
map_result returned by crop_photo_return_map_crop : length : 9
About to insert : list_path_to_insert length 9 new photos from crops !
we have finished the crop for the class : cartonnette
begin to crop the class : carton_brun
param for this class : {'min_score': 0.7}
filter for class : carton_brun
hashtag_id of this class : 2107753024
begin to crop the class : plastique
param for this class : {'min_score': 0.5}
filter for class : plastique
hashtag_id of this class : 492725882
begin to crop the class : kraft
param for this class : {'min_score': 0.5}
filter for class : kraft
hashtag_id of this class : 493202403
begin to crop the class : metal
param for this class : {'min_score': 0.5}
filter for class : metal
hashtag_id of this class : 492628673
delete rles for these photos
Inside saveOutput : final : False verbose : 0
saveOutput not yet implemented for datou_step.type : crop_condition, we use saveGeneral
[1330284307, 1330284141, 1330284137, 1330284134, 1330284127, 1330284064, 1330284061, 1330284012, 1330284007, 1330284001, 1330283994, 1330283798, 1330283795, 1330283671, 1330283668, 1330283445, 1330283442, 1330283389, 1330283381, 1330283279]
Looping around the photos to save general results
len of output : 24
/-3659492166 Didn't retrieve data . (printed 3 times)
/-3659492182 Didn't retrieve data . (printed 3 times)
/-3659492186 Didn't retrieve data . (printed 3 times)
/-3659492190 Didn't retrieve data . (printed 3 times)
/-3659492194 Didn't retrieve data . (printed 3 times)
/-3659492208 Didn't retrieve data . (printed 3 times)
/-3659492216 Didn't retrieve data . (printed 3 times)
/-3659492223 Didn't retrieve data . (printed 3 times)
/-3659492222 Didn't retrieve data . (printed 3 times)
/-3659492183 Didn't retrieve data . (printed 3 times)
/-3659492185 Didn't retrieve data . (printed 3 times)
/-3659492200 Didn't retrieve data . (printed 3 times)
/-3659492211 Didn't retrieve data . (printed 3 times)
/-3659492221 Didn't retrieve data . (printed 3 times)
/-3659492226 Didn't retrieve data . (printed 3 times)
/-3659492173 Didn't retrieve data . (printed 3 times)
/-3659492179 Didn't retrieve data . (printed 3 times)
/-3659492177 Didn't retrieve data . (printed 3 times)
/-3659492195 Didn't retrieve data . (printed 3 times)
/-3659492196 Didn't retrieve data . (printed 3 times)
/-3659492198 Didn't retrieve data . (printed 3 times)
/-3659492199 Didn't retrieve data . (printed 3 times)
/-3659492204 Didn't retrieve data . (printed 3 times)
/-3659492220 Didn't retrieve data . (printed 3 times)
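The per-class crop loop above reads a parameter dict such as {'min_score': 0.7} and only crops detections that clear the class threshold. A hedged sketch of that filtering step; the function and field names are invented for illustration, and the real crop_photo_return_map_crop additionally handles polygon/RLE geometry, which is omitted here:

```python
# Sketch of per-class score filtering before cropping, under assumed
# detection and parameter shapes (not the pipeline's real data model).
def filter_detections(detections, params_by_class, default_min_score=0.5):
    """Return (photo_id, bbox) crop candidates whose score meets the
    per-class min_score threshold; unknown classes fall back to a default."""
    crops = []
    for det in detections:
        min_score = params_by_class.get(det["class"], {}).get(
            "min_score", default_min_score)
        if det["score"] >= min_score:
            crops.append((det["photo_id"], det["bbox"]))
    return crops

params = {"teint_dans_la_masse": {"min_score": 0.7},
          "autre_refus": {"min_score": 0.5}}
dets = [
    {"photo_id": 1, "class": "teint_dans_la_masse", "score": 0.85, "bbox": (0, 0, 10, 10)},
    {"photo_id": 2, "class": "teint_dans_la_masse", "score": 0.60, "bbox": (5, 5, 20, 20)},
    {"photo_id": 3, "class": "autre_refus", "score": 0.55, "bbox": (1, 1, 8, 8)},
]
print(filter_detections(dets, params))  # detection 2 dropped: 0.60 < 0.7
```

Classes with no surviving detections would simply produce no crops, matching the log's classes (carton_gris, plastique, kraft, metal) that go straight from "begin to crop" to the next class.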
before output type
Here is an output not treated by saveGeneral : (printed 3 times)
Managing all output in save_final without adding information in the mtr_datou_result
Each of the photo rows below was preceded in the log by the placeholder row ('3459', None, None, None, None, None, None, None, '2497679') :
('3459', '19767304', '1330284307', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284141', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284137', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284134', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284127', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284064', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284061', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284012', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284007', None, None, None, None, None, '2497679')
('3459', '19767304', '1330284001', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283994', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283798', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283795', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283671', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283668', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283445', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283442', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283389', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283381', None, None, None, None, None, '2497679')
('3459', '19767304', '1330283279', None, None, None, None, None, '2497679')
begin to insert list_values into mtr_datou_result : length of list_values in save_final : 92
time used for this insertion : 0.04498004913330078
save_final
save missing photos in datou_result :
time spent for datou_step_exec : 7.86372971534729
time spent to save output : 0.04640984535217285
total time spent for step 2 : 7.910139560699463
step3:thcl Fri Feb 7 10:43:35 2025
VR 17-11-17 : for now, only for linear exec dependency trees; some output goes to fill the input of the next step
VR 22-3-18 : now we test the dependency tree, but keep two separate code paths for datou_prepare_output_input until the code is correctly tested, cleaned, and working in both cases
VR 22-3-18 : but we use the first code path for the first step (id = -1), built in the code of datou_exec
VR 22-3-18 : we should manage the first-step case here instead of building this step before datou_exec
Currently we do not manage missing dependency information, which could perhaps be interpreted correctly with a default
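The save_final timing above (92 rows in about 0.045 s) is consistent with a single batched insert rather than one statement per row. A minimal sketch of that pattern, using sqlite3 in place of the production MySQL connection; the column names and the 4-column layout are assumptions, not the real mtr_datou_result schema:

```python
import sqlite3

# Stand-in for the production DB; the real pipeline uses MySQLdb.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mtr_datou_result (
    datou_id TEXT, portfolio_id TEXT, photo_id TEXT, datou_cur_id TEXT)""")

# Rows shaped like the log's tuples (hypothetical reduced schema).
list_values = [
    ("3459", None, None, "2497679"),
    ("3459", "19767304", "1330284307", "2497679"),
]

# executemany sends the whole batch in one call, which is why inserting
# 92 rows takes only a few hundredths of a second in the log.
conn.executemany("INSERT INTO mtr_datou_result VALUES (?, ?, ?, ?)", list_values)
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM mtr_datou_result").fetchone()[0]
print(count)  # 2
```

The same executemany call exists on MySQLdb cursors, so the sketch transfers directly to the production connection.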
behavior.
Some of the work done when a step executes could be done earlier, when the execution tree is built and the dependencies of the different steps are analysed.
complete output_args for input 0
VR 22-3-18 : for now we do not clean the datou structure correctly
Beginning of datou step Thcl !
number of thcls : 2
we are using the classification for multi_thcl [2456, 2868]
time to import caffe and check if the image exists : 0.004995822906494141 time to convert the images to numpy array : 2.6226043701171875e-06
time to import caffe and check if the image exists : 0.006676673889160156 time to convert the images to numpy array : 4.291534423828125e-06
time to import caffe and check if the image exists : 0.009877920150756836 time to convert the images to numpy array : 0.014987468719482422
time to import caffe and check if the image exists : 0.007546186447143555 time to convert the images to numpy array : 0.01876544952392578
time to import caffe and check if the image exists : 0.008120298385620117 time to convert the images to numpy array : 0.02230691909790039
time to import caffe and check if the image exists : 0.0074465274810791016 time to convert the images to numpy array : 0.02365899085998535
time to import caffe and check if the image exists : 0.009296655654907227 time to convert the images to numpy array : 0.025827884674072266
time to import caffe and check if the image exists : 0.00418853759765625 time to convert the images to numpy array : 0.0321652889251709
time to import caffe and check if the image exists : 0.00532221794128418 time to convert the images to numpy array : 0.03978228569030762
time to import caffe and check if the image exists : 0.008374214172363281 time to convert the images to numpy array : 0.04003024101257324
total time to convert the images to numpy array : 0.34540772438049316
list photo_ids error: []
list photo_ids correct : [-3659492190, -3659492194, -3659492208, -3659492216, -3659492223, -3659492222, -3659492173, -3659492179, -3659492177, -3659492195, -3659492196, -3659492198, -3659492199, -3659492204, -3659492220, -3659492211, -3659492221, -3659492226, -3659492166, -3659492182, -3659492186, -3659492183, -3659492185, -3659492200]
number of photos to process : 24
try to delete the incorrect photos in DB
tagging for thcl : 2456
To do loadFromThcl(), then load ParamDescType : thcl2456
thcls : [{'id': 2456, 'mtr_user_id': 31, 'name': 'learn_qualipapia_papier_refus_from_vlg_data_aug', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'papier,refus', 'svm_portfolios_learning': '3028087,3028251', 'photo_hashtag_type': 3049, 'photo_desc_type': 4999, 'type_classification': 'caffe', 'hashtag_id_list': '492668766,538914404'}]
thcl {'id': 2456, 'mtr_user_id': 31, 'name': 'learn_qualipapia_papier_refus_from_vlg_data_aug', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'papier,refus', 'svm_portfolios_learning': '3028087,3028251', 'photo_hashtag_type': 3049, 'photo_desc_type': 4999, 'type_classification': 'caffe', 'hashtag_id_list': '492668766,538914404'}
Update svm_hashtag_type_desc : 4999
FOUND : 1
Here is data_from_sql_as_vec to set the ParamDescriptorType : (4999, 'learn_qualipapia_papier_refus_from_vlg_data_aug', 16384, 25088, 'learn_qualipapia_papier_refus_from_vlg_data_aug', 'res5b', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2020, 10, 23, 14, 27, 22), datetime.datetime(2020, 10, 23, 14, 27, 22))
To loadFromThcl() : net_4999
begin to check gpu status
inside check gpu memory l 3637
free memory gpu now : 5113
max_wait_temp : 1 max_wait : 0
FOUND : 1
Here is data_from_sql_as_vec to set the ParamDescriptorType : (4999, 'learn_qualipapia_papier_refus_from_vlg_data_aug', 16384, 25088, 'learn_qualipapia_papier_refus_from_vlg_data_aug', 'res5b', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2020, 10, 23, 14, 27, 22), datetime.datetime(2020, 10, 23, 14, 27, 22))
None
mean_file_type : mean_file_path : prototxt_file_path : model :
learn_qualipapia_papier_refus_from_vlg_data_aug
Inside get_net
Inside get_net before cache_data_model : model_param file didn't exist
Inside get_net before CDM.load_model_par_type
model_name : learn_qualipapia_papier_refus_from_vlg_data_aug model_type : caffe
list of files needed : ['caffemodel', 'deploy_conv_normal.prototxt', 'deploy_fc.prototxt', 'deploy.prototxt', 'mean.npy', 'synset_words.txt']
files existing in s3 : ['caffemodel', 'deploy.prototxt', 'mean.npy', 'synset_words.txt']
files missing in s3 : ['deploy_conv_normal.prototxt', 'deploy_fc.prototxt']
local folder : /data/models_weight/learn_qualipapia_papier_refus_from_vlg_data_aug
/data/models_weight/learn_qualipapia_papier_refus_from_vlg_data_aug/caffemodel : size_local : 44972172, size in s3 : 44972172, create time local : 2021-08-09 05:55:48, create time in s3 : 2021-08-06 19:28:49 : caffemodel already exists and does not need updating
/data/models_weight/learn_qualipapia_papier_refus_from_vlg_data_aug/deploy.prototxt : size_local : 17311, size in s3 : 17311, create time local : 2021-08-09 05:55:48, create time in s3 : 2021-08-06 19:28:49 : deploy.prototxt already exists and does not need updating
/data/models_weight/learn_qualipapia_papier_refus_from_vlg_data_aug/mean.npy : size_local : 1572992, size in s3 : 1572992, create time local : 2021-08-09 05:55:48, create time in s3 : 2021-08-06 19:28:51 : mean.npy already exists and does not need updating
/data/models_weight/learn_qualipapia_papier_refus_from_vlg_data_aug/synset_words.txt : size_local : 57, size in s3 : 57, create time local : 2021-08-09 05:55:48, create time in s3 : 2021-08-06 19:28:49 : synset_words.txt already exists and does not need updating
Inside get_net after CDM.load_model_par_type
After if not only_with_local_cache:
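The "already exist and didn't need to update" lines above show the model cache being validated by comparing the local file's size (and creation time) against S3 before deciding to re-download. A simplified sketch of the size check only; the function name is hypothetical and the real code also compares timestamps:

```python
import os
import tempfile

def needs_update(local_path, s3_size):
    """Re-download only when the local copy is missing or its size differs
    from the size reported by S3. (Hedged sketch of the cache check in the
    log; the production code additionally compares creation times.)"""
    if not os.path.exists(local_path):
        return True
    return os.path.getsize(local_path) != s3_size

# Demo with a throwaway 5-byte file standing in for a cached model file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"12345")
print(needs_update(f.name, 5))  # False: sizes match, keep the cached copy
print(needs_update(f.name, 6))  # True: size mismatch, re-download
```

Size equality is a weak integrity check; comparing an ETag or checksum from S3 would catch same-size corruption, at the cost of reading the whole local file.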
/home/admin/workarea/install/darknet/:/home/admin/workarea/git/Velours/python:/home/admin/workarea/install/caffe_frcnn_python3/py-faster-rcnn/caffe-fast-rcnn/python:/home/admin/mtr/.credentials:/home/admin/workarea/install/caffe/python:/home/admin/workarea/install/caffe_frcnn/py-faster-rcnn/tools/:/home/admin/workarea/git/fotonowerpip/:/home/admin/workarea/install/segment-anything:/home/admin//workarea/git/pyfvs/:/home/admin/workarea/git/apy/
Here before set mode gpu
Doing nothing, but we could set mode gpu
after set mode gpu
prototxt_filename : /data/models_weight/learn_qualipapia_papier_refus_from_vlg_data_aug/deploy.prototxt
caffemodel_filename : /data/models_weight/learn_qualipapia_papier_refus_from_vlg_data_aug/caffemodel
now we set caffe to gpu mode
before predict
begin to check gpu status
inside check gpu memory
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0207 10:43:40.211019 3077394 syncedmem.cpp:71] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
Command terminated by signal 6
37.80user 22.49system 1:02.15elapsed 97%CPU (0avgtext+0avgdata 2928796maxresident)k
5840inputs+14928outputs (118major+1480075minor)pagefaults 0swaps
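The run above reports 5113 MB of free GPU memory, proceeds, and caffe still aborts with "out of memory" (cudaSuccess 2 vs. 0), so checking free memory alone was not a sufficient guard. A hedged sketch of a stricter check that also budgets a safety margin; the function name, the margin value, and the required_mb estimate are all assumptions, not values from the pipeline:

```python
def gpu_has_room(free_mb, required_mb, margin_mb=512):
    """Return True only if free GPU memory covers the model's estimated
    footprint plus a safety margin for allocator overhead and
    fragmentation. (Hypothetical guard; the real check in the log only
    looked at free memory and still hit an OOM inside caffe.)"""
    return free_mb >= required_mb + margin_mb

# With the 5113 MB reported in the log:
print(gpu_has_room(5113, 3000))  # True: enough headroom for a ~3 GB model
print(gpu_has_room(5113, 5000))  # False: would risk the OOM seen above
```

Estimating required_mb is the hard part in practice; a conservative approach is to record the peak usage of a previous run of the same model and add the margin to that.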