python /home/admin/mtr/script_for_cron.py -j datou_current3 -m 20 -a ' -a 3318 ' -s datou_3318 -M 0 -S 0 -U 95,95,120 import MySQLdb succeeded Import error (python version) ['/Users/moilerat/Documents/Fotonower/install/caffe/distribute/python', '/home/admin/workarea/git/Velours/python/prod', '/home/admin/workarea/install/caffe_cuda8_python3/python', '/home/admin/workarea/install/darknet', '/home/admin/workarea/git/Velours/python', '/home/admin/workarea/install/caffe_frcnn_python3/py-faster-rcnn/caffe-fast-rcnn/python', '/home/admin/mtr/.credentials', '/home/admin/workarea/install/caffe/python', '/home/admin/workarea/install/caffe_frcnn/py-faster-rcnn/tools', '/home/admin/workarea/git/fotonowerpip', '/home/admin/workarea/install/segment-anything', '/home/admin/workarea/git/pyfvs', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/admin/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages'] process id : 3459257 load datou : 3318 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! Unexpected type for variable list_input_json ERROR or WARNING : can't parse json string Expecting value: line 1 column 1 (char 0) Tried to parse : chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? [(photo_id, hashtag_id, hashtag_type, x0, x1, y0, y1, score, seg_temp, polygons), ...] was removed should we ? chemin de la photo was removed should we ? [ (photo_id_loc, hashtag_id, hashtag_type, x0, x1, y0, y1, score, None), ...] was removed should we ? chemin de la photo was removed should we ? id de la photo (peut être local ou global) was removed should we ? chemin de la photo was removed should we ? (x0, y0, x1, y1) was removed should we ? chemin de la photo was removed should we ? donnée sous forme de texte was removed should we ? [ (photo_id, photo_id_loc, hashtag_type, x0, x1, y0, y1, score), ...] was removed should we ? None was removed should we ? donnée sous forme de texte was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? id de la photo (peut être local ou global) was removed should we ? donnée sous forme de texte was removed should we ? donnée sous forme de texte was removed should we ? donnée sous forme de texte was removed should we ? chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? None was removed should we ? donnée sous forme de nombre was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? donnée sous forme de texte was removed should we ? None was removed should we ? donnée sous forme de texte was removed should we ? [ptf_id0,ptf_id1...] was removed should we ? FOUND : 1 Here is data_from_sql_as_vec to set the ParamDescriptorType : (5275, 'learn_RUBBIA_REFUS_AMIENS_23', 16384, 25088, 'learn_RUBBIA_REFUS_AMIENS_23', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2021, 4, 23, 14, 19, 39), datetime.datetime(2021, 4, 23, 14, 19, 39)) load thcls load THCL from format json or kwargs add thcl : 2847 in CacheModelConfig load pdts add pdt : 5275 in CacheModelConfig Running datou job : batch_current TODO datou_current to load to do maybe to take outside batchDatouExec updating current state to 1 list_input_json: [] Current got : datou_id : 3318, datou_cur_ids : ['3371006'] with mtr_portfolio_ids : ['25401991'] and first list_photo_ids : [] new path : /proc/3459257/ Inside batchDatouExec : verbose : 0 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! List Step Type Loaded in datou : mask_detect, crop_condition, rle_unique_nms_with_priority, ventilate_hashtags_in_portfolio, final, blur_detection, brightness, velours_tree, send_mail_cod, split_time_score over limit max, limiting to limit_max 40 list_input_json : [] origin We have 1 , BFBFBFBFBFwe have missing 0 photos in the step downloads : photo missing : [] try to delete the photos missing in DB length of list_filenames : 5 ; length of list_pids : 5 ; length of list_args : 5 time to download the photos : 0.8222455978393555 About to test input to load we should then remove the video here, and this would fix the bug of datou_current ! Calling datou_exec Inside datou_exec : verbose : 0 number of steps : 10 step1:mask_detect Mon Jul 28 12:40:31 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Beginning of datou step mask_detect ! save_polygon : True begin detect begin to check gpu status inside check gpu memory l 3637 free memory gpu now : 3152 max_wait_temp : 1 max_wait : 0 gpu_flag : 0 2025-07-28 12:40:34.688821: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2025-07-28 12:40:34.723335: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3493035000 Hz 2025-07-28 12:40:34.725307: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f6a58000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2025-07-28 12:40:34.725354: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2025-07-28 12:40:34.729122: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2025-07-28 12:40:34.857884: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3cd2a170 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2025-07-28 12:40:34.857955: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5 2025-07-28 12:40:34.858846: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-07-28 12:40:34.859260: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-07-28 12:40:34.865602: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-07-28 12:40:34.868074: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-07-28 12:40:34.869250: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-07-28 12:40:34.872282: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-07-28 12:40:34.873524: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-07-28 12:40:34.878302: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-07-28 12:40:34.879580: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-07-28 12:40:34.879701: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-07-28 12:40:34.880294: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2025-07-28 12:40:34.880311: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 2025-07-28 12:40:34.880319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N 2025-07-28 12:40:34.881266: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2701 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5) WARNING:tensorflow:From /home/admin/workarea/git/Velours/python/mtr/mask_rcnn/mask_detection.py:69: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead. 2025-07-28 12:40:35.171865: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-07-28 12:40:35.171998: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-07-28 12:40:35.172017: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-07-28 12:40:35.172033: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-07-28 12:40:35.172048: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-07-28 12:40:35.172061: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-07-28 12:40:35.172075: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-07-28 12:40:35.172089: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-07-28 12:40:35.172922: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-07-28 12:40:35.174325: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-07-28 12:40:35.174401: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-07-28 12:40:35.174418: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-07-28 12:40:35.174432: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-07-28 12:40:35.174446: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-07-28 12:40:35.174459: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-07-28 12:40:35.174473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-07-28 12:40:35.174487: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-07-28 12:40:35.175308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-07-28 12:40:35.175355: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2025-07-28 12:40:35.175364: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 2025-07-28 12:40:35.175372: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N 2025-07-28 12:40:35.176233: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2701 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5) Using TensorFlow backend. WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:396: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version. Instructions for updating: box_ind is deprecated, use box_indices instead WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:703: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:729: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. Inside mask_sub_process Inside mask_detect About to load cache.load_thcl_param To do loadFromThcl(), then load ParamDescType : thcl2847 thcls : [{'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}] thcl {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'} Update svm_hashtag_type_desc : 5275 FOUND : 1 Here is data_from_sql_as_vec to set the ParamDescriptorType : (5275, 'learn_RUBBIA_REFUS_AMIENS_23', 16384, 25088, 'learn_RUBBIA_REFUS_AMIENS_23', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2021, 4, 23, 14, 19, 39), datetime.datetime(2021, 4, 23, 14, 19, 39)) {'thcl': {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}, 'list_hashtags': ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'], 'list_hashtags_csv': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'svm_hashtag_type_desc': 5275, 'photo_desc_type': 5275, 'pb_hashtag_id_or_classifier': 0} list_class_names : ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'] Configurations: BACKBONE resnet101 BACKBONE_SHAPES [[160 160] [ 80 80] [ 40 40] [ 20 20] [ 10 10]] BACKBONE_STRIDES [4, 8, 16, 32, 64] BATCH_SIZE 1 BBOX_STD_DEV [0.1 0.1 0.2 0.2] DETECTION_MAX_INSTANCES 100 DETECTION_MIN_CONFIDENCE 0.3 DETECTION_NMS_THRESHOLD 0.3 GPU_COUNT 1 IMAGES_PER_GPU 1 IMAGE_MAX_DIM 640 IMAGE_MIN_DIM 640 IMAGE_PADDING True IMAGE_SHAPE [640 640 3] LEARNING_MOMENTUM 0.9 LEARNING_RATE 0.001 LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0} MASK_POOL_SIZE 14 MASK_SHAPE [28, 28] MAX_GT_INSTANCES 100 MEAN_PIXEL [123.7 116.8 103.9] MINI_MASK_SHAPE (56, 56) NAME learn_RUBBIA_REFUS_AMIENS_23 NUM_CLASSES 9 POOL_SIZE 7 POST_NMS_ROIS_INFERENCE 1000 POST_NMS_ROIS_TRAINING 2000 ROI_POSITIVE_RATIO 0.33 RPN_ANCHOR_RATIOS [0.5, 1, 2] RPN_ANCHOR_SCALES (16, 32, 64, 128, 256) RPN_ANCHOR_STRIDE 1 RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2] RPN_NMS_THRESHOLD 0.7 RPN_TRAIN_ANCHORS_PER_IMAGE 256 STEPS_PER_EPOCH 1000 TRAIN_ROIS_PER_IMAGE 200 USE_MINI_MASK True USE_RPN_ROIS True VALIDATION_STEPS 50 WEIGHT_DECAY 0.0001 model_param file didn't exist model_name : learn_RUBBIA_REFUS_AMIENS_23 model_type : mask_rcnn list file need : ['mask_model.h5'] file exist in s3 : ['mask_model.h5'] file manque in s3 : [] 2025-07-28 12:40:45.077081: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-07-28 12:40:45.281999: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-07-28 12:40:46.710164: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.710240: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.710887: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.710904: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.719379: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.719399: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.719928: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.719943: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.728853: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.728874: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 466.56MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.729406: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.729421: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 466.56MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.770355: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.770380: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.770969: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.770984: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.06GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.777517: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.777540: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 243.25MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.778113: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.778128: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 243.25MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-28 12:40:46.822917: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.823519: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.827300: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.827881: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.885132: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.885678: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.922645: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.923248: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.924780: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.925358: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.931802: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.932382: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.934622: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.935175: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.942966: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.943524: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.945046: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.945581: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.981836: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.982422: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.982975: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.983537: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.987105: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:46.987657: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.007667: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.008251: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.008825: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.009399: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.023997: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.024579: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.025249: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.025828: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.030295: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.030838: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.038002: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.038580: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.051867: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.052461: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.057381: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.057959: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.058533: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.059082: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.059855: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.060436: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.072695: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.073242: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.073786: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.074355: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.074928: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.075518: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.092661: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.093246: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.123314: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.123367: W tensorflow/core/kernels/gpu_utils.cc:49] Failed to allocate memory for convolution redzone checking; skipping this check. This is benign and only means that we won't check cudnn for out-of-bounds reads and writes. This message will only be printed once. 2025-07-28 12:40:47.123962: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.124549: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.134026: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.135040: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.144500: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.145108: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.164988: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.165760: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.166491: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.167354: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.172346: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.172943: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.173518: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.174091: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.175150: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.187751: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.188348: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.198400: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.198999: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.199601: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.200176: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.200757: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-28 12:40:47.201328: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.14G (2296512512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory local folder : /data/models_weight/learn_RUBBIA_REFUS_AMIENS_23 /data/models_weight/learn_RUBBIA_REFUS_AMIENS_23/mask_model.h5 size_local : 256009536 size in s3 : 256009536 create time local : 2021-08-09 09:43:22 create time in s3 : 2021-08-06 18:54:04 mask_model.h5 already exist and didn't need to update list_images length : 5 NEW PHOTO Processing 1 images image shape: (2160, 3840, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 3840.00000 nb d'objets trouves : 40 NEW PHOTO Processing 1 images image shape: (2160, 3840, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 3840.00000 nb d'objets trouves : 48 NEW PHOTO Processing 1 images image shape: (2160, 3840, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 3840.00000 nb d'objets trouves : 39 NEW PHOTO Processing 1 images image shape: (2160, 3840, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 3840.00000 nb d'objets trouves : 24 NEW PHOTO Processing 1 images image shape: (2160, 3840, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 3840.00000 nb d'objets trouves : 16 Detection mask done ! Trying to reset tf kernel 3460104 begin to check gpu status inside check gpu memory l 3610 free memory gpu now : 919 tf kernel not reseted sub process len(results) : 5 len(list_Values) 0 None max_time_sub_proc : 3600 parent process len(results) : 5 len(list_Values) 0 process is alive finish correctly or not : True after detect begin to check gpu status inside check gpu memory l 3610 free memory gpu now : 2112 list_Values should be empty [] To do loadFromThcl(), then load ParamDescType : thcl2847 Catched exception ! Connect or reconnect ! thcls : [{'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}] thcl {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'} Update svm_hashtag_type_desc : 5275 ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'] time for calcul the mask position with numpy : 0.0014333724975585938 nb_pixel_total : 25804 time to create 1 rle with old method : 0.03265023231506348 length of segment : 218 time for calcul the mask position with numpy : 0.0010340213775634766 nb_pixel_total : 26627 time to create 1 rle with old method : 0.03370857238769531 length of segment : 317 time for calcul the mask position with numpy : 0.0003712177276611328 nb_pixel_total : 16766 time to create 1 rle with old method : 0.01992058753967285 length of segment : 143 time for calcul the mask position with numpy : 0.0028181076049804688 nb_pixel_total : 105804 time to create 1 rle with old method : 0.12560033798217773 length of segment : 605 time for calcul the mask position with numpy : 0.00043702125549316406 nb_pixel_total : 20914 time to create 1 rle with old method : 0.024584054946899414 length of segment : 192 time for calcul the mask position with numpy : 0.0004515647888183594 nb_pixel_total : 17577 time to create 1 rle with old method : 0.021025896072387695 length of segment : 209 time for calcul the mask position with numpy : 0.000972747802734375 nb_pixel_total : 53289 time to create 1 rle with old method : 0.062303781509399414 length of segment : 272 time for calcul the mask position with numpy : 0.0009043216705322266 nb_pixel_total : 40968 time to create 1 rle with old method : 0.04897356033325195 length of segment : 201 time for calcul the mask position with numpy : 0.0042035579681396484 nb_pixel_total : 124048 time to create 1 rle with old method : 0.14980626106262207 length of segment : 539 time for calcul the mask position with numpy : 0.0009899139404296875 nb_pixel_total : 43795 time to create 1 rle with old method : 0.052324533462524414 length of segment : 299 time for calcul the mask position with numpy : 0.0015981197357177734 nb_pixel_total : 74483 time to create 1 rle with old method : 0.08744311332702637 length of segment : 305 time for calcul the mask position with numpy : 0.0021169185638427734 nb_pixel_total : 112519 time to create 1 rle with old method : 0.1326611042022705 length of segment : 467 time for calcul the mask position with numpy : 0.0014007091522216797 nb_pixel_total : 70809 time to create 1 rle with old method : 0.08347702026367188 length of segment : 593 time for calcul the mask position with numpy : 0.0003483295440673828 nb_pixel_total : 13990 time to create 1 rle with old method : 0.018022537231445312 length of segment : 202 time for calcul the mask position with numpy : 0.0009963512420654297 nb_pixel_total : 48201 time to create 1 rle with old method : 0.05706191062927246 length of segment : 269 time for calcul the mask position with numpy : 0.0005471706390380859 nb_pixel_total : 12092 time to create 1 rle with old method : 0.014812231063842773 length of segment : 253 time for calcul the mask position with numpy : 0.0029778480529785156 nb_pixel_total : 135784 time to create 1 rle with old method : 0.25199317932128906 length of segment : 824 time for calcul the mask position with numpy : 0.00037670135498046875 nb_pixel_total : 11483 time to create 1 rle with old method : 0.017501115798950195 length of segment : 161 time for calcul the mask position with numpy : 0.0010573863983154297 nb_pixel_total : 61885 time to create 1 rle with old method : 0.0727243423461914 length of segment : 301 time for calcul the mask position with numpy : 0.0005249977111816406 nb_pixel_total : 20444 time to create 1 rle with old method : 0.03399229049682617 length of segment : 146 time for calcul the mask position with numpy : 0.0005152225494384766 nb_pixel_total : 17706 time to create 1 rle with old method : 0.02153158187866211 length of segment : 184 time for calcul the mask position with numpy : 0.001577615737915039 nb_pixel_total : 63434 time to create 1 rle with old method : 0.07536745071411133 length of segment : 350 time for calcul the mask position with numpy : 0.0005974769592285156 nb_pixel_total : 26747 time to create 1 rle with old method : 0.03161048889160156 length of segment : 101 time for calcul the mask position with numpy : 0.00040721893310546875 nb_pixel_total : 10703 time to create 1 rle with old method : 0.013091564178466797 length of segment : 177 time for calcul the mask position with numpy : 0.0006871223449707031 nb_pixel_total : 30766 time to create 1 rle with old method : 0.03596901893615723 length of segment : 390 time for calcul the mask position with numpy : 0.0007948875427246094 nb_pixel_total : 29495 time to create 1 rle with old method : 0.035350799560546875 length of segment : 242 time for calcul the mask position with numpy : 0.001047372817993164 nb_pixel_total : 49651 time to create 1 rle with old method : 0.09695553779602051 length of segment : 314 time for calcul the mask position with numpy : 0.004290580749511719 nb_pixel_total : 160767 time to create 1 rle with new method : 0.012601613998413086 length of segment : 675 time for calcul the mask position with numpy : 0.0008106231689453125 nb_pixel_total : 35706 time to create 1 rle with old method : 0.041841745376586914 length of segment : 231 time for calcul the mask position with numpy : 0.002253293991088867 nb_pixel_total : 96845 time to create 1 rle with old method : 0.11422419548034668 length of segment : 544 time for calcul the mask position with numpy : 0.002638578414916992 nb_pixel_total : 116283 time to create 1 rle with old method : 0.13393044471740723 length of segment : 364 time for calcul the mask position with numpy : 0.0011942386627197266 nb_pixel_total : 41199 time to create 1 rle with old method : 0.04884934425354004 length of segment : 421 time for calcul the mask position with numpy : 0.005320072174072266 nb_pixel_total : 206361 time to create 1 rle with new method : 0.013579845428466797 length of segment : 708 time for calcul the mask position with numpy : 0.0003345012664794922 nb_pixel_total : 7085 time to create 1 rle with old method : 0.00898122787475586 length of segment : 108 time for calcul the mask position with numpy : 0.0004420280456542969 nb_pixel_total : 22140 time to create 1 rle with old method : 0.02635335922241211 length of segment : 182 time for calcul the mask position with numpy : 0.00044989585876464844 nb_pixel_total : 14361 time to create 1 rle with old method : 0.017499923706054688 length of segment : 158 time for calcul the mask position with numpy : 0.0004918575286865234 nb_pixel_total : 15328 time to create 1 rle with old method : 0.02273249626159668 length of segment : 161 time for calcul the mask position with numpy : 0.0014352798461914062 nb_pixel_total : 50679 time to create 1 rle with old method : 0.05934405326843262 length of segment : 351 time for calcul the mask position with numpy : 0.0004189014434814453 nb_pixel_total : 11914 time to create 1 rle with old method : 0.014456748962402344 length of segment : 160 time for calcul the mask position with numpy : 0.000621795654296875 nb_pixel_total : 27587 time to create 1 rle with old method : 0.032892704010009766 length of segment : 306 time for calcul the mask position with numpy : 0.0012247562408447266 nb_pixel_total : 62047 time to create 1 rle with old method : 0.07267498970031738 length of segment : 278 time for calcul the mask position with numpy : 0.0004570484161376953 nb_pixel_total : 19225 time to create 1 rle with old method : 0.023168087005615234 length of segment : 153 time for calcul the mask position with numpy : 0.0012862682342529297 nb_pixel_total : 53563 time to create 1 rle with old method : 0.06294918060302734 length of segment : 227 time for calcul the mask position with numpy : 0.0009515285491943359 nb_pixel_total : 43689 time to create 1 rle with old method : 0.0503840446472168 length of segment : 369 time for calcul the mask position with numpy : 0.0005214214324951172 nb_pixel_total : 14166 time to create 1 rle with old method : 0.02197718620300293 length of segment : 115 time for calcul the mask position with numpy : 0.0016970634460449219 nb_pixel_total : 96611 time to create 1 rle with old method : 0.12065505981445312 length of segment : 372 time for calcul the mask position with numpy : 0.0008318424224853516 nb_pixel_total : 42646 time to create 1 rle with old method : 0.05227494239807129 length of segment : 207 time for calcul the mask position with numpy : 0.0018148422241210938 nb_pixel_total : 84471 time to create 1 rle with old method : 0.10239720344543457 length of segment : 369 time for calcul the mask position with numpy : 0.0028595924377441406 nb_pixel_total : 43590 time to create 1 rle with old method : 0.051155805587768555 length of segment : 517 time for calcul the mask position with numpy : 0.0018529891967773438 nb_pixel_total : 27836 time to create 1 rle with old method : 0.0373988151550293 length of segment : 254 time for calcul the mask position with numpy : 0.0018315315246582031 nb_pixel_total : 40394 time to create 1 rle with old method : 0.048555612564086914 length of segment : 255 time for calcul the mask position with numpy : 0.0003707408905029297 nb_pixel_total : 7655 time to create 1 rle with old method : 0.009270429611206055 length of segment : 106 time for calcul the mask position with numpy : 0.001550436019897461 nb_pixel_total : 29813 time to create 1 rle with old method : 0.035056114196777344 length of segment : 294 time for calcul the mask position with numpy : 0.003580808639526367 nb_pixel_total : 90325 time to create 1 rle with old method : 0.10440707206726074 length of segment : 596 time for calcul the mask position with numpy : 0.0008156299591064453 nb_pixel_total : 14151 time to create 1 rle with old method : 0.016988515853881836 length of segment : 184 time for calcul the mask position with numpy : 0.0026051998138427734 nb_pixel_total : 71840 time to create 1 rle with old method : 0.08184194564819336 length of segment : 297 time for calcul the mask position with numpy : 0.0014348030090332031 nb_pixel_total : 43037 time to create 1 rle with old method : 0.055342674255371094 length of segment : 216 time for calcul the mask position with numpy : 0.007957220077514648 nb_pixel_total : 202268 time to create 1 rle with new method : 0.01633906364440918 length of segment : 486 time for calcul the mask position with numpy : 0.002650737762451172 nb_pixel_total : 49857 time to create 1 rle with old method : 0.0623171329498291 length of segment : 227 time for calcul the mask position with numpy : 0.0012373924255371094 nb_pixel_total : 26233 time to create 1 rle with old method : 0.03443455696105957 length of segment : 211 time for calcul the mask position with numpy : 0.004070758819580078 nb_pixel_total : 92300 time to create 1 rle with old method : 0.10756874084472656 length of segment : 416 time for calcul the mask position with numpy : 0.0021855831146240234 nb_pixel_total : 47842 time to create 1 rle with old method : 0.05651235580444336 length of segment : 267 time for calcul the mask position with numpy : 0.0018916130065917969 nb_pixel_total : 28491 time to create 1 rle with old method : 0.03418540954589844 length of segment : 264 time for calcul the mask position with numpy : 0.0025529861450195312 nb_pixel_total : 30592 time to create 1 rle with old method : 0.036908864974975586 length of segment : 207 time for calcul the mask position with numpy : 0.002116680145263672 nb_pixel_total : 40450 time to create 1 rle with old method : 0.048628807067871094 length of segment : 374 time for calcul the mask position with numpy : 0.002164602279663086 nb_pixel_total : 71119 time to create 1 rle with old method : 0.08247137069702148 length of segment : 434 time for calcul the mask position with numpy : 0.001432657241821289 nb_pixel_total : 43042 time to create 1 rle with old method : 0.05110430717468262 length of segment : 250 time for calcul the mask position with numpy : 0.001291036605834961 nb_pixel_total : 28079 time to create 1 rle with old method : 0.03397870063781738 length of segment : 298 time for calcul the mask position with numpy : 0.0009059906005859375 nb_pixel_total : 23382 time to create 1 rle with old method : 0.027705669403076172 length of segment : 239 time for calcul the mask position with numpy : 0.0015361309051513672 nb_pixel_total : 48379 time to create 1 rle with old method : 0.06615805625915527 length of segment : 274 time for calcul the mask position with numpy : 0.0007355213165283203 nb_pixel_total : 18518 time to create 1 rle with old method : 0.02319025993347168 length of segment : 181 time for calcul the mask position with numpy : 0.0007016658782958984 nb_pixel_total : 14037 time to create 1 rle with old method : 0.017404556274414062 length of segment : 176 time for calcul the mask position with numpy : 0.0009143352508544922 nb_pixel_total : 27194 time to create 1 rle with old method : 0.03317713737487793 length of segment : 259 time for calcul the mask position with numpy : 0.0012211799621582031 nb_pixel_total : 37996 time to create 1 rle with old method : 0.04501962661743164 length of segment : 361 time for calcul the mask position with numpy : 0.0005178451538085938 nb_pixel_total : 6959 time to create 1 rle with old method : 0.00855112075805664 length of segment : 99 time for calcul the mask position with numpy : 0.0019335746765136719 nb_pixel_total : 35141 time to create 1 rle with old method : 0.04188179969787598 length of segment : 301 time for calcul the mask position with numpy : 0.0009007453918457031 nb_pixel_total : 30328 time to create 1 rle with old method : 0.036124229431152344 length of segment : 185 time for calcul the mask position with numpy : 0.0042572021484375 nb_pixel_total : 85019 time to create 1 rle with old method : 0.10164570808410645 length of segment : 363 time for calcul the mask position with numpy : 0.0010836124420166016 nb_pixel_total : 27326 time to create 1 rle with old method : 0.0339505672454834 length of segment : 290 time for calcul the mask position with numpy : 0.0007932186126708984 nb_pixel_total : 19059 time to create 1 rle with old method : 0.023243427276611328 length of segment : 152 time for calcul the mask position with numpy : 0.0010132789611816406 nb_pixel_total : 69579 time to create 1 rle with old method : 0.09977889060974121 length of segment : 303 time for calcul the mask position with numpy : 0.00043201446533203125 nb_pixel_total : 23693 time to create 1 rle with old method : 0.02943587303161621 length of segment : 146 time for calcul the mask position with numpy : 0.002271413803100586 nb_pixel_total : 141109 time to create 1 rle with old method : 0.16958189010620117 length of segment : 458 time for calcul the mask position with numpy : 0.0026781558990478516 nb_pixel_total : 126738 time to create 1 rle with old method : 0.15292620658874512 length of segment : 422 time for calcul the mask position with numpy : 0.00018835067749023438 nb_pixel_total : 7624 time to create 1 rle with old method : 0.009574174880981445 length of segment : 67 time for calcul the mask position with numpy : 0.0005125999450683594 nb_pixel_total : 34991 time to create 1 rle with old method : 0.04114866256713867 length of segment : 261 time for calcul the mask position with numpy : 0.0008170604705810547 nb_pixel_total : 32477 time to create 1 rle with old method : 0.03841543197631836 length of segment : 159 time for calcul the mask position with numpy : 0.002215862274169922 nb_pixel_total : 137882 time to create 1 rle with old method : 0.15823793411254883 length of segment : 649 time spent for convertir_results : 10.142071008682251 Inside saveOutput : final : False verbose : 0 eke 12-6-18 : saveMask need to be cleaned for new output ! Number saved : None batch 1 Loaded 88 chid ids of type : 3594 Number RLEs to save : 26231 save missing photos in datou_result : time spend for datou_step_exec : 63.31106948852539 time spend to save output : 1.6537470817565918 total time spend for step 1 : 64.96481657028198 step2:crop_condition Mon Jul 28 12:41:36 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure Loading chi in step crop with photo_hashtag_type : 3594 Loading chi in step crop for list_pids : 5 ! batch 1 Loaded 88 chid ids of type : 3594 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ begin to crop the class : papier param for this class : {'min_score': 0.7} filtre for class : papier hashtag_id of this class : 492668766 we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! map_result returned by crop_photo_return_map_crop : length : 76 About to insert : list_path_to_insert length 76 new photo from crops ! About to upload 76 photos upload in portfolio : 3736932 init cache_photo without model_param we have 76 photo to upload uploaded to storage server : ovh folder_temporaire : temp/1753699318_3459257 batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! we have uploaded 76 photos in the portfolio 3736932 time of upload the photos Elapsed time : 19.175453424453735 we have finished the crop for the class : papier begin to crop the class : carton param for this class : {'min_score': 0.7} filtre for class : carton hashtag_id of this class : 492774966 we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! map_result returned by crop_photo_return_map_crop : length : 7 About to insert : list_path_to_insert length 7 new photo from crops ! About to upload 7 photos upload in portfolio : 3736932 init cache_photo without model_param we have 7 photo to upload uploaded to storage server : ovh folder_temporaire : temp/1753699340_3459257 batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! we have uploaded 7 photos in the portfolio 3736932 time of upload the photos Elapsed time : 2.1611673831939697 we have finished the crop for the class : carton begin to crop the class : metal param for this class : {'min_score': 0.7} filtre for class : metal hashtag_id of this class : 492628673 we have both polygon and rles Next one ! map_result returned by crop_photo_return_map_crop : length : 1 About to insert : list_path_to_insert length 1 new photo from crops ! About to upload 1 photos upload in portfolio : 3736932 init cache_photo without model_param we have 1 photo to upload uploaded to storage server : ovh folder_temporaire : temp/1753699343_3459257 batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! we have uploaded 1 photos in the portfolio 3736932 time of upload the photos Elapsed time : 0.6318380832672119 we have finished the crop for the class : metal begin to crop the class : pet_clair param for this class : {'min_score': 0.7} filtre for class : pet_clair hashtag_id of this class : 2107755846 we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! map_result returned by crop_photo_return_map_crop : length : 4 About to insert : list_path_to_insert length 4 new photo from crops ! About to upload 4 photos upload in portfolio : 3736932 init cache_photo without model_param we have 4 photo to upload uploaded to storage server : ovh folder_temporaire : temp/1753699347_3459257 batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! we have uploaded 4 photos in the portfolio 3736932 time of upload the photos Elapsed time : 1.2856993675231934 we have finished the crop for the class : pet_clair begin to crop the class : autre param for this class : {'min_score': 0.7} filtre for class : autre hashtag_id of this class : 494826614 begin to crop the class : pehd param for this class : {'min_score': 0.7} filtre for class : pehd hashtag_id of this class : 628944319 begin to crop the class : pet_fonce param for this class : {'min_score': 0.7} filtre for class : pet_fonce hashtag_id of this class : 2107755900 delete rles from all chi we have 0 chi objets contains the rles we have 0 chi objets contains the rles we have 0 chi objets contains the rles we have 0 chi objets contains the rles we have 0 chi objets contains the rles Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : crop_condition we use saveGeneral [1374134819, 1374134787, 1374134577, 1374134544, 1374134513] Looping around the photos to save general results len do output : 88 /1374143266Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143267Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143269Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143270Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143271Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143273Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143274Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143275Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143277Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143278Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143279Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143280Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143281Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143283Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143284Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143285Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143286Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143287Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143288Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143289Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143290Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143291Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143292Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143293Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143294Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143295Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143296Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143297Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143298Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143299Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143300Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143301Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143302Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143303Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143304Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143306Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143307Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143308Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143309Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143310Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143311Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143312Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143313Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143314Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143315Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143317Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143318Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143319Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143320Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143321Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143322Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143323Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143324Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143325Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143326Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143328Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143329Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143330Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143332Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143334Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143335Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143337Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143339Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143340Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143342Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143343Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143344Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143346Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143347Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143348Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143350Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143351Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143352Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143354Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143355Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143356Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143369Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143371Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143373Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143375Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143377Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143379Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143381Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143411Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143439Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143440Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143441Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374143442Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . before output type Here is an output not treated by saveGeneral : Here is an output not treated by saveGeneral : Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134819', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134787', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134577', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134544', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134513', None, None, None, None, None, '3371006') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 269 time used for this insertion : 0.04478788375854492 save_final save missing photos in datou_result : time spend for datou_step_exec : 51.200485706329346 time spend to save output : 0.04683327674865723 total time spend for step 2 : 51.247318983078 step3:rle_unique_nms_with_priority Mon Jul 28 12:42:28 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array VR 22-3-18 : For now we do not clean correctly the datou structure Begin step rle-unique-nms batch 1 Loaded 88 chid ids of type : 3594 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++nb_obj : 18 nb_hashtags : 3 time to prepare the origin masks : 7.827093839645386 time for calcul the mask position with numpy : 0.8669900894165039 nb_pixel_total : 7342390 time to create 1 rle with new method : 0.637683629989624 time for calcul the mask position with numpy : 0.027016878128051758 nb_pixel_total : 11483 time to create 1 rle with old method : 0.013443946838378906 time for calcul the mask position with numpy : 0.03144717216491699 nb_pixel_total : 135784 time to create 1 rle with old method : 0.15777587890625 time for calcul the mask position with numpy : 0.02765965461730957 nb_pixel_total : 12092 time to create 1 rle with old method : 0.016378164291381836 time for calcul the mask position with numpy : 0.030780315399169922 nb_pixel_total : 48201 time to create 1 rle with old method : 0.05625462532043457 time for calcul the mask position with numpy : 0.02701592445373535 nb_pixel_total : 13990 time to create 1 rle with old method : 0.01635122299194336 time for calcul the mask position with numpy : 0.027252674102783203 nb_pixel_total : 70809 time to create 1 rle with old method : 0.08214569091796875 time for calcul the mask position with numpy : 0.027976036071777344 nb_pixel_total : 112519 time to create 1 rle with old method : 0.12897634506225586 time for calcul the mask position with numpy : 0.026993989944458008 nb_pixel_total : 74483 time to create 1 rle with old method : 0.08589959144592285 time for calcul the mask position with numpy : 0.026531457901000977 nb_pixel_total : 43795 time to create 1 rle with old method : 0.052266597747802734 time for calcul the mask position with numpy : 0.028538942337036133 nb_pixel_total : 124048 time to create 1 rle with old method : 0.1462569236755371 time for calcul the mask position with numpy : 0.029391050338745117 nb_pixel_total : 40968 time to create 1 rle with old method : 0.04787921905517578 time for calcul the mask position with numpy : 0.029260873794555664 nb_pixel_total : 53289 time to create 1 rle with old method : 0.06197857856750488 time for calcul the mask position with numpy : 0.027834177017211914 nb_pixel_total : 17577 time to create 1 rle with old method : 0.02433609962463379 time for calcul the mask position with numpy : 0.02732396125793457 nb_pixel_total : 20914 time to create 1 rle with old method : 0.024461984634399414 time for calcul the mask position with numpy : 0.028119564056396484 nb_pixel_total : 102861 time to create 1 rle with old method : 0.12214517593383789 time for calcul the mask position with numpy : 0.03306889533996582 nb_pixel_total : 16766 time to create 1 rle with old method : 0.04467487335205078 time for calcul the mask position with numpy : 0.030597209930419922 nb_pixel_total : 26627 time to create 1 rle with old method : 0.03166937828063965 time for calcul the mask position with numpy : 0.033728599548339844 nb_pixel_total : 25804 time to create 1 rle with old method : 0.030798912048339844 create new chi : 3.217074155807495 time to delete rle : 0.028060436248779297 batch 1 Loaded 37 chid ids of type : 3594 +++++++++++++++++++++++++++++++Number RLEs to save : 14034 TO DO : save crop sub photo not yet done ! save time : 0.8971831798553467 nb_obj : 30 nb_hashtags : 3 time to prepare the origin masks : 4.977208375930786 time for calcul the mask position with numpy : 0.6198132038116455 nb_pixel_total : 6763993 time to create 1 rle with new method : 0.6003260612487793 time for calcul the mask position with numpy : 0.03515267372131348 nb_pixel_total : 11914 time to create 1 rle with old method : 0.013841629028320312 time for calcul the mask position with numpy : 0.03511691093444824 nb_pixel_total : 61885 time to create 1 rle with old method : 0.07289290428161621 time for calcul the mask position with numpy : 0.03517270088195801 nb_pixel_total : 63434 time to create 1 rle with old method : 0.07322287559509277 time for calcul the mask position with numpy : 0.035204172134399414 nb_pixel_total : 30766 time to create 1 rle with old method : 0.03550004959106445 time for calcul the mask position with numpy : 0.03474831581115723 nb_pixel_total : 42646 time to create 1 rle with old method : 0.05020570755004883 time for calcul the mask position with numpy : 0.0359797477722168 nb_pixel_total : 160767 time to create 1 rle with new method : 0.8700945377349854 time for calcul the mask position with numpy : 0.035439252853393555 nb_pixel_total : 22140 time to create 1 rle with old method : 0.025776386260986328 time for calcul the mask position with numpy : 0.03527998924255371 nb_pixel_total : 35706 time to create 1 rle with old method : 0.04165196418762207 time for calcul the mask position with numpy : 0.03481554985046387 nb_pixel_total : 19225 time to create 1 rle with old method : 0.022340774536132812 time for calcul the mask position with numpy : 0.035271644592285156 nb_pixel_total : 96845 time to create 1 rle with old method : 0.11243414878845215 time for calcul the mask position with numpy : 0.038478851318359375 nb_pixel_total : 62047 time to create 1 rle with old method : 0.07258915901184082 time for calcul the mask position with numpy : 0.03514432907104492 nb_pixel_total : 84471 time to create 1 rle with old method : 0.09760832786560059 time for calcul the mask position with numpy : 0.03530287742614746 nb_pixel_total : 116283 time to create 1 rle with old method : 0.15807461738586426 time for calcul the mask position with numpy : 0.035260915756225586 nb_pixel_total : 96611 time to create 1 rle with old method : 0.11253166198730469 time for calcul the mask position with numpy : 0.036989450454711914 nb_pixel_total : 11264 time to create 1 rle with old method : 0.013232231140136719 time for calcul the mask position with numpy : 0.0346989631652832 nb_pixel_total : 29495 time to create 1 rle with old method : 0.03447699546813965 time for calcul the mask position with numpy : 0.03656363487243652 nb_pixel_total : 206361 time to create 1 rle with new method : 0.5573370456695557 time for calcul the mask position with numpy : 0.0346829891204834 nb_pixel_total : 17706 time to create 1 rle with old method : 0.0205228328704834 time for calcul the mask position with numpy : 0.03513216972351074 nb_pixel_total : 41199 time to create 1 rle with old method : 0.04770302772521973 time for calcul the mask position with numpy : 0.034824371337890625 nb_pixel_total : 27587 time to create 1 rle with old method : 0.032492876052856445 time for calcul the mask position with numpy : 0.034955501556396484 nb_pixel_total : 10703 time to create 1 rle with old method : 0.012773275375366211 time for calcul the mask position with numpy : 0.03459358215332031 nb_pixel_total : 14166 time to create 1 rle with old method : 0.016490936279296875 time for calcul the mask position with numpy : 0.03449654579162598 nb_pixel_total : 15328 time to create 1 rle with old method : 0.01788473129272461 time for calcul the mask position with numpy : 0.035085201263427734 nb_pixel_total : 43689 time to create 1 rle with old method : 0.07014966011047363 time for calcul the mask position with numpy : 0.03625345230102539 nb_pixel_total : 53563 time to create 1 rle with old method : 0.062451839447021484 time for calcul the mask position with numpy : 0.03459429740905762 nb_pixel_total : 7085 time to create 1 rle with old method : 0.008360624313354492 time for calcul the mask position with numpy : 0.03533458709716797 nb_pixel_total : 50679 time to create 1 rle with old method : 0.05954313278198242 time for calcul the mask position with numpy : 0.035492658615112305 nb_pixel_total : 49651 time to create 1 rle with old method : 0.05832386016845703 time for calcul the mask position with numpy : 0.03512835502624512 nb_pixel_total : 20444 time to create 1 rle with old method : 0.023911237716674805 time for calcul the mask position with numpy : 0.03541278839111328 nb_pixel_total : 26747 time to create 1 rle with old method : 0.03103184700012207 create new chi : 5.212745189666748 time to delete rle : 0.003872394561767578 batch 1 Loaded 61 chid ids of type : 3594 ++++++++++++++++++++++++++++++++++++++++Number RLEs to save : 19401 TO DO : save crop sub photo not yet done ! save time : 1.1094555854797363 nb_obj : 17 nb_hashtags : 2 time to prepare the origin masks : 6.370002508163452 time for calcul the mask position with numpy : 0.5525507926940918 nb_pixel_total : 7407726 time to create 1 rle with new method : 1.4586338996887207 time for calcul the mask position with numpy : 0.02695488929748535 nb_pixel_total : 40450 time to create 1 rle with old method : 0.05231809616088867 time for calcul the mask position with numpy : 0.029407501220703125 nb_pixel_total : 30592 time to create 1 rle with old method : 0.03664278984069824 time for calcul the mask position with numpy : 0.02919769287109375 nb_pixel_total : 28491 time to create 1 rle with old method : 0.03348350524902344 time for calcul the mask position with numpy : 0.028737545013427734 nb_pixel_total : 47842 time to create 1 rle with old method : 0.06157040596008301 time for calcul the mask position with numpy : 0.03157687187194824 nb_pixel_total : 92300 time to create 1 rle with old method : 0.12875151634216309 time for calcul the mask position with numpy : 0.04808950424194336 nb_pixel_total : 26233 time to create 1 rle with old method : 0.033936262130737305 time for calcul the mask position with numpy : 0.054619550704956055 nb_pixel_total : 49857 time to create 1 rle with old method : 0.0674746036529541 time for calcul the mask position with numpy : 0.05363774299621582 nb_pixel_total : 202268 time to create 1 rle with new method : 0.6476516723632812 time for calcul the mask position with numpy : 0.04788351058959961 nb_pixel_total : 43037 time to create 1 rle with old method : 0.04954934120178223 time for calcul the mask position with numpy : 0.0428929328918457 nb_pixel_total : 71840 time to create 1 rle with old method : 0.08567476272583008 time for calcul the mask position with numpy : 0.04083085060119629 nb_pixel_total : 14151 time to create 1 rle with old method : 0.016581296920776367 time for calcul the mask position with numpy : 0.042096614837646484 nb_pixel_total : 90325 time to create 1 rle with old method : 0.10463547706604004 time for calcul the mask position with numpy : 0.04210972785949707 nb_pixel_total : 29813 time to create 1 rle with old method : 0.03466510772705078 time for calcul the mask position with numpy : 0.032819509506225586 nb_pixel_total : 7655 time to create 1 rle with old method : 0.008952617645263672 time for calcul the mask position with numpy : 0.024924516677856445 nb_pixel_total : 40394 time to create 1 rle with old method : 0.04663276672363281 time for calcul the mask position with numpy : 0.026441574096679688 nb_pixel_total : 27836 time to create 1 rle with old method : 0.03266191482543945 time for calcul the mask position with numpy : 0.02562403678894043 nb_pixel_total : 43590 time to create 1 rle with old method : 0.05054640769958496 create new chi : 4.204385042190552 time to delete rle : 0.0016956329345703125 batch 1 Loaded 35 chid ids of type : 3594 +++++++++++++++++++Number RLEs to save : 12502 TO DO : save crop sub photo not yet done ! save time : 0.857452392578125 nb_obj : 15 nb_hashtags : 2 time to prepare the origin masks : 8.480044603347778 time for calcul the mask position with numpy : 0.769155740737915 nb_pixel_total : 7779278 time to create 1 rle with new method : 0.8978896141052246 time for calcul the mask position with numpy : 0.02614903450012207 nb_pixel_total : 19059 time to create 1 rle with old method : 0.022161006927490234 time for calcul the mask position with numpy : 0.029245376586914062 nb_pixel_total : 27326 time to create 1 rle with old method : 0.03278708457946777 time for calcul the mask position with numpy : 0.026250839233398438 nb_pixel_total : 85019 time to create 1 rle with old method : 0.11733412742614746 time for calcul the mask position with numpy : 0.03204989433288574 nb_pixel_total : 29872 time to create 1 rle with old method : 0.03754281997680664 time for calcul the mask position with numpy : 0.02673053741455078 nb_pixel_total : 35141 time to create 1 rle with old method : 0.04034733772277832 time for calcul the mask position with numpy : 0.025629758834838867 nb_pixel_total : 6959 time to create 1 rle with old method : 0.007955312728881836 time for calcul the mask position with numpy : 0.025795936584472656 nb_pixel_total : 37996 time to create 1 rle with old method : 0.04720497131347656 time for calcul the mask position with numpy : 0.02858877182006836 nb_pixel_total : 27194 time to create 1 rle with old method : 0.031450510025024414 time for calcul the mask position with numpy : 0.026638507843017578 nb_pixel_total : 14037 time to create 1 rle with old method : 0.016329288482666016 time for calcul the mask position with numpy : 0.03140425682067871 nb_pixel_total : 18518 time to create 1 rle with old method : 0.025124788284301758 time for calcul the mask position with numpy : 0.02937150001525879 nb_pixel_total : 48379 time to create 1 rle with old method : 0.06324577331542969 time for calcul the mask position with numpy : 0.025205135345458984 nb_pixel_total : 23382 time to create 1 rle with old method : 0.02688741683959961 time for calcul the mask position with numpy : 0.028101444244384766 nb_pixel_total : 28079 time to create 1 rle with old method : 0.0323333740234375 time for calcul the mask position with numpy : 0.03406262397766113 nb_pixel_total : 43042 time to create 1 rle with old method : 0.0515592098236084 time for calcul the mask position with numpy : 0.0514531135559082 nb_pixel_total : 71119 time to create 1 rle with old method : 0.08335471153259277 create new chi : 2.805917739868164 time to delete rle : 0.0014383792877197266 batch 1 Loaded 31 chid ids of type : 3594 ++++++++++++++++++++Number RLEs to save : 9840 TO DO : save crop sub photo not yet done ! save time : 0.5999207496643066 nb_obj : 8 nb_hashtags : 2 time to prepare the origin masks : 4.358392000198364 time for calcul the mask position with numpy : 0.771233081817627 nb_pixel_total : 7720307 time to create 1 rle with new method : 1.036940574645996 time for calcul the mask position with numpy : 0.04351210594177246 nb_pixel_total : 137882 time to create 1 rle with old method : 0.16027307510375977 time for calcul the mask position with numpy : 0.03916430473327637 nb_pixel_total : 32477 time to create 1 rle with old method : 0.04947352409362793 time for calcul the mask position with numpy : 0.027351856231689453 nb_pixel_total : 34991 time to create 1 rle with old method : 0.05665087699890137 time for calcul the mask position with numpy : 0.029170751571655273 nb_pixel_total : 7624 time to create 1 rle with old method : 0.008949995040893555 time for calcul the mask position with numpy : 0.029158830642700195 nb_pixel_total : 126738 time to create 1 rle with old method : 0.1473534107208252 time for calcul the mask position with numpy : 0.027041196823120117 nb_pixel_total : 141109 time to create 1 rle with old method : 0.16219735145568848 time for calcul the mask position with numpy : 0.026426315307617188 nb_pixel_total : 23693 time to create 1 rle with old method : 0.027285099029541016 time for calcul the mask position with numpy : 0.02604365348815918 nb_pixel_total : 69579 time to create 1 rle with old method : 0.0804286003112793 create new chi : 2.7984225749969482 time to delete rle : 0.0011055469512939453 batch 1 Loaded 17 chid ids of type : 3594 ++++++++++++Number RLEs to save : 7090 TO DO : save crop sub photo not yet done ! save time : 0.4631316661834717 map_output_result : {1374134819: (0.0, 'Should be the crop_list due to order', 0), 1374134787: (0.0, 'Should be the crop_list due to order', 0), 1374134577: (0.0, 'Should be the crop_list due to order', 0), 1374134544: (0.0, 'Should be the crop_list due to order', 0), 1374134513: (0.0, 'Should be the crop_list due to order', 0)} End step rle-unique-nms Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : rle_unique_nms_with_priority we use saveGeneral [1374134819, 1374134787, 1374134577, 1374134544, 1374134513] Looping around the photos to save general results len do output : 5 /1374134819.Didn't retrieve data . /1374134787.Didn't retrieve data . /1374134577.Didn't retrieve data . /1374134544.Didn't retrieve data . /1374134513.Didn't retrieve data . before output type Used above Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134819', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134787', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134577', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134544', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134513', None, None, None, None, None, '3371006') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 15 time used for this insertion : 0.015615463256835938 save_final save missing photos in datou_result : time spend for datou_step_exec : 55.22276258468628 time spend to save output : 0.01612067222595215 total time spend for step 3 : 55.23888325691223 step4:ventilate_hashtags_in_portfolio Mon Jul 28 12:43:23 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure beginning of datou step ventilate_hashtags_in_portfolio : To implement ! Iterating over portfolio : 25401991 get user id for portfolio 25401991 SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25401991 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('flou','metal','background','autre','pehd','pet_clair','carton','pet_fonce','environnement','papier','mal_croppe')) AND mptpi.`min_score`=0.5 To do To do SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25401991 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('flou','metal','background','autre','pehd','pet_clair','carton','pet_fonce','environnement','papier','mal_croppe')) AND mptpi.`min_score`=0.5 To do Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") To do ! Use context local managing function ! SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25401991 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('flou','metal','background','autre','pehd','pet_clair','carton','pet_fonce','environnement','papier','mal_croppe')) AND mptpi.`min_score`=0.5 To do lien utilise dans velours : https://www.fotonower.com/velours/25402372,25402374,25402375,25402376,25402377,25402378,25402379,25402380,25402381,25402382,25402383?tags=flou,metal,background,autre,pehd,pet_clair,carton,pet_fonce,environnement,papier,mal_croppe Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : ventilate_hashtags_in_portfolio we use saveGeneral [1374134819, 1374134787, 1374134577, 1374134544, 1374134513] Looping around the photos to save general results len do output : 1 /25401991. before output type Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134819', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134787', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134577', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134544', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134513', None, None, None, None, None, '3371006') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 6 time used for this insertion : 0.019902706146240234 save_final save missing photos in datou_result : time spend for datou_step_exec : 1.7216618061065674 time spend to save output : 0.02016735076904297 total time spend for step 4 : 1.7418291568756104 step5:final Mon Jul 28 12:43:25 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! complete output_args for input 2 VR 22-3-18 : For now we do not clean correctly the datou structure Beginning of datou step final ! Catched exception ! Connect or reconnect ! Inside saveOutput : final : False verbose : 0 original output for save of step final : {1374134819: ('0.10750159143518523',), 1374134787: ('0.10750159143518523',), 1374134577: ('0.10750159143518523',), 1374134544: ('0.10750159143518523',), 1374134513: ('0.10750159143518523',)} new output for save of step final : {1374134819: ('0.10750159143518523',), 1374134787: ('0.10750159143518523',), 1374134577: ('0.10750159143518523',), 1374134544: ('0.10750159143518523',), 1374134513: ('0.10750159143518523',)} [1374134819, 1374134787, 1374134577, 1374134544, 1374134513] Looping around the photos to save general results len do output : 5 /1374134819.Didn't retrieve data . /1374134787.Didn't retrieve data . /1374134577.Didn't retrieve data . /1374134544.Didn't retrieve data . /1374134513.Didn't retrieve data . before output type Used above Used above Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134819', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134787', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134577', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134544', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134513', None, None, None, None, None, '3371006') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 15 time used for this insertion : 0.014866828918457031 save_final save missing photos in datou_result : time spend for datou_step_exec : 0.12972736358642578 time spend to save output : 0.015180587768554688 total time spend for step 5 : 0.14490795135498047 step6:blur_detection Mon Jul 28 12:43:25 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure inside step blur_detection methode: ratio et variance treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75.jpg resize: (2160, 3840) 1374134819 -7.338993980093415 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9.jpg resize: (2160, 3840) 1374134787 -7.4108041468807 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf.jpg resize: (2160, 3840) 1374134577 -7.239141803082168 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449.jpg resize: (2160, 3840) 1374134544 -6.175396670087478 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1.jpg resize: (2160, 3840) 1374134513 -5.456090117598144 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363889_0.png resize: (218, 170) 1374143266 -3.256876056093367 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363906_0.png resize: (158, 111) 1374143267 -4.912478507920146 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363896_0.png resize: (196, 312) 1374143269 -2.373186450662734 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363890_0.png resize: (332, 218) 1374143270 -2.8710599999576054 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363892_0.png resize: (478, 319) 1374143271 -4.19882674526835 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363902_0.png resize: (172, 126) 1374143273 -3.177450696998362 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363895_0.png resize: (269, 273) 1374143274 -0.16425835060971766 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363897_0.png resize: (530, 414) 1374143275 -4.381363460482658 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363894_0.png resize: (208, 122) 1374143277 -4.331966633720386 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363899_0.png resize: (297, 306) 1374143278 -2.7988341164835915 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363904_0.png resize: (169, 109) 1374143279 -3.068600943873597 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363900_0.png resize: (341, 460) 1374143280 -4.326164961705018 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363898_0.png resize: (293, 193) 1374143281 -3.809879530750862 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363901_0.png resize: (391, 269) 1374143283 -2.5089262349089774 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363905_0.png resize: (469, 420) 1374143284 -5.472128176123763 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363903_0.png resize: (223, 293) 1374143285 -4.242069600532605 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363916_0.png resize: (608, 418) 1374143286 -1.9841984644092168 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363907_0.png resize: (224, 398) 1374143287 0.47258297603288424 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363921_0.png resize: (426, 777) 1374143288 -4.559472877843252 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363919_0.png resize: (353, 494) 1374143289 -1.6500234312541273 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363915_0.png resize: (265, 296) 1374143290 -4.317171833007732 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363924_0.png resize: (158, 178) 1374143291 -2.9560548672536164 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363918_0.png resize: (437, 361) 1374143292 -4.728114842081724 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363922_0.png resize: (108, 121) 1374143293 -4.539306263313052 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363914_0.png resize: (239, 265) 1374143294 -2.4800389563505907 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363926_0.png resize: (347, 297) 1374143295 -4.7443011859955675 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363920_0.png resize: (329, 269) 1374143296 -4.05927563904335 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363932_0.png resize: (341, 238) 1374143297 -4.657564796046992 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363928_0.png resize: (266, 152) 1374143298 -3.563673954502918 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363911_0.png resize: (100, 349) 1374143299 -3.6804311144467885 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363934_0.png resize: (371, 311) 1374143300 -4.771145633718454 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363910_0.png resize: (348, 322) 1374143301 0.5208262661313418 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363912_0.png resize: (177, 167) 1374143302 -3.720022183814051 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363929_0.png resize: (246, 309) 1374143303 -4.023104146014708 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363933_0.png resize: (112, 193) 1374143304 -4.349783907258978 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363925_0.png resize: (142, 157) 1374143306 -2.9734971755797575 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363935_0.png resize: (200, 327) 1374143307 -3.868336088289972 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363908_0.png resize: (142, 247) 1374143308 -3.2201869815752886 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363909_0.png resize: (184, 166) 1374143309 -3.6575057970906504 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363930_0.png resize: (151, 252) 1374143310 -2.4542064892832816 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363940_0.png resize: (106, 86) 1374143311 -3.129108409043715 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363943_0.png resize: (151, 174) 1374143312 -4.155757106311882 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363951_0.png resize: (226, 200) 1374143313 -2.1237238528352216 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363937_0.png resize: (487, 222) 1374143314 -3.5374052561680527 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363939_0.png resize: (215, 329) 1374143315 -4.874693487107721 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363942_0.png resize: (303, 636) 1374143317 -4.8843385690042425 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363944_0.png resize: (297, 403) 1374143318 -3.4901982803330314 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363945_0.png resize: (215, 262) 1374143319 -3.2584979780921466 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363949_0.png resize: (411, 347) 1374143320 -3.758948545837594 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363947_0.png resize: (223, 270) 1374143321 -0.24641487459600653 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363941_0.png resize: (294, 179) 1374143322 -4.265544401215459 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363946_0.png resize: (412, 797) 1374143323 -4.13484351603067 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363948_0.png resize: (211, 208) 1374143324 -3.3916542358494066 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363938_0.png resize: (244, 269) 1374143325 -2.556310683099965 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363953_0.png resize: (300, 248) 1374143326 -2.9736545201774565 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363950_0.png resize: (267, 294) 1374143328 -3.168747515182257 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363954_0.png resize: (434, 230) 1374143329 -4.693010734036507 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363960_0.png resize: (176, 197) 1374143330 -3.8691323060406213 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363957_0.png resize: (239, 127) 1374143332 -3.4483034721416455 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363959_0.png resize: (178, 149) 1374143334 -3.849153241692339 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363961_0.png resize: (202, 219) 1374143335 -5.368806297219938 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363966_0.png resize: (219, 1008) 1374143337 -3.7970520058643005 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363967_0.png resize: (275, 149) 1374143339 -4.735468444452916 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363955_0.png resize: (238, 262) 1374143340 -4.837540338598214 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363956_0.png resize: (266, 185) 1374143342 -3.8168660582330363 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363964_0.png resize: (282, 319) 1374143343 -2.8827024425123904 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363958_0.png resize: (257, 259) 1374143344 -5.016753135232747 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363962_0.png resize: (238, 287) 1374143346 -4.470419118132681 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363968_0.png resize: (152, 222) 1374143347 -4.6393099726921365 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363973_0.png resize: (66, 140) 1374143348 -4.529945635137468 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363974_0.png resize: (260, 209) 1374143350 -5.138425763378456 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363971_0.png resize: (421, 548) 1374143351 -4.217777357478966 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363969_0.png resize: (303, 335) 1374143352 -3.9149317719604952 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363970_0.png resize: (141, 192) 1374143354 -2.4788839515754924 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363972_0.png resize: (339, 507) 1374143355 -5.201720122745081 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363976_0.png resize: (539, 446) 1374143356 -5.050950201484555 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363893_0.png resize: (190, 142) 1374143369 -2.665293719483423 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363917_0.png resize: (231, 246) 1374143371 -3.4329208434254355 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363923_0.png resize: (178, 159) 1374143373 -3.247194112670636 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363927_0.png resize: (160, 125) 1374143375 -3.6215522763835972 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363913_0.png resize: (390, 133) 1374143377 -3.495589029677544 treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363952_0.png resize: (201, 322) 1374143379 -3.0051178822680917 treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363975_0.png resize: (156, 296) 1374143381 -4.36251700053887 treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363891_0.png resize: (142, 147) 1374143411 -4.01136719945533 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363931_0.png resize: (289, 324) 1374143439 -3.9398455725242574 treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363936_0.png resize: (286, 494) 1374143440 -4.540528766075263 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363965_0.png resize: (176, 251) 1374143441 -3.58648839138242 treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363963_0.png resize: (78, 147) 1374143442 -3.5375875164728656 Inside saveOutput : final : False verbose : 0 begin to insert list_values into class_photo_scores : length of list_valuse in save_photo_hashtag_id_thcl_score : 93 time used for this insertion : 0.014342546463012695 begin to insert list_values into photo_hahstag_ids : length of list_valuse in save_photo_hashtag_id_type : 93 time used for this insertion : 0.017891883850097656 save missing photos in datou_result : time spend for datou_step_exec : 24.998995065689087 time spend to save output : 0.039989471435546875 total time spend for step 6 : 25.038984537124634 step7:brightness Mon Jul 28 12:43:50 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure inside step calcul brightness treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75.jpg treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9.jpg treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf.jpg treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449.jpg treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1.jpg treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363889_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363906_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363896_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363890_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363892_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363902_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363895_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363897_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363894_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363899_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363904_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363900_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363898_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363901_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363905_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363903_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363916_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363907_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363921_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363919_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363915_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363924_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363918_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363922_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363914_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363926_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363920_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363932_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363928_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363911_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363934_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363910_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363912_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363929_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363933_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363925_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363935_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363908_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363909_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363930_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363940_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363943_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363951_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363937_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363939_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363942_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363944_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363945_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363949_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363947_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363941_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363946_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363948_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363938_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363953_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363950_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363954_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363960_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363957_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363959_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363961_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363966_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363967_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363955_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363956_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363964_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363958_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363962_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363968_0.png treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363973_0.png treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363974_0.png treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363971_0.png treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363969_0.png treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363970_0.png treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363972_0.png treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363976_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363893_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363917_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363923_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363927_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363913_0.png treat image : temp/1753699231_3459257_1374134577_61d6240eb856494ad81480a07a58c9cf_rle_crop_3896363952_0.png treat image : temp/1753699231_3459257_1374134513_5d3eb80653a0ec9e8f5a41bddd1ff6f1_rle_crop_3896363975_0.png treat image : temp/1753699231_3459257_1374134819_f39389837960a1c5908e3cd7efd7ae75_rle_crop_3896363891_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363931_0.png treat image : temp/1753699231_3459257_1374134787_92e7d507cad817cfd525ed8c76b192b9_rle_crop_3896363936_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363965_0.png treat image : temp/1753699231_3459257_1374134544_efafb99389454212b544beb67b53b449_rle_crop_3896363963_0.png Inside saveOutput : final : False verbose : 0 begin to insert list_values into class_photo_scores : length of list_valuse in save_photo_hashtag_id_thcl_score : 93 time used for this insertion : 0.018062591552734375 begin to insert list_values into photo_hahstag_ids : length of list_valuse in save_photo_hashtag_id_type : 93 time used for this insertion : 0.015363454818725586 save missing photos in datou_result : time spend for datou_step_exec : 6.134100437164307 time spend to save output : 0.03841590881347656 total time spend for step 7 : 6.172516345977783 step8:velours_tree Mon Jul 28 12:43:56 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 VR 22-3-18 : For now we do not clean correctly the datou structure can't find the photo_desc_type Inside saveOutput : final : False verbose : 0 ouput is None No outpout to save, returning out of save general time spend for datou_step_exec : 0.4265720844268799 time spend to save output : 6.270408630371094e-05 total time spend for step 8 : 0.4266347885131836 step9:send_mail_cod Mon Jul 28 12:43:56 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 complete output_args for input 1 Inconsistent number of input and output, step which parrallelize and manage error in input by avoiding sending an output for this data can't be used in tree dependencies of input and output complete output_args for input 2 Inconsistent number of input and output, step which parrallelize and manage error in input by avoiding sending an output for this data can't be used in tree dependencies of input and output complete output_args for input 3 We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure dans la step send mail cod work_area: /home/admin/workarea/git/Velours/python in order to get the selector url, please entre the license of selector results_Auto_P25401991_28-07-2025_12_43_56.pdf 25402372 imagette254023721753699436 25402374 change filename to text .imagette254023741753699436 25402375 imagette254023751753699436 25402376 imagette254023761753699436 25402377 imagette254023771753699437 25402378 change filename to text .change filename to text .change filename to text .change filename to text .imagette254023781753699437 25402379 change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .imagette254023791753699437 25402380 imagette254023801753699437 25402382 change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .imagette254023821753699437 25402383 imagette254023831753699439 SELECT h.hashtag,pcr.value FROM MTRUser.portfolio_carac_ratio pcr, MTRBack.hashtags h where pcr.portfolio_id=25401991 and hashtag_type = 3594 and pcr.hashtag_id = h.hashtag_id; velour_link : https://www.fotonower.com/velours/25402372,25402374,25402375,25402376,25402377,25402378,25402379,25402380,25402381,25402382,25402383?tags=flou,metal,background,autre,pehd,pet_clair,carton,pet_fonce,environnement,papier,mal_croppe args[1374134819] : ((1374134819, -7.338993980093415, 492609224), (1374134819, 0.6628507281502652, 2107752395), '0.10750159143518523') We are sending mail with results at report@fotonower.com args[1374134787] : ((1374134787, -7.4108041468807, 492609224), (1374134787, 0.864728073588755, 2107752395), '0.10750159143518523') We are sending mail with results at report@fotonower.com args[1374134577] : ((1374134577, -7.239141803082168, 492609224), (1374134577, 0.5565258906892251, 2107752395), '0.10750159143518523') We are sending mail with results at report@fotonower.com args[1374134544] : ((1374134544, -6.175396670087478, 492609224), (1374134544, -0.08342833500257547, 496442774), '0.10750159143518523') We are sending mail with results at report@fotonower.com args[1374134513] : ((1374134513, -5.456090117598144, 492609224), (1374134513, -0.6740950248874744, 501862349), '0.10750159143518523') We are sending mail with results at report@fotonower.com refus_total : 0.10750159143518523 2022-04-13 10:29:59 0 SELECT ph.photo_id,ph.url,ph.username,ph.uploaded_at,ph.text FROM MTRBack.photos ph, MTRUser.mtr_portfolio_photos mpp WHERE ph.photo_id=mpp.mtr_photo_id AND mpp.mtr_portfolio_id=25401991 AND mpp.hide_status=0 ORDER BY mpp.order LIMIT 0, 1000 start upload file to ovh https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25401991_28-07-2025_12_43_56.pdf results_Auto_P25401991_28-07-2025_12_43_56.pdf uploaded to url https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25401991_28-07-2025_12_43_56.pdf start insert file to database insert into MTRUser.mtr_files (mtd_id,mtr_portfolio_id,text,url,format,tags,file_size,value) values ('3318','25401991','results_Auto_P25401991_28-07-2025_12_43_56.pdf','https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25401991_28-07-2025_12_43_56.pdf','pdf','','0.35','0.10750159143518523') message_in_mail: Bonjour,
Veuillez trouver ci dessous les résultats du service carac on demand pour le portfolio: https://www.fotonower.com/view/25401991

https://www.fotonower.com/image?json=false&list_photos_id=1374134819
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374134787
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374134577
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374134544
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374134513
Bravo, la photo est bien prise.

Dans ces conditions,le taux de refus est: 10.75%
Veuillez trouver les photos des contaminants.

exemples de contaminants: metal: https://www.fotonower.com/view/25402374?limit=200
exemples de contaminants: pet_clair: https://www.fotonower.com/view/25402378?limit=200
exemples de contaminants: carton: https://www.fotonower.com/view/25402379?limit=200
exemples de contaminants: papier: https://www.fotonower.com/view/25402382?limit=200
Veuillez trouver le rapport en pdf:https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25401991_28-07-2025_12_43_56.pdf.

Lien vers velours :https://www.fotonower.com/velours/25402372,25402374,25402375,25402376,25402377,25402378,25402379,25402380,25402381,25402382,25402383?tags=flou,metal,background,autre,pehd,pet_clair,carton,pet_fonce,environnement,papier,mal_croppe.


L'équipe Fotonower 202 b'' Server: nginx Date: Mon, 28 Jul 2025 10:44:02 GMT Content-Length: 0 Connection: close X-Message-Id: 108hWC2jT8KrRXBN8YtrLg Access-Control-Allow-Origin: https://sendgrid.api-docs.io Access-Control-Allow-Methods: POST Access-Control-Allow-Headers: Authorization, Content-Type, On-behalf-of, x-sg-elas-acl Access-Control-Max-Age: 600 X-No-CORS-Reason: https://sendgrid.com/docs/Classroom/Basics/API/cors.html Strict-Transport-Security: max-age=31536000; includeSubDomains Content-Security-Policy: frame-ancestors 'none' Cache-Control: no-cache X-Content-Type-Options: no-sniff Referrer-Policy: strict-origin-when-cross-origin Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : send_mail_cod we use saveGeneral [1374134819, 1374134787, 1374134577, 1374134544, 1374134513] Looping around the photos to save general results len do output : 0 before output type Used above Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134819', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134787', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134577', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134544', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134513', None, None, None, None, None, '3371006') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 5 time used for this insertion : 0.012332677841186523 save_final save missing photos in datou_result : time spend for datou_step_exec : 5.785775184631348 time spend to save output : 0.012515544891357422 total time spend for step 9 : 5.798290729522705 step10:split_time_score Mon Jul 28 12:44:02 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! complete output_args for input 1 VR 22-3-18 : For now we do not clean correctly the datou structure begin split time score Catched exception ! Connect or reconnect ! TODO : Insert select and so on Begin split_port_in_batch_balle thcls : [{'id': 861, 'mtr_user_id': 31, 'name': 'Rungis_class_dechets_1212', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'Rungis_Aluminium,Rungis_Carton,Rungis_Papier,Rungis_Plastique_clair,Rungis_Plastique_dur,Rungis_Plastique_fonce,Rungis_Tapis_vide,Rungis_Tetrapak', 'svm_portfolios_learning': '1160730,571842,571844,571839,571933,571840,571841,572307', 'photo_hashtag_type': 999, 'photo_desc_type': 3963, 'type_classification': 'caffe', 'hashtag_id_list': '2107751280,2107750907,2107750908,2107750909,2107750910,2107750911,2107750912,2107750913'}] thcls : [{'id': 758, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 856, 'photo_desc_type': 3853, 'type_classification': 'caffe', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}] (('12', 5),) ERROR counted https://github.com/fotonower/Velours/issues/663#issuecomment-421136223 {} 28072025 25401991 Nombre de photos uploadées : 5 / 23040 (0%) 28072025 25401991 Nombre de photos taguées (types de déchets): 0 / 5 (0%) 28072025 25401991 Nombre de photos taguées (volume) : 0 / 5 (0%) elapsed_time : load_data_split_time_score 2.6226043701171875e-06 elapsed_time : order_list_meta_photo_and_scores 7.152557373046875e-06 ????? elapsed_time : fill_and_build_computed_from_old_data 0.00033545494079589844 Catched exception ! Connect or reconnect ! Catched exception ! Connect or reconnect ! elapsed_time : insert_dashboard_record_day_entry 0.20653390884399414 We will return after consolidate but for now we need the day, how to get it, for now depending on the previous heavy steps Qualite : 0.0036678978592061414 find url: https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25395602_28-07-2025_10_37_26.pdf select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 25395602 order by id desc limit 1 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 11449 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 11452 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Step 11452 crop_condition have less outputs used (2) than in the step definition (3) : some outputs may be not used ! Step 11453 merge_mask_thcl_custom have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 11453 merge_mask_thcl_custom is not consistent : 4 used against 2 in the step definition ! WARNING : number of inputs for step 11454 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 11454 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! Step 11478 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 11478 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 11456 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 11456 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 11455 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 11455 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! Step 11458 send_mail_cod have less inputs used (3) than in the step definition (5) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! WARNING : type of output 2 of step 11449 doesn't seem to be define in the database( WARNING : type of input 2 of step 11452 doesn't seem to be define in the database( WARNING : output 1 of step 11449 have datatype=2 whereas input 1 of step 11453 have datatype=7 WARNING : type of output 2 of step 11453 doesn't seem to be define in the database( WARNING : type of input 1 of step 11454 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 3 of step 11453 doesn't seem to be define in the database( WARNING : type of input 1 of step 11456 doesn't seem to be define in the database( WARNING : type of output 1 of step 11456 doesn't seem to be define in the database( WARNING : type of input 3 of step 11455 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 11456 have datatype=10 whereas input 3 of step 11458 have datatype=6 WARNING : type of input 5 of step 11458 doesn't seem to be define in the database( WARNING : output 0 of step 11477 have datatype=11 whereas input 5 of step 11458 have datatype=None WARNING : output 0 of step 11456 have datatype=10 whereas input 0 of step 11477 have datatype=18 WARNING : type of input 2 of step 11478 doesn't seem to be define in the database( WARNING : output 1 of step 11454 have datatype=7 whereas input 2 of step 11478 have datatype=None WARNING : type of output 3 of step 11478 doesn't seem to be define in the database( WARNING : type of input 2 of step 11456 doesn't seem to be define in the database( WARNING : output 0 of step 11453 have datatype=1 whereas input 0 of step 11454 have datatype=2 DataTypes for each output/input checked ! TODO Duplicate data, are they consistent 3 ? Duplicate data, are they consistent 4 ? SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25395602 AND mptpi.`type`=3726 To do Qualite : 0.050278109423888696 find url: https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25394246_28-07-2025_09_51_56.pdf select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 25394246 order by id desc limit 1 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 11449 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 11452 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Step 11452 crop_condition have less outputs used (2) than in the step definition (3) : some outputs may be not used ! Step 11453 merge_mask_thcl_custom have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 11453 merge_mask_thcl_custom is not consistent : 4 used against 2 in the step definition ! WARNING : number of inputs for step 11454 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 11454 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! Step 11478 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 11478 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 11456 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 11456 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 11455 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 11455 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! Step 11458 send_mail_cod have less inputs used (3) than in the step definition (5) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! WARNING : type of output 2 of step 11449 doesn't seem to be define in the database( WARNING : type of input 2 of step 11452 doesn't seem to be define in the database( WARNING : output 1 of step 11449 have datatype=2 whereas input 1 of step 11453 have datatype=7 WARNING : type of output 2 of step 11453 doesn't seem to be define in the database( WARNING : type of input 1 of step 11454 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 3 of step 11453 doesn't seem to be define in the database( WARNING : type of input 1 of step 11456 doesn't seem to be define in the database( WARNING : type of output 1 of step 11456 doesn't seem to be define in the database( WARNING : type of input 3 of step 11455 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 11456 have datatype=10 whereas input 3 of step 11458 have datatype=6 WARNING : type of input 5 of step 11458 doesn't seem to be define in the database( WARNING : output 0 of step 11477 have datatype=11 whereas input 5 of step 11458 have datatype=None WARNING : output 0 of step 11456 have datatype=10 whereas input 0 of step 11477 have datatype=18 WARNING : type of input 2 of step 11478 doesn't seem to be define in the database( WARNING : output 1 of step 11454 have datatype=7 whereas input 2 of step 11478 have datatype=None WARNING : type of output 3 of step 11478 doesn't seem to be define in the database( WARNING : type of input 2 of step 11456 doesn't seem to be define in the database( WARNING : output 0 of step 11453 have datatype=1 whereas input 0 of step 11454 have datatype=2 DataTypes for each output/input checked ! TODO Duplicate data, are they consistent 3 ? Duplicate data, are they consistent 4 ? SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25394246 AND mptpi.`type`=3726 To do Qualite : 0.04542007865825641 find url: https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25399179_28-07-2025_11_48_37.pdf select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 25399179 order by id desc limit 1 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 11449 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 11452 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Step 11452 crop_condition have less outputs used (2) than in the step definition (3) : some outputs may be not used ! Step 11453 merge_mask_thcl_custom have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 11453 merge_mask_thcl_custom is not consistent : 4 used against 2 in the step definition ! WARNING : number of inputs for step 11454 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 11454 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! Step 11478 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 11478 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 11456 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 11456 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 11455 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 11455 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! Step 11458 send_mail_cod have less inputs used (3) than in the step definition (5) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! WARNING : type of output 2 of step 11449 doesn't seem to be define in the database( WARNING : type of input 2 of step 11452 doesn't seem to be define in the database( WARNING : output 1 of step 11449 have datatype=2 whereas input 1 of step 11453 have datatype=7 WARNING : type of output 2 of step 11453 doesn't seem to be define in the database( WARNING : type of input 1 of step 11454 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 3 of step 11453 doesn't seem to be define in the database( WARNING : type of input 1 of step 11456 doesn't seem to be define in the database( WARNING : type of output 1 of step 11456 doesn't seem to be define in the database( WARNING : type of input 3 of step 11455 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 11456 have datatype=10 whereas input 3 of step 11458 have datatype=6 WARNING : type of input 5 of step 11458 doesn't seem to be define in the database( WARNING : output 0 of step 11477 have datatype=11 whereas input 5 of step 11458 have datatype=None WARNING : output 0 of step 11456 have datatype=10 whereas input 0 of step 11477 have datatype=18 WARNING : type of input 2 of step 11478 doesn't seem to be define in the database( WARNING : output 1 of step 11454 have datatype=7 whereas input 2 of step 11478 have datatype=None WARNING : type of output 3 of step 11478 doesn't seem to be define in the database( WARNING : type of input 2 of step 11456 doesn't seem to be define in the database( WARNING : output 0 of step 11453 have datatype=1 whereas input 0 of step 11454 have datatype=2 DataTypes for each output/input checked ! TODO Duplicate data, are they consistent 3 ? Duplicate data, are they consistent 4 ? SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25399179 AND mptpi.`type`=3726 To do Qualite : 0.10750159143518523 find url: https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25401991_28-07-2025_12_43_56.pdf select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 25401991 order by id desc limit 1 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! TODO Duplicate data, are they consistent 3 ? Duplicate data, are they consistent 4 ? SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25401991 AND mptpi.`type`=3594 To do NUMBER BATCH : 0 # DISPLAY ALL COLLECTED DATA : {'28072025': {'nb_upload': 5, 'nb_taggue_class': 0, 'nb_taggue_densite': 0}} Inside saveOutput : final : True verbose : 0 saveOutput not yet implemented for datou_step.type : split_time_score we use saveGeneral [1374134819, 1374134787, 1374134577, 1374134544, 1374134513] Looping around the photos to save general results len do output : 1 /25401991Didn't retrieve data . before output type Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134819', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134787', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134577', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134544', None, None, None, None, None, '3371006') ('3318', None, None, None, None, None, None, None, '3371006') ('3318', '25401991', '1374134513', None, None, None, None, None, '3371006') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 6 time used for this insertion : 0.01390528678894043 save_final save missing photos in datou_result : time spend for datou_step_exec : 0.6118814945220947 time spend to save output : 0.014131546020507812 total time spend for step 10 : 0.6260130405426025 caffe_path_current : About to save ! 2 After save, about to update current ! ret : 2 len(input) + len(total_photo_id_missing) : 5 set_done_treatment 94.75user 85.46system 3:37.61elapsed 82%CPU (0avgtext+0avgdata 3391268maxresident)k 920016inputs+75760outputs (21870major+8440081minor)pagefaults 0swaps