python /home/admin/mtr/script_for_cron.py -j datou_current3 -m 20 -a ' -a 3318 ' -s datou_3318 -M 0 -S 0 -U 95,95,120 import MySQLdb succeeded Import error (python version) ['/Users/moilerat/Documents/Fotonower/install/caffe/distribute/python', '/home/admin/workarea/git/Velours/python/prod', '/home/admin/workarea/install/caffe_cuda8_python3/python', '/home/admin/workarea/install/darknet', '/home/admin/workarea/git/Velours/python', '/home/admin/workarea/install/caffe_frcnn_python3/py-faster-rcnn/caffe-fast-rcnn/python', '/home/admin/mtr/.credentials', '/home/admin/workarea/install/caffe/python', '/home/admin/workarea/install/caffe_frcnn/py-faster-rcnn/tools', '/home/admin/workarea/git/fotonowerpip', '/home/admin/workarea/install/segment-anything', '/home/admin/workarea/git/pyfvs', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/admin/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages'] process id : 2098305 load datou : 3318 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! Unexpected type for variable list_input_json ERROR or WARNING : can't parse json string Expecting value: line 1 column 1 (char 0) Tried to parse : chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? [(photo_id, hashtag_id, hashtag_type, x0, x1, y0, y1, score, seg_temp, polygons), ...] was removed should we ? chemin de la photo was removed should we ? [ (photo_id_loc, hashtag_id, hashtag_type, x0, x1, y0, y1, score, None), ...] was removed should we ? chemin de la photo was removed should we ? id de la photo (peut être local ou global) was removed should we ? chemin de la photo was removed should we ? (x0, y0, x1, y1) was removed should we ? chemin de la photo was removed should we ? donnée sous forme de texte was removed should we ? [ (photo_id, photo_id_loc, hashtag_type, x0, x1, y0, y1, score), ...] was removed should we ? None was removed should we ? donnée sous forme de texte was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? id de la photo (peut être local ou global) was removed should we ? donnée sous forme de texte was removed should we ? donnée sous forme de texte was removed should we ? donnée sous forme de texte was removed should we ? chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? None was removed should we ? donnée sous forme de nombre was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? donnée sous forme de texte was removed should we ? None was removed should we ? donnée sous forme de texte was removed should we ? [ptf_id0,ptf_id1...] was removed should we ? FOUND : 1 Here is data_from_sql_as_vec to set the ParamDescriptorType : (5275, 'learn_RUBBIA_REFUS_AMIENS_23', 16384, 25088, 'learn_RUBBIA_REFUS_AMIENS_23', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2021, 4, 23, 14, 19, 39), datetime.datetime(2021, 4, 23, 14, 19, 39)) load thcls load THCL from format json or kwargs add thcl : 2847 in CacheModelConfig load pdts add pdt : 5275 in CacheModelConfig Running datou job : batch_current TODO datou_current to load to do maybe to take outside batchDatouExec updating current state to 1 list_input_json: [] Current got : datou_id : 3318, datou_cur_ids : ['3406282'] with mtr_portfolio_ids : ['25530216'] and first list_photo_ids : [] new path : /proc/2098305/ Inside batchDatouExec : verbose : 0 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! List Step Type Loaded in datou : mask_detect, crop_condition, rle_unique_nms_with_priority, ventilate_hashtags_in_portfolio, final, blur_detection, brightness, velours_tree, send_mail_cod, split_time_score over limit max, limiting to limit_max 40 list_input_json : [] origin We have 1 , BFBFBFBFBFBFBFBFBFwe have missing 0 photos in the step downloads : photo missing : [] try to delete the photos missing in DB length of list_filenames : 9 ; length of list_pids : 9 ; length of list_args : 9 time to download the photos : 1.0700228214263916 About to test input to load we should then remove the video here, and this would fix the bug of datou_current ! Calling datou_exec Inside datou_exec : verbose : 0 number of steps : 10 step1:mask_detect Thu Jul 31 08:20:30 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Beginning of datou step mask_detect ! save_polygon : True begin detect begin to check gpu status inside check gpu memory l 3637 free memory gpu now : 3470 max_wait_temp : 1 max_wait : 0 gpu_flag : 0 2025-07-31 08:20:33.474442: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2025-07-31 08:20:33.499563: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3493035000 Hz 2025-07-31 08:20:33.501516: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f6ad0000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2025-07-31 08:20:33.501564: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2025-07-31 08:20:33.505279: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2025-07-31 08:20:33.655375: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x39966b20 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2025-07-31 08:20:33.655423: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5 2025-07-31 08:20:33.656461: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-07-31 08:20:33.656870: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-07-31 08:20:33.659923: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-07-31 08:20:33.662548: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-07-31 08:20:33.663030: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-07-31 08:20:33.666269: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-07-31 08:20:33.668002: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-07-31 08:20:33.672380: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-07-31 08:20:33.673348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-07-31 08:20:33.673416: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-07-31 08:20:33.673942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2025-07-31 08:20:33.673956: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 2025-07-31 08:20:33.673964: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N 2025-07-31 08:20:33.674888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3019 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5) WARNING:tensorflow:From /home/admin/workarea/git/Velours/python/mtr/mask_rcnn/mask_detection.py:69: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead. 2025-07-31 08:20:33.914806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-07-31 08:20:33.914900: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-07-31 08:20:33.914929: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-07-31 08:20:33.914955: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-07-31 08:20:33.914979: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-07-31 08:20:33.915003: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-07-31 08:20:33.915027: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-07-31 08:20:33.915093: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-07-31 08:20:33.916379: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-07-31 08:20:33.917248: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-07-31 08:20:33.917275: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-07-31 08:20:33.917289: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-07-31 08:20:33.917302: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-07-31 08:20:33.917314: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-07-31 08:20:33.917327: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-07-31 08:20:33.917339: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-07-31 08:20:33.917352: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-07-31 08:20:33.918126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-07-31 08:20:33.918151: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2025-07-31 08:20:33.918159: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 2025-07-31 08:20:33.918165: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N 2025-07-31 08:20:33.918987: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3019 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5) Using TensorFlow backend. WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:396: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version. Instructions for updating: box_ind is deprecated, use box_indices instead WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:703: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:729: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. Inside mask_sub_process Inside mask_detect About to load cache.load_thcl_param To do loadFromThcl(), then load ParamDescType : thcl2847 thcls : [{'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}] thcl {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'} Update svm_hashtag_type_desc : 5275 FOUND : 1 Here is data_from_sql_as_vec to set the ParamDescriptorType : (5275, 'learn_RUBBIA_REFUS_AMIENS_23', 16384, 25088, 'learn_RUBBIA_REFUS_AMIENS_23', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2021, 4, 23, 14, 19, 39), datetime.datetime(2021, 4, 23, 14, 19, 39)) {'thcl': {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}, 'list_hashtags': ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'], 'list_hashtags_csv': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'svm_hashtag_type_desc': 5275, 'photo_desc_type': 5275, 'pb_hashtag_id_or_classifier': 0} list_class_names : ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'] Configurations: BACKBONE resnet101 BACKBONE_SHAPES [[160 160] [ 80 80] [ 40 40] [ 20 20] [ 10 10]] BACKBONE_STRIDES [4, 8, 16, 32, 64] BATCH_SIZE 1 BBOX_STD_DEV [0.1 0.1 0.2 0.2] DETECTION_MAX_INSTANCES 100 DETECTION_MIN_CONFIDENCE 0.3 DETECTION_NMS_THRESHOLD 0.3 GPU_COUNT 1 IMAGES_PER_GPU 1 IMAGE_MAX_DIM 640 IMAGE_MIN_DIM 640 IMAGE_PADDING True IMAGE_SHAPE [640 640 3] LEARNING_MOMENTUM 0.9 LEARNING_RATE 0.001 LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0} MASK_POOL_SIZE 14 MASK_SHAPE [28, 28] MAX_GT_INSTANCES 100 MEAN_PIXEL [123.7 116.8 103.9] MINI_MASK_SHAPE (56, 56) NAME learn_RUBBIA_REFUS_AMIENS_23 NUM_CLASSES 9 POOL_SIZE 7 POST_NMS_ROIS_INFERENCE 1000 POST_NMS_ROIS_TRAINING 2000 ROI_POSITIVE_RATIO 0.33 RPN_ANCHOR_RATIOS [0.5, 1, 2] RPN_ANCHOR_SCALES (16, 32, 64, 128, 256) RPN_ANCHOR_STRIDE 1 RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2] RPN_NMS_THRESHOLD 0.7 RPN_TRAIN_ANCHORS_PER_IMAGE 256 STEPS_PER_EPOCH 1000 TRAIN_ROIS_PER_IMAGE 200 USE_MINI_MASK True USE_RPN_ROIS True VALIDATION_STEPS 50 WEIGHT_DECAY 0.0001 model_param file didn't exist model_name : learn_RUBBIA_REFUS_AMIENS_23 model_type : mask_rcnn list file need : ['mask_model.h5'] file exist in s3 : ['mask_model.h5'] file manque in s3 : [] 2025-07-31 08:20:41.534777: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-07-31 08:20:41.694258: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-07-31 08:20:42.980410: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.700420: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.67GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.700469: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.67GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.707393: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.29GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.707423: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.29GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.812936: W tensorflow/core/common_runtime/bfc_allocator.cc:311] Garbage collection: deallocate free memory regions (i.e., allocations) so that we can re-allocate a larger region to avoid OOM due to memory fragmentation. If you see this message frequently, you are running near the threshold of the available device memory and re-allocation may incur great performance overhead. You may try smaller batch sizes to observe the performance impact. Set TF_ENABLE_GPU_GARBAGE_COLLECTION=false if you'd like to disable this feature. 2025-07-31 08:20:43.868281: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.868351: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.26GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.869942: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.869977: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.26GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.878196: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.878219: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 417.62MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.878791: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.878806: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 417.62MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.896370: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.896444: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 188.27MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.897432: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.897455: W tensorflow/core/common_runtime/bfc_allocator.cc:245] Allocator (GPU_0_bfc) ran out of memory trying to allocate 188.27MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2025-07-31 08:20:43.898468: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.899495: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.904182: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.905182: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.906178: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.907183: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.908857: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.908878: W tensorflow/core/kernels/gpu_utils.cc:49] Failed to allocate memory for convolution redzone checking; skipping this check. This is benign and only means that we won't check cudnn for out-of-bounds reads and writes. This message will only be printed once. 2025-07-31 08:20:43.918469: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.919043: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.928085: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.928674: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.929243: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.929840: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.930450: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2025-07-31 08:20:43.931051: I tensorflow/stream_executor/cuda/cuda_driver.cc:763] failed to allocate 2.45G (2629959680 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory local folder : /data/models_weight/learn_RUBBIA_REFUS_AMIENS_23 /data/models_weight/learn_RUBBIA_REFUS_AMIENS_23/mask_model.h5 size_local : 256009536 size in s3 : 256009536 create time local : 2021-08-09 09:43:22 create time in s3 : 2021-08-06 18:54:04 mask_model.h5 already exist and didn't need to update list_images length : 9 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 20.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 12 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 25.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 7 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 15.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 8 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 11.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 7 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 63.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 2 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 44.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 1 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 1 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 31.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 6 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 28.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 nb d'objets trouves : 14 Detection mask done ! Trying to reset tf kernel 2099053 begin to check gpu status inside check gpu memory l 3610 free memory gpu now : 1478 tf kernel not reseted sub process len(results) : 9 len(list_Values) 0 None max_time_sub_proc : 3600 parent process len(results) : 9 len(list_Values) 0 process is alive finish correctly or not : True after detect begin to check gpu status inside check gpu memory l 3610 free memory gpu now : 2671 list_Values should be empty [] To do loadFromThcl(), then load ParamDescType : thcl2847 Catched exception ! Connect or reconnect ! thcls : [{'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}] thcl {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'} Update svm_hashtag_type_desc : 5275 ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'] time for calcul the mask position with numpy : 0.00035381317138671875 nb_pixel_total : 14765 time to create 1 rle with old method : 0.016103267669677734 length of segment : 138 time for calcul the mask position with numpy : 7.2479248046875e-05 nb_pixel_total : 2156 time to create 1 rle with old method : 0.002477884292602539 length of segment : 44 time for calcul the mask position with numpy : 0.00026702880859375 nb_pixel_total : 11712 time to create 1 rle with old method : 0.01282048225402832 length of segment : 191 time for calcul the mask position with numpy : 0.0007672309875488281 nb_pixel_total : 41507 time to create 1 rle with old method : 0.045047760009765625 length of segment : 298 time for calcul the mask position with numpy : 0.0005536079406738281 nb_pixel_total : 33572 time to create 1 rle with old method : 0.036690473556518555 length of segment : 200 time for calcul the mask position with numpy : 0.00031828880310058594 nb_pixel_total : 24165 time to create 1 rle with old method : 0.025434255599975586 length of segment : 167 time for calcul the mask position with numpy : 7.081031799316406e-05 nb_pixel_total : 1658 time to create 1 rle with old method : 0.0018987655639648438 length of segment : 64 time for calcul the mask position with numpy : 0.00013399124145507812 nb_pixel_total : 5926 time to create 1 rle with old method : 0.0067408084869384766 length of segment : 93 time for calcul the mask position with numpy : 1.1393287181854248 nb_pixel_total : 1125812 time to create 1 rle with new method : 0.06721067428588867 length of segment : 1429 time for calcul the mask position with numpy : 0.00013399124145507812 nb_pixel_total : 2786 time to create 1 rle with old method : 0.003085613250732422 length of segment : 48 time for calcul the mask position with numpy : 0.0001266002655029297 nb_pixel_total : 3416 time to create 1 rle with old method : 0.003956317901611328 length of segment : 45 time for calcul the mask position with numpy : 0.0001544952392578125 nb_pixel_total : 3361 time to create 1 rle with old method : 0.0036537647247314453 length of segment : 63 time for calcul the mask position with numpy : 0.011349678039550781 nb_pixel_total : 1015204 time to create 1 rle with new method : 0.733924388885498 length of segment : 1134 time for calcul the mask position with numpy : 0.0014829635620117188 nb_pixel_total : 78420 time to create 1 rle with old method : 0.08488702774047852 length of segment : 476 time for calcul the mask position with numpy : 0.0005276203155517578 nb_pixel_total : 13492 time to create 1 rle with old method : 0.015628337860107422 length of segment : 149 time for calcul the mask position with numpy : 0.0005247592926025391 nb_pixel_total : 21451 time to create 1 rle with old method : 0.024689435958862305 length of segment : 162 time for calcul the mask position with numpy : 0.0005881786346435547 nb_pixel_total : 20806 time to create 1 rle with old method : 0.023385286331176758 length of segment : 181 time for calcul the mask position with numpy : 0.00019979476928710938 nb_pixel_total : 10803 time to create 1 rle with old method : 0.012951850891113281 length of segment : 97 time spent for convertir_results : 3.4953134059906006 Inside saveOutput : final : False verbose : 0 eke 12-6-18 : saveMask need to be cleaned for new output ! Number saved : None batch 1 Loaded 18 chid ids of type : 3594 Number RLEs to save : 4979 save missing photos in datou_result : time spend for datou_step_exec : 23.66728186607361 time spend to save output : 0.34447288513183594 total time spend for step 1 : 24.011754751205444 step2:crop_condition Thu Jul 31 08:20:54 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure Loading chi in step crop with photo_hashtag_type : 3594 Loading chi in step crop for list_pids : 9 ! batch 1 Loaded 18 chid ids of type : 3594 +++++++++++++++++++++++ begin to crop the class : papier param for this class : {'min_score': 0.7} filtre for class : papier hashtag_id of this class : 492668766 we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! map_result returned by crop_photo_return_map_crop : length : 8 About to insert : list_path_to_insert length 8 new photo from crops ! About to upload 8 photos upload in portfolio : 3736932 init cache_photo without model_param we have 8 photo to upload uploaded to storage server : ovh folder_temporaire : temp/1753942856_2098305 batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! we have uploaded 8 photos in the portfolio 3736932 time of upload the photos Elapsed time : 2.2607831954956055 we have finished the crop for the class : papier begin to crop the class : carton param for this class : {'min_score': 0.7} filtre for class : carton hashtag_id of this class : 492774966 we have both polygon and rles Next one ! we have both polygon and rles Next one ! map_result returned by crop_photo_return_map_crop : length : 2 About to insert : list_path_to_insert length 2 new photo from crops ! About to upload 2 photos upload in portfolio : 3736932 init cache_photo without model_param we have 2 photo to upload uploaded to storage server : ovh folder_temporaire : temp/1753942858_2098305 batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! we have uploaded 2 photos in the portfolio 3736932 time of upload the photos Elapsed time : 0.9296512603759766 we have finished the crop for the class : carton begin to crop the class : metal param for this class : {'min_score': 0.7} filtre for class : metal hashtag_id of this class : 492628673 begin to crop the class : pet_clair param for this class : {'min_score': 0.7} filtre for class : pet_clair hashtag_id of this class : 2107755846 we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! we have both polygon and rles Next one ! map_result returned by crop_photo_return_map_crop : length : 7 About to insert : list_path_to_insert length 7 new photo from crops ! About to upload 7 photos upload in portfolio : 3736932 init cache_photo without model_param we have 7 photo to upload uploaded to storage server : ovh folder_temporaire : temp/1753942868_2098305 batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! we have uploaded 7 photos in the portfolio 3736932 time of upload the photos Elapsed time : 2.021711826324463 we have finished the crop for the class : pet_clair begin to crop the class : autre param for this class : {'min_score': 0.7} filtre for class : autre hashtag_id of this class : 494826614 begin to crop the class : pehd param for this class : {'min_score': 0.7} filtre for class : pehd hashtag_id of this class : 628944319 begin to crop the class : pet_fonce param for this class : {'min_score': 0.7} filtre for class : pet_fonce hashtag_id of this class : 2107755900 we have both polygon and rles Next one ! map_result returned by crop_photo_return_map_crop : length : 1 About to insert : list_path_to_insert length 1 new photo from crops ! About to upload 1 photos upload in portfolio : 3736932 init cache_photo without model_param we have 1 photo to upload uploaded to storage server : ovh folder_temporaire : temp/1753942871_2098305 batch_size : 0, verbose : False, strat_bulk_insert : ignore_different_from_first This is a hack ! we have uploaded 1 photos in the portfolio 3736932 time of upload the photos Elapsed time : 0.7596874237060547 we have finished the crop for the class : pet_fonce delete rles from all chi we have 0 chi objets contains the rles we have 0 chi objets contains the rles we have 0 chi objets contains the rles we have 0 chi objets contains the rles we have 0 chi objets contains the rles we have 0 chi objets contains the rles we have 0 chi objets contains the rles Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : crop_condition we use saveGeneral [1374549731, 1374549730, 1374549729, 1374549727, 1374549699, 1374549696, 1374549693, 1374549691, 1374549689] Looping around the photos to save general results len do output : 18 /1374555234Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555235Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555236Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555237Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555238Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555239Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555240Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555241Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555242Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555243Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555249Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555250Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555251Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555252Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555253Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555254Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555256Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . /1374555258Didn't retrieve data .Didn't retrieve data .Didn't retrieve data . before output type Here is an output not treated by saveGeneral : Here is an output not treated by saveGeneral : Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549731', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549730', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549729', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549727', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549699', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549696', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549693', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549691', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549689', None, None, None, None, None, '3406282') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 63 time used for this insertion : 0.2813224792480469 save_final save missing photos in datou_result : time spend for datou_step_exec : 16.72406268119812 time spend to save output : 0.28220033645629883 total time spend for step 2 : 17.00626301765442 step3:rle_unique_nms_with_priority Thu Jul 31 08:21:11 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array VR 22-3-18 : For now we do not clean correctly the datou structure Begin step rle-unique-nms batch 1 Loaded 18 chid ids of type : 3594 +++++++++++++++++++++++nb_obj : 5 nb_hashtags : 3 time to prepare the origin masks : 0.6523261070251465 time for calcul the mask position with numpy : 0.17615938186645508 nb_pixel_total : 1972044 time to create 1 rle with new method : 0.08385753631591797 time for calcul the mask position with numpy : 0.006393909454345703 nb_pixel_total : 33572 time to create 1 rle with old method : 0.0359797477722168 time for calcul the mask position with numpy : 0.0063457489013671875 nb_pixel_total : 39351 time to create 1 rle with old method : 0.04336714744567871 time for calcul the mask position with numpy : 0.006479740142822266 nb_pixel_total : 11712 time to create 1 rle with old method : 0.012610912322998047 time for calcul the mask position with numpy : 0.007279872894287109 nb_pixel_total : 2156 time to create 1 rle with old method : 0.0024416446685791016 time for calcul the mask position with numpy : 0.006749868392944336 nb_pixel_total : 14765 time to create 1 rle with old method : 0.016478538513183594 create new chi : 0.4144325256347656 time to delete rle : 0.024541139602661133 batch 1 Loaded 11 chid ids of type : 3594 +++++++Number RLEs to save : 2822 TO DO : save crop sub photo not yet done ! save time : 0.2719686031341553 No data in photo_id : 1374549730 nb_obj : 4 nb_hashtags : 3 time to prepare the origin masks : 0.41159486770629883 time for calcul the mask position with numpy : 0.03200173377990723 nb_pixel_total : 939513 time to create 1 rle with new method : 0.08312034606933594 time for calcul the mask position with numpy : 0.12791872024536133 nb_pixel_total : 1102338 time to create 1 rle with new method : 0.07872939109802246 time for calcul the mask position with numpy : 0.00653076171875 nb_pixel_total : 5926 time to create 1 rle with old method : 0.006701946258544922 time for calcul the mask position with numpy : 0.006431102752685547 nb_pixel_total : 1658 time to create 1 rle with old method : 0.0018329620361328125 time for calcul the mask position with numpy : 0.006206512451171875 nb_pixel_total : 24165 time to create 1 rle with old method : 0.025890827178955078 create new chi : 0.39098024368286133 time to delete rle : 0.000453948974609375 batch 1 Loaded 9 chid ids of type : 3594 ++++++Number RLEs to save : 4416 TO DO : save crop sub photo not yet done ! save time : 1.3138725757598877 nb_obj : 3 nb_hashtags : 2 time to prepare the origin masks : 0.05071234703063965 time for calcul the mask position with numpy : 0.29085755348205566 nb_pixel_total : 2064037 time to create 1 rle with new method : 0.1680307388305664 time for calcul the mask position with numpy : 0.006386995315551758 nb_pixel_total : 3361 time to create 1 rle with old method : 0.003771543502807617 time for calcul the mask position with numpy : 0.00642848014831543 nb_pixel_total : 3416 time to create 1 rle with old method : 0.003958225250244141 time for calcul the mask position with numpy : 0.006262302398681641 nb_pixel_total : 2786 time to create 1 rle with old method : 0.00316619873046875 create new chi : 0.4925050735473633 time to delete rle : 0.000240325927734375 batch 1 Loaded 7 chid ids of type : 3594 +++Number RLEs to save : 1392 TO DO : save crop sub photo not yet done ! save time : 0.23067188262939453 No data in photo_id : 1374549699 nb_obj : 1 nb_hashtags : 1 time to prepare the origin masks : 0.043402671813964844 time for calcul the mask position with numpy : 0.01273036003112793 nb_pixel_total : 1058396 time to create 1 rle with new method : 0.0771026611328125 time for calcul the mask position with numpy : 0.012403488159179688 nb_pixel_total : 1015204 time to create 1 rle with new method : 0.21912097930908203 create new chi : 0.3363516330718994 time to delete rle : 0.00032639503479003906 batch 1 Loaded 3 chid ids of type : 3594 +Number RLEs to save : 3348 TO DO : save crop sub photo not yet done ! save time : 1.7067794799804688 nb_obj : 1 nb_hashtags : 1 time to prepare the origin masks : 0.035605669021606445 time for calcul the mask position with numpy : 0.043298959732055664 nb_pixel_total : 1995180 time to create 1 rle with new method : 0.24229049682617188 time for calcul the mask position with numpy : 0.01090383529663086 nb_pixel_total : 78420 time to create 1 rle with old method : 0.08329963684082031 create new chi : 0.3897731304168701 time to delete rle : 0.0003330707550048828 batch 1 Loaded 3 chid ids of type : 3594 +Number RLEs to save : 2032 TO DO : save crop sub photo not yet done ! save time : 0.2764267921447754 nb_obj : 2 nb_hashtags : 2 time to prepare the origin masks : 0.133528470993042 time for calcul the mask position with numpy : 0.18442416191101074 nb_pixel_total : 2038657 time to create 1 rle with new method : 0.2341916561126709 time for calcul the mask position with numpy : 0.006208658218383789 nb_pixel_total : 21451 time to create 1 rle with old method : 0.02311110496520996 time for calcul the mask position with numpy : 0.005906581878662109 nb_pixel_total : 13492 time to create 1 rle with old method : 0.015048027038574219 create new chi : 0.47794103622436523 time to delete rle : 0.00030303001403808594 batch 1 Loaded 5 chid ids of type : 3594 +++Number RLEs to save : 1702 TO DO : save crop sub photo not yet done ! save time : 0.5899055004119873 nb_obj : 2 nb_hashtags : 2 time to prepare the origin masks : 0.047702789306640625 time for calcul the mask position with numpy : 0.04279756546020508 nb_pixel_total : 2041991 time to create 1 rle with new method : 0.25405359268188477 time for calcul the mask position with numpy : 0.006104469299316406 nb_pixel_total : 10803 time to create 1 rle with old method : 0.011871099472045898 time for calcul the mask position with numpy : 0.0057985782623291016 nb_pixel_total : 20806 time to create 1 rle with old method : 0.022382259368896484 create new chi : 0.3525700569152832 time to delete rle : 0.0002770423889160156 batch 1 Loaded 5 chid ids of type : 3594 ++Number RLEs to save : 1636 TO DO : save crop sub photo not yet done ! save time : 0.7082521915435791 map_output_result : {1374549731: (0.0, 'Should be the crop_list due to order', 0), 1374549730: (0.0, 'Should be the crop_list due to order', 0.0), 1374549729: (0.0, 'Should be the crop_list due to order', 0), 1374549727: (0.0, 'Should be the crop_list due to order', 0), 1374549699: (0.0, 'Should be the crop_list due to order', 0.0), 1374549696: (0.0, 'Should be the crop_list due to order', 0), 1374549693: (0.0, 'Should be the crop_list due to order', 0), 1374549691: (0.0, 'Should be the crop_list due to order', 0), 1374549689: (0.0, 'Should be the crop_list due to order', 0)} End step rle-unique-nms Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : rle_unique_nms_with_priority we use saveGeneral [1374549731, 1374549730, 1374549729, 1374549727, 1374549699, 1374549696, 1374549693, 1374549691, 1374549689] Looping around the photos to save general results len do output : 9 /1374549731.Didn't retrieve data . /1374549730.Didn't retrieve data . /1374549729.Didn't retrieve data . /1374549727.Didn't retrieve data . /1374549699.Didn't retrieve data . /1374549696.Didn't retrieve data . /1374549693.Didn't retrieve data . /1374549691.Didn't retrieve data . /1374549689.Didn't retrieve data . before output type Used above Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549731', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549730', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549729', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549727', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549699', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549696', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549693', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549691', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549689', None, None, None, None, None, '3406282') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 27 time used for this insertion : 0.017963647842407227 save_final save missing photos in datou_result : time spend for datou_step_exec : 9.998701333999634 time spend to save output : 0.018309354782104492 total time spend for step 3 : 10.017010688781738 step4:ventilate_hashtags_in_portfolio Thu Jul 31 08:21:21 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure beginning of datou step ventilate_hashtags_in_portfolio : To implement ! Iterating over portfolio : 25530216 get user id for portfolio 25530216 SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25530216 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('mal_croppe','flou','background','autre','pet_clair','pehd','papier','carton','metal','pet_fonce','environnement')) AND mptpi.`min_score`=0.5 To do To do SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25530216 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('mal_croppe','flou','background','autre','pet_clair','pehd','papier','carton','metal','pet_fonce','environnement')) AND mptpi.`min_score`=0.5 To do Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") To do ! Use context local managing function ! SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25530216 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('mal_croppe','flou','background','autre','pet_clair','pehd','papier','carton','metal','pet_fonce','environnement')) AND mptpi.`min_score`=0.5 To do lien utilise dans velours : https://www.fotonower.com/velours/25530541,25530542,25530543,25530544,25530545,25530546,25530547,25530548,25530549,25530550,25530551?tags=mal_croppe,flou,background,autre,pet_clair,pehd,papier,carton,metal,pet_fonce,environnement Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : ventilate_hashtags_in_portfolio we use saveGeneral [1374549731, 1374549730, 1374549729, 1374549727, 1374549699, 1374549696, 1374549693, 1374549691, 1374549689] Looping around the photos to save general results len do output : 1 /25530216. before output type Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549731', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549730', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549729', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549727', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549699', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549696', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549693', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549691', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549689', None, None, None, None, None, '3406282') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 10 time used for this insertion : 0.017999887466430664 save_final save missing photos in datou_result : time spend for datou_step_exec : 1.7860515117645264 time spend to save output : 0.018216371536254883 total time spend for step 4 : 1.8042678833007812 step5:final Thu Jul 31 08:21:23 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! complete output_args for input 2 VR 22-3-18 : For now we do not clean correctly the datou structure Beginning of datou step final ! Catched exception ! Connect or reconnect ! Inside saveOutput : final : False verbose : 0 original output for save of step final : {1374549731: ('0.1288892103909465',), 1374549730: ('0.1288892103909465',), 1374549729: ('0.1288892103909465',), 1374549727: ('0.1288892103909465',), 1374549699: ('0.1288892103909465',), 1374549696: ('0.1288892103909465',), 1374549693: ('0.1288892103909465',), 1374549691: ('0.1288892103909465',), 1374549689: ('0.1288892103909465',)} new output for save of step final : {1374549731: ('0.1288892103909465',), 1374549730: ('0.1288892103909465',), 1374549729: ('0.1288892103909465',), 1374549727: ('0.1288892103909465',), 1374549699: ('0.1288892103909465',), 1374549696: ('0.1288892103909465',), 1374549693: ('0.1288892103909465',), 1374549691: ('0.1288892103909465',), 1374549689: ('0.1288892103909465',)} [1374549731, 1374549730, 1374549729, 1374549727, 1374549699, 1374549696, 1374549693, 1374549691, 1374549689] Looping around the photos to save general results len do output : 9 /1374549731.Didn't retrieve data . /1374549730.Didn't retrieve data . /1374549729.Didn't retrieve data . /1374549727.Didn't retrieve data . /1374549699.Didn't retrieve data . /1374549696.Didn't retrieve data . /1374549693.Didn't retrieve data . /1374549691.Didn't retrieve data . /1374549689.Didn't retrieve data . before output type Used above Used above Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549731', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549730', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549729', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549727', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549699', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549696', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549693', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549691', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549689', None, None, None, None, None, '3406282') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 27 time used for this insertion : 0.020473480224609375 save_final save missing photos in datou_result : time spend for datou_step_exec : 0.11773872375488281 time spend to save output : 0.020955801010131836 total time spend for step 5 : 0.13869452476501465 step6:blur_detection Thu Jul 31 08:21:23 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure inside step blur_detection methode: ratio et variance treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536.jpg resize: (1080, 1920) 1374549731 -0.7534590637379442 treat image : temp/1753942829_2098305_1374549730_f1b16ca1f6acdbd71b72596ac73189cd.jpg resize: (1080, 1920) 1374549730 1.792805505149938 treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667.jpg resize: (1080, 1920) 1374549729 -0.8721791710020166 treat image : temp/1753942829_2098305_1374549727_f34c71ba3ac53f1025056c8a1067e899.jpg resize: (1080, 1920) 1374549727 -1.022714744719436 treat image : temp/1753942829_2098305_1374549699_95a4c8494198bd03a65fd5c19c55c189.jpg resize: (1080, 1920) 1374549699 3.789729503338969 treat image : temp/1753942829_2098305_1374549696_41e15348a0e65b2120e01673a4d6c9f9.jpg resize: (1080, 1920) 1374549696 -1.7493171805176493 treat image : temp/1753942829_2098305_1374549693_382a162f2e990d180498bfcdfea66d62.jpg resize: (1080, 1920) 1374549693 0.7114478031869889 treat image : temp/1753942829_2098305_1374549691_fa38ca4ce5a68fc6a819f6c8757189ce.jpg resize: (1080, 1920) 1374549691 -0.8503868275701529 treat image : temp/1753942829_2098305_1374549689_626e3986db62ae7809f34aeec68c3f43.jpg resize: (1080, 1920) 1374549689 -2.7731210455708957 treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999306_0.png resize: (190, 104) 1374555234 -0.38207590182144724 treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999307_0.png resize: (214, 317) 1374555235 -2.246799463909747 treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999305_0.png resize: (44, 64) 1374555236 -0.7147620146954898 treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667_rle_crop_3898999310_0.png resize: (64, 39) 1374555237 0.787596672669492 treat image : temp/1753942829_2098305_1374549727_f34c71ba3ac53f1025056c8a1067e899_rle_crop_3898999314_0.png resize: (45, 103) 1374555238 -1.3032770707451689 treat image : temp/1753942829_2098305_1374549727_f34c71ba3ac53f1025056c8a1067e899_rle_crop_3898999313_0.png resize: (48, 75) 1374555239 0.543792902530888 treat image : temp/1753942829_2098305_1374549691_fa38ca4ce5a68fc6a819f6c8757189ce_rle_crop_3898999319_0.png resize: (162, 156) 1374555240 0.4658514694080852 treat image : temp/1753942829_2098305_1374549689_626e3986db62ae7809f34aeec68c3f43_rle_crop_3898999320_0.png resize: (169, 222) 1374555241 -2.480219491941527 treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667_rle_crop_3898999311_0.png resize: (93, 119) 1374555242 -2.0342867039936423 treat image : temp/1753942829_2098305_1374549727_f34c71ba3ac53f1025056c8a1067e899_rle_crop_3898999315_0.png resize: (63, 71) 1374555243 -0.13132134244243354 treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999304_0.png resize: (137, 143) 1374555249 -2.251944961912031 treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667_rle_crop_3898999312_0.png resize: (988, 1416) 1374555250 -1.5838090765758135 treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667_rle_crop_3898999309_0.png resize: (167, 180) 1374555251 -1.9149507943184891 treat image : temp/1753942829_2098305_1374549696_41e15348a0e65b2120e01673a4d6c9f9_rle_crop_3898999316_0.png resize: (994, 1331) 1374555252 -2.6180188075769393 treat image : temp/1753942829_2098305_1374549693_382a162f2e990d180498bfcdfea66d62_rle_crop_3898999317_0.png resize: (475, 302) 1374555253 -0.4347703584282797 treat image : temp/1753942829_2098305_1374549691_fa38ca4ce5a68fc6a819f6c8757189ce_rle_crop_3898999318_0.png resize: (126, 148) 1374555254 0.13213842265522352 treat image : temp/1753942829_2098305_1374549689_626e3986db62ae7809f34aeec68c3f43_rle_crop_3898999321_0.png resize: (80, 217) 1374555256 -4.01557873470983 treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999308_0.png resize: (180, 252) 1374555258 -1.8472311740227216 Inside saveOutput : final : False verbose : 0 begin to insert list_values into class_photo_scores : length of list_valuse in save_photo_hashtag_id_thcl_score : 27 time used for this insertion : 0.012650489807128906 begin to insert list_values into photo_hahstag_ids : length of list_valuse in save_photo_hashtag_id_type : 27 time used for this insertion : 0.09824395179748535 save missing photos in datou_result : time spend for datou_step_exec : 8.536018371582031 time spend to save output : 0.1160886287689209 total time spend for step 6 : 8.652107000350952 step7:brightness Thu Jul 31 08:21:32 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure inside step calcul brightness treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536.jpg treat image : temp/1753942829_2098305_1374549730_f1b16ca1f6acdbd71b72596ac73189cd.jpg treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667.jpg treat image : temp/1753942829_2098305_1374549727_f34c71ba3ac53f1025056c8a1067e899.jpg treat image : temp/1753942829_2098305_1374549699_95a4c8494198bd03a65fd5c19c55c189.jpg treat image : temp/1753942829_2098305_1374549696_41e15348a0e65b2120e01673a4d6c9f9.jpg treat image : temp/1753942829_2098305_1374549693_382a162f2e990d180498bfcdfea66d62.jpg treat image : temp/1753942829_2098305_1374549691_fa38ca4ce5a68fc6a819f6c8757189ce.jpg treat image : temp/1753942829_2098305_1374549689_626e3986db62ae7809f34aeec68c3f43.jpg treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999306_0.png treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999307_0.png treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999305_0.png treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667_rle_crop_3898999310_0.png treat image : temp/1753942829_2098305_1374549727_f34c71ba3ac53f1025056c8a1067e899_rle_crop_3898999314_0.png treat image : temp/1753942829_2098305_1374549727_f34c71ba3ac53f1025056c8a1067e899_rle_crop_3898999313_0.png treat image : temp/1753942829_2098305_1374549691_fa38ca4ce5a68fc6a819f6c8757189ce_rle_crop_3898999319_0.png treat image : temp/1753942829_2098305_1374549689_626e3986db62ae7809f34aeec68c3f43_rle_crop_3898999320_0.png treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667_rle_crop_3898999311_0.png treat image : temp/1753942829_2098305_1374549727_f34c71ba3ac53f1025056c8a1067e899_rle_crop_3898999315_0.png treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999304_0.png treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667_rle_crop_3898999312_0.png treat image : temp/1753942829_2098305_1374549729_a4d77bf77014c2b3d8916ee1da546667_rle_crop_3898999309_0.png treat image : temp/1753942829_2098305_1374549696_41e15348a0e65b2120e01673a4d6c9f9_rle_crop_3898999316_0.png treat image : temp/1753942829_2098305_1374549693_382a162f2e990d180498bfcdfea66d62_rle_crop_3898999317_0.png treat image : temp/1753942829_2098305_1374549691_fa38ca4ce5a68fc6a819f6c8757189ce_rle_crop_3898999318_0.png treat image : temp/1753942829_2098305_1374549689_626e3986db62ae7809f34aeec68c3f43_rle_crop_3898999321_0.png treat image : temp/1753942829_2098305_1374549731_8a4485d7e4839922090f7593d55ff536_rle_crop_3898999308_0.png Inside saveOutput : final : False verbose : 0 begin to insert list_values into class_photo_scores : length of list_valuse in save_photo_hashtag_id_thcl_score : 27 time used for this insertion : 0.014426708221435547 begin to insert list_values into photo_hahstag_ids : length of list_valuse in save_photo_hashtag_id_type : 27 time used for this insertion : 0.1470167636871338 save missing photos in datou_result : time spend for datou_step_exec : 2.400585651397705 time spend to save output : 0.16570782661437988 total time spend for step 7 : 2.566293478012085 step8:velours_tree Thu Jul 31 08:21:35 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 VR 22-3-18 : For now we do not clean correctly the datou structure can't find the photo_desc_type Inside saveOutput : final : False verbose : 0 ouput is None No outpout to save, returning out of save general time spend for datou_step_exec : 0.13576817512512207 time spend to save output : 4.982948303222656e-05 total time spend for step 8 : 0.1358180046081543 step9:send_mail_cod Thu Jul 31 08:21:35 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 complete output_args for input 1 Inconsistent number of input and output, step which parrallelize and manage error in input by avoiding sending an output for this data can't be used in tree dependencies of input and output complete output_args for input 2 Inconsistent number of input and output, step which parrallelize and manage error in input by avoiding sending an output for this data can't be used in tree dependencies of input and output complete output_args for input 3 We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure dans la step send mail cod work_area: /home/admin/workarea/git/Velours/python in order to get the selector url, please entre the license of selector results_Auto_P25530216_31-07-2025_08_21_35.pdf 25530541 imagette255305411753942895 25530542 imagette255305421753942895 25530543 imagette255305431753942895 25530544 imagette255305441753942895 25530545 change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .imagette255305451753942895 25530546 imagette255305461753942895 25530547 change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .change filename to text .imagette255305471753942895 25530548 change filename to text .change filename to text .imagette255305481753942896 25530549 imagette255305491753942896 25530550 change filename to text .imagette255305501753942896 SELECT h.hashtag,pcr.value FROM MTRUser.portfolio_carac_ratio pcr, MTRBack.hashtags h where pcr.portfolio_id=25530216 and hashtag_type = 3594 and pcr.hashtag_id = h.hashtag_id; velour_link : https://www.fotonower.com/velours/25530541,25530542,25530543,25530544,25530545,25530546,25530547,25530548,25530549,25530550,25530551?tags=mal_croppe,flou,background,autre,pet_clair,pehd,papier,carton,metal,pet_fonce,environnement args[1374549731] : ((1374549731, -0.7534590637379442, 492688767), (1374549731, 0.3586540767591578, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com args[1374549730] : ((1374549730, 1.792805505149938, 492688767), (1374549730, 0.5075830099202937, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com args[1374549729] : ((1374549729, -0.8721791710020166, 492688767), (1374549729, 0.3146904873760836, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com args[1374549727] : ((1374549727, -1.022714744719436, 492688767), (1374549727, 0.21571047794985806, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com args[1374549699] : ((1374549699, 3.789729503338969, 492688767), (1374549699, 0.8067670128138562, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com args[1374549696] : ((1374549696, -1.7493171805176493, 492688767), (1374549696, 0.8783804422475662, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com args[1374549693] : ((1374549693, 0.7114478031869889, 492688767), (1374549693, 0.7611883712856443, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com args[1374549691] : ((1374549691, -0.8503868275701529, 492688767), (1374549691, 0.49516668332238567, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com args[1374549689] : ((1374549689, -2.7731210455708957, 492609224), (1374549689, 0.4326977101234335, 2107752395), '0.1288892103909465') We are sending mail with results at report@fotonower.com refus_total : 0.1288892103909465 2022-04-13 10:29:59 0 SELECT ph.photo_id,ph.url,ph.username,ph.uploaded_at,ph.text FROM MTRBack.photos ph, MTRUser.mtr_portfolio_photos mpp WHERE ph.photo_id=mpp.mtr_photo_id AND mpp.mtr_portfolio_id=25530216 AND mpp.hide_status=0 ORDER BY mpp.order LIMIT 0, 1000 start upload file to ovh https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25530216_31-07-2025_08_21_35.pdf results_Auto_P25530216_31-07-2025_08_21_35.pdf uploaded to url https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25530216_31-07-2025_08_21_35.pdf start insert file to database insert into MTRUser.mtr_files (mtd_id,mtr_portfolio_id,text,url,format,tags,file_size,value) values ('3318','25530216','results_Auto_P25530216_31-07-2025_08_21_35.pdf','https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25530216_31-07-2025_08_21_35.pdf','pdf','','0.37','0.1288892103909465') message_in_mail: Bonjour,
Veuillez trouver ci dessous les résultats du service carac on demand pour le portfolio: https://www.fotonower.com/view/25530216

https://www.fotonower.com/image?json=false&list_photos_id=1374549731
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374549730
La photo est trop floue, merci de reprendre une photo.(avec le score = 1.792805505149938)
https://www.fotonower.com/image?json=false&list_photos_id=1374549729
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374549727
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374549699
La photo est trop floue, merci de reprendre une photo.(avec le score = 3.789729503338969)
https://www.fotonower.com/image?json=false&list_photos_id=1374549696
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374549693
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374549691
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1374549689
Bravo, la photo est bien prise.

Dans ces conditions,le taux de refus est: 12.89%
Veuillez trouver les photos des contaminants.

exemples de contaminants: pet_clair: https://www.fotonower.com/view/25530545?limit=200
exemples de contaminants: papier: https://www.fotonower.com/view/25530547?limit=200
exemples de contaminants: carton: https://www.fotonower.com/view/25530548?limit=200
exemples de contaminants: pet_fonce: https://www.fotonower.com/view/25530550?limit=200
Veuillez trouver le rapport en pdf:https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25530216_31-07-2025_08_21_35.pdf.

Lien vers velours :https://www.fotonower.com/velours/25530541,25530542,25530543,25530544,25530545,25530546,25530547,25530548,25530549,25530550,25530551?tags=mal_croppe,flou,background,autre,pet_clair,pehd,papier,carton,metal,pet_fonce,environnement.


L'équipe Fotonower 202 b'' Server: nginx Date: Thu, 31 Jul 2025 06:21:38 GMT Content-Length: 0 Connection: close X-Message-Id: Yx7vkDznSO6HItDcQSfP0g Access-Control-Allow-Origin: https://sendgrid.api-docs.io Access-Control-Allow-Methods: POST Access-Control-Allow-Headers: Authorization, Content-Type, On-behalf-of, x-sg-elas-acl Access-Control-Max-Age: 600 X-No-CORS-Reason: https://sendgrid.com/docs/Classroom/Basics/API/cors.html Strict-Transport-Security: max-age=31536000; includeSubDomains Content-Security-Policy: frame-ancestors 'none' Cache-Control: no-cache X-Content-Type-Options: no-sniff Referrer-Policy: strict-origin-when-cross-origin Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : send_mail_cod we use saveGeneral [1374549731, 1374549730, 1374549729, 1374549727, 1374549699, 1374549696, 1374549693, 1374549691, 1374549689] Looping around the photos to save general results len do output : 0 before output type Used above Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549731', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549730', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549729', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549727', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549699', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549696', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549693', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549691', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549689', None, None, None, None, None, '3406282') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 9 time used for this insertion : 0.06705665588378906 save_final save missing photos in datou_result : time spend for datou_step_exec : 2.914885997772217 time spend to save output : 0.06727457046508789 total time spend for step 9 : 2.9821605682373047 step10:split_time_score Thu Jul 31 08:21:38 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! complete output_args for input 1 VR 22-3-18 : For now we do not clean correctly the datou structure begin split time score Catched exception ! Connect or reconnect ! TODO : Insert select and so on Begin split_port_in_batch_balle thcls : [{'id': 861, 'mtr_user_id': 31, 'name': 'Rungis_class_dechets_1212', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'Rungis_Aluminium,Rungis_Carton,Rungis_Papier,Rungis_Plastique_clair,Rungis_Plastique_dur,Rungis_Plastique_fonce,Rungis_Tapis_vide,Rungis_Tetrapak', 'svm_portfolios_learning': '1160730,571842,571844,571839,571933,571840,571841,572307', 'photo_hashtag_type': 999, 'photo_desc_type': 3963, 'type_classification': 'caffe', 'hashtag_id_list': '2107751280,2107750907,2107750908,2107750909,2107750910,2107750911,2107750912,2107750913'}] thcls : [{'id': 758, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 856, 'photo_desc_type': 3853, 'type_classification': 'caffe', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}] (('07', 9),) ERROR counted https://github.com/fotonower/Velours/issues/663#issuecomment-421136223 {} 31072025 25530216 Nombre de photos uploadées : 9 / 23040 (0%) 31072025 25530216 Nombre de photos taguées (types de déchets): 0 / 9 (0%) 31072025 25530216 Nombre de photos taguées (volume) : 0 / 9 (0%) elapsed_time : load_data_split_time_score 1.6689300537109375e-06 elapsed_time : order_list_meta_photo_and_scores 5.4836273193359375e-06 ????????? elapsed_time : fill_and_build_computed_from_old_data 0.0005002021789550781 Catched exception ! Connect or reconnect ! Catched exception ! Connect or reconnect ! elapsed_time : insert_dashboard_record_day_entry 0.20400524139404297 We will return after consolidate but for now we need the day, how to get it, for now depending on the previous heavy steps Qualite : 0.1288892103909465 find url: https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P25530216_31-07-2025_08_21_35.pdf select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 25530216 order by id desc limit 1 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! TODO Duplicate data, are they consistent 3 ? Duplicate data, are they consistent 4 ? SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=25530216 AND mptpi.`type`=3594 To do NUMBER BATCH : 0 # DISPLAY ALL COLLECTED DATA : {'31072025': {'nb_upload': 9, 'nb_taggue_class': 0, 'nb_taggue_densite': 0}} Inside saveOutput : final : True verbose : 0 saveOutput not yet implemented for datou_step.type : split_time_score we use saveGeneral [1374549731, 1374549730, 1374549729, 1374549727, 1374549699, 1374549696, 1374549693, 1374549691, 1374549689] Looping around the photos to save general results len do output : 1 /25530216Didn't retrieve data . before output type Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549731', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549730', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549729', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549727', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549699', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549696', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549693', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549691', None, None, None, None, None, '3406282') ('3318', None, None, None, None, None, None, None, '3406282') ('3318', '25530216', '1374549689', None, None, None, None, None, '3406282') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 10 time used for this insertion : 0.10127687454223633 save_final save missing photos in datou_result : time spend for datou_step_exec : 0.47134828567504883 time spend to save output : 0.1014859676361084 total time spend for step 10 : 0.5728342533111572 caffe_path_current : About to save ! 2 After save, about to update current ! ret : 2 len(input) + len(total_photo_id_missing) : 9 set_done_treatment 37.10user 20.46system 1:13.68elapsed 78%CPU (0avgtext+0avgdata 2591148maxresident)k 500040inputs+28912outputs (2major+1380249minor)pagefaults 0swaps