python /home/admin/mtr/script_for_cron.py -j datou_current3 -m 20 -a ' -a 3318 ' -s datou_3318 -M 0 -S 0 -U 95,95,120 import MySQLdb succeeded Import error (python version) ['/Users/moilerat/Documents/Fotonower/install/caffe/distribute/python', '/home/admin/workarea/git/Velours/python/prod', '/home/admin/workarea/install/caffe_cuda8_python3/python', '/home/admin/workarea/install/darknet', '/home/admin/workarea/git/Velours/python', '/home/admin/workarea/install/caffe_frcnn_python3/py-faster-rcnn/caffe-fast-rcnn/python', '/home/admin/mtr/.credentials', '/home/admin/workarea/install/caffe/python', '/home/admin/workarea/install/caffe_frcnn/py-faster-rcnn/tools', '/home/admin/workarea/git/fotonowerpip', '/home/admin/workarea/install/segment-anything', '/home/admin/workarea/git/pyfvs', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/admin/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages'] process id : 2064713 load datou : 3318 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! Unexpected type for variable list_input_json ERROR or WARNING : can't parse json string Expecting value: line 1 column 1 (char 0) Tried to parse : chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? [(photo_id, hashtag_id, hashtag_type, x0, x1, y0, y1, score, seg_temp, polygons), ...] was removed should we ? chemin de la photo was removed should we ? [ (photo_id_loc, hashtag_id, hashtag_type, x0, x1, y0, y1, score, None), ...] was removed should we ? chemin de la photo was removed should we ? id de la photo (peut être local ou global) was removed should we ? chemin de la photo was removed should we ? (x0, y0, x1, y1) was removed should we ? chemin de la photo was removed should we ? donnée sous forme de texte was removed should we ? [ (photo_id, photo_id_loc, hashtag_type, x0, x1, y0, y1, score), ...] was removed should we ? None was removed should we ? donnée sous forme de texte was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? id de la photo (peut être local ou global) was removed should we ? donnée sous forme de texte was removed should we ? donnée sous forme de texte was removed should we ? donnée sous forme de texte was removed should we ? chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? chemin de la photo was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? None was removed should we ? donnée sous forme de nombre was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? (photo_id, hashtag_id, score_max) was removed should we ? donnée sous forme de texte was removed should we ? None was removed should we ? donnée sous forme de texte was removed should we ? [ptf_id0,ptf_id1...] was removed should we ? FOUND : 1 Here is data_from_sql_as_vec to set the ParamDescriptorType : (5275, 'learn_RUBBIA_REFUS_AMIENS_23', 16384, 25088, 'learn_RUBBIA_REFUS_AMIENS_23', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2021, 4, 23, 14, 19, 39), datetime.datetime(2021, 4, 23, 14, 19, 39)) load thcls load THCL from format json or kwargs add thcl : 2847 in CacheModelConfig load pdts add pdt : 5275 in CacheModelConfig Running datou job : batch_current TODO datou_current to load to do maybe to take outside batchDatouExec updating current state to 1 list_input_json: [] Current got : datou_id : 3318, datou_cur_ids : ['3824882'] with mtr_portfolio_ids : ['27463505'] and first list_photo_ids : [] new path : /proc/2064713/ Inside batchDatouExec : verbose : 0 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! List Step Type Loaded in datou : mask_detect, crop_condition, rle_unique_nms_with_priority, ventilate_hashtags_in_portfolio, final, blur_detection, brightness, velours_tree, send_mail_cod, split_time_score over limit max, limiting to limit_max 40 list_input_json : [] origin We have 1 , BFBFBFBFBFBFBFBFBFwe have missing 0 photos in the step downloads : photo missing : [] try to delete the photos missing in DB length of list_filenames : 9 ; length of list_pids : 9 ; length of list_args : 9 time to download the photos : 1.4728834629058838 About to test input to load we should then remove the video here, and this would fix the bug of datou_current ! Calling datou_exec Inside datou_exec : verbose : 0 number of steps : 10 step1:mask_detect Fri Oct 3 01:30:30 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Beginning of datou step mask_detect ! save_polygon : True begin detect begin to check gpu status inside check gpu memory l 3637 free memory gpu now : 5066 max_wait_temp : 1 max_wait : 0 gpu_flag : 0 2025-10-03 01:30:34.254501: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2025-10-03 01:30:34.284572: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 3492910000 Hz 2025-10-03 01:30:34.286693: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f6038000b60 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2025-10-03 01:30:34.286743: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2025-10-03 01:30:34.290778: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2025-10-03 01:30:34.417378: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xd4065f0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2025-10-03 01:30:34.417414: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA GeForce RTX 2080 Ti, Compute Capability 7.5 2025-10-03 01:30:34.418638: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-10-03 01:30:34.418926: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-10-03 01:30:34.420950: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-10-03 01:30:34.437287: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-10-03 01:30:34.437658: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-10-03 01:30:34.465184: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-10-03 01:30:34.469130: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-10-03 01:30:34.513847: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-10-03 01:30:34.515081: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-10-03 01:30:34.515647: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-10-03 01:30:34.516278: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2025-10-03 01:30:34.516295: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 2025-10-03 01:30:34.516304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N 2025-10-03 01:30:34.517986: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4614 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5) WARNING:tensorflow:From /home/admin/workarea/git/Velours/python/mtr/mask_rcnn/mask_detection.py:69: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead. 2025-10-03 01:30:34.974359: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-10-03 01:30:34.974517: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-10-03 01:30:34.974538: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-10-03 01:30:34.974556: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-10-03 01:30:34.974574: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-10-03 01:30:34.974591: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-10-03 01:30:34.974608: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-10-03 01:30:34.974625: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-10-03 01:30:34.975577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-10-03 01:30:34.976923: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:41:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5 coreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s 2025-10-03 01:30:34.976967: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 2025-10-03 01:30:34.976983: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-10-03 01:30:34.976997: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10 2025-10-03 01:30:34.977011: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10 2025-10-03 01:30:34.977025: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10 2025-10-03 01:30:34.977039: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10 2025-10-03 01:30:34.977053: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-10-03 01:30:34.978014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0 2025-10-03 01:30:34.978061: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix: 2025-10-03 01:30:34.978070: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0 2025-10-03 01:30:34.978079: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N 2025-10-03 01:30:34.979065: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4614 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:41:00.0, compute capability: 7.5) Using TensorFlow backend. WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:396: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version. Instructions for updating: box_ind is deprecated, use box_indices instead WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:703: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. WARNING:tensorflow:From /home/admin/workarea/install/Mask_RCNN/model.py:729: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.cast` instead. Inside mask_sub_process Inside mask_detect About to load cache.load_thcl_param To do loadFromThcl(), then load ParamDescType : thcl2847 thcls : [{'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}] thcl {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'} Update svm_hashtag_type_desc : 5275 FOUND : 1 Here is data_from_sql_as_vec to set the ParamDescriptorType : (5275, 'learn_RUBBIA_REFUS_AMIENS_23', 16384, 25088, 'learn_RUBBIA_REFUS_AMIENS_23', 'pool5', 10.0, None, None, 256, None, 0, None, 8, None, None, -1000.0, 1, datetime.datetime(2021, 4, 23, 14, 19, 39), datetime.datetime(2021, 4, 23, 14, 19, 39)) {'thcl': {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}, 'list_hashtags': ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'], 'list_hashtags_csv': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'svm_hashtag_type_desc': 5275, 'photo_desc_type': 5275, 'pb_hashtag_id_or_classifier': 0} list_class_names : ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'] Configurations: BACKBONE resnet101 BACKBONE_SHAPES [[160 160] [ 80 80] [ 40 40] [ 20 20] [ 10 10]] BACKBONE_STRIDES [4, 8, 16, 32, 64] BATCH_SIZE 1 BBOX_STD_DEV [0.1 0.1 0.2 0.2] DETECTION_MAX_INSTANCES 100 DETECTION_MIN_CONFIDENCE 0.3 DETECTION_NMS_THRESHOLD 0.3 GPU_COUNT 1 IMAGES_PER_GPU 1 IMAGE_MAX_DIM 640 IMAGE_MIN_DIM 640 IMAGE_PADDING True IMAGE_SHAPE [640 640 3] LEARNING_MOMENTUM 0.9 LEARNING_RATE 0.001 LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0} MASK_POOL_SIZE 14 MASK_SHAPE [28, 28] MAX_GT_INSTANCES 100 MEAN_PIXEL [123.7 116.8 103.9] MINI_MASK_SHAPE (56, 56) NAME learn_RUBBIA_REFUS_AMIENS_23 NUM_CLASSES 9 POOL_SIZE 7 POST_NMS_ROIS_INFERENCE 1000 POST_NMS_ROIS_TRAINING 2000 ROI_POSITIVE_RATIO 0.33 RPN_ANCHOR_RATIOS [0.5, 1, 2] RPN_ANCHOR_SCALES (16, 32, 64, 128, 256) RPN_ANCHOR_STRIDE 1 RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2] RPN_NMS_THRESHOLD 0.7 RPN_TRAIN_ANCHORS_PER_IMAGE 256 STEPS_PER_EPOCH 1000 TRAIN_ROIS_PER_IMAGE 200 USE_MINI_MASK True USE_RPN_ROIS True VALIDATION_STEPS 50 WEIGHT_DECAY 0.0001 model_param file didn't exist model_name : learn_RUBBIA_REFUS_AMIENS_23 model_type : mask_rcnn list file need : ['mask_model.h5'] file exist in s3 : ['mask_model.h5'] file manque in s3 : [] 2025-10-03 01:30:43.991154: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10 2025-10-03 01:30:44.229157: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2025-10-03 01:30:44.821179: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:44.825615: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.011851: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.014513: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.211144: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.213820: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.400311: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.402545: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.597218: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.600740: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.892346: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:45.894711: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:46.129374: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:46.131938: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR local folder : /data/models_weight/learn_RUBBIA_REFUS_AMIENS_23 /data/models_weight/learn_RUBBIA_REFUS_AMIENS_23/mask_model.h5 size_local : 256009536 size in s3 : 256009536 create time local : 2021-08-09 09:43:22 create time in s3 : 2021-08-06 18:54:04 mask_model.h5 already exist and didn't need to update list_images length : 9 NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 38.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387450132_197f8f18da236b3e75a2482bef6dfe1f.jpg 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 46.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387450129_21ea7ef3e4145eefeeeed8483137a7cd.jpg 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 25.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387450127_535d6d2b66b7d134f1dde23ed49f5e5d.jpg 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 28.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387450125_eacca71e44b21ce7377751e4b2db610b.jpg 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 32.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387449912_0d21f6267bd4566f5eaa69c1dca050d0.jpg 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 0.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387449910_62d9e38383bb48af65de339aa8b48502.jpg 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 45.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387449909_18c4ca8262e4e6185e0860e9939ede4d.jpg 2025-10-03 01:30:46.329869: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:46.332513: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:46.497257: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2025-10-03 01:30:46.499385: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 51.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387449906_fe9c779999f24c73857d76ced89830af.jpg 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph NEW PHOTO Processing 1 images image shape: (1080, 1920, 3) min: 34.00000 max: 255.00000 molded_images shape: (1, 640, 640, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 17) min: 0.00000 max: 1920.00000 error in detect the image : temp/1759447829_2064713_1387449903_b4d4ee2cfe0b4e9dbf2833d863aef94e.jpg 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] [[ROI/strided_slice_19/_20]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv1/convolution (defined at /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:3007) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_13584] Function call stack: keras_scratch_graph -> keras_scratch_graph Detection mask done ! Trying to reset tf kernel 2065339 begin to check gpu status inside check gpu memory l 3610 free memory gpu now : 11 tf kernel not reseted sub process len(results) : 0 len(list_Values) 0 None max_time_sub_proc : 3600 parent process len(results) : 0 len(list_Values) 0 process is alive finish correctly or not : True after detect begin to check gpu status inside check gpu memory l 3610 free memory gpu now : 1002 list_Values should be empty [] To do loadFromThcl(), then load ParamDescType : thcl2847 Catched exception ! Connect or reconnect ! thcls : [{'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'}] thcl {'id': 2847, 'mtr_user_id': 31, 'name': 'learn_RUBBIA_REFUS_AMIENS_23', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'background,papier,carton,metal,pet_clair,autre,pehd,pet_fonce,environnement', 'svm_portfolios_learning': '0,0,0,0,0,0,0,0,0', 'photo_hashtag_type': 3594, 'photo_desc_type': 5275, 'type_classification': 'mask_rcnn', 'hashtag_id_list': '0,0,0,0,0,0,0,0,0'} Update svm_hashtag_type_desc : 5275 ['background', 'papier', 'carton', 'metal', 'pet_clair', 'autre', 'pehd', 'pet_fonce', 'environnement'] WARNING : results is empty ! time spent for convertir_results : 0.6758220195770264 Inside saveOutput : final : False verbose : 0 eke 12-6-18 : saveMask need to be cleaned for new output ! save missing photos in datou_result : time spend for datou_step_exec : 19.51990532875061 time spend to save output : 7.772445678710938e-05 total time spend for step 1 : 19.519983053207397 step2:crop_condition Fri Oct 3 01:30:50 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure Loading chi in step crop with photo_hashtag_type : 3594 Loading chi in step crop for list_pids : 9 ! batch 1 Loaded 0 chid ids of type : 0 begin to crop the class : papier param for this class : {'min_score': 0.7} filtre for class : papier hashtag_id of this class : 492668766 begin to crop the class : carton param for this class : {'min_score': 0.7} filtre for class : carton hashtag_id of this class : 492774966 begin to crop the class : metal param for this class : {'min_score': 0.7} filtre for class : metal hashtag_id of this class : 492628673 begin to crop the class : pet_clair param for this class : {'min_score': 0.7} filtre for class : pet_clair hashtag_id of this class : 2107755846 begin to crop the class : autre param for this class : {'min_score': 0.7} filtre for class : autre hashtag_id of this class : 494826614 begin to crop the class : pehd param for this class : {'min_score': 0.7} filtre for class : pehd hashtag_id of this class : 628944319 begin to crop the class : pet_fonce param for this class : {'min_score': 0.7} filtre for class : pet_fonce hashtag_id of this class : 2107755900 delete rles from all chi Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : crop_condition we use saveGeneral [1387450132, 1387450129, 1387450127, 1387450125, 1387449912, 1387449910, 1387449909, 1387449906, 1387449903] Looping around the photos to save general results len do output : 0 before output type Here is an output not treated by saveGeneral : Here is an output not treated by saveGeneral : Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450132', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450129', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450127', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450125', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449912', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449910', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449909', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449906', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449903', None, None, None, None, None, '3824882') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 9 time used for this insertion : 0.10645627975463867 save_final save missing photos in datou_result : time spend for datou_step_exec : 0.18825531005859375 time spend to save output : 0.1066739559173584 total time spend for step 2 : 0.29492926597595215 step3:rle_unique_nms_with_priority Fri Oct 3 01:30:50 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array We expect there is only one output and this part is used while all output are not tuple or array VR 22-3-18 : For now we do not clean correctly the datou structure Begin step rle-unique-nms batch 1 Loaded 0 chid ids of type : 0 No data in photo_id : 1387450132 No data in photo_id : 1387450129 No data in photo_id : 1387450127 No data in photo_id : 1387450125 No data in photo_id : 1387449912 No data in photo_id : 1387449910 No data in photo_id : 1387449909 No data in photo_id : 1387449906 No data in photo_id : 1387449903 map_output_result : {1387450132: (0.0, 'Should be the crop_list due to order', 0.0), 1387450129: (0.0, 'Should be the crop_list due to order', 0.0), 1387450127: (0.0, 'Should be the crop_list due to order', 0.0), 1387450125: (0.0, 'Should be the crop_list due to order', 0.0), 1387449912: (0.0, 'Should be the crop_list due to order', 0.0), 1387449910: (0.0, 'Should be the crop_list due to order', 0.0), 1387449909: (0.0, 'Should be the crop_list due to order', 0.0), 1387449906: (0.0, 'Should be the crop_list due to order', 0.0), 1387449903: (0.0, 'Should be the crop_list due to order', 0.0)} End step rle-unique-nms Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : rle_unique_nms_with_priority we use saveGeneral [1387450132, 1387450129, 1387450127, 1387450125, 1387449912, 1387449910, 1387449909, 1387449906, 1387449903] Looping around the photos to save general results len do output : 9 /1387450132.Didn't retrieve data . /1387450129.Didn't retrieve data . /1387450127.Didn't retrieve data . /1387450125.Didn't retrieve data . /1387449912.Didn't retrieve data . /1387449910.Didn't retrieve data . /1387449909.Didn't retrieve data . /1387449906.Didn't retrieve data . /1387449903.Didn't retrieve data . before output type Used above Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450132', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450129', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450127', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450125', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449912', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449910', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449909', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449906', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449903', None, None, None, None, None, '3824882') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 27 time used for this insertion : 0.03638410568237305 save_final save missing photos in datou_result : time spend for datou_step_exec : 0.08548402786254883 time spend to save output : 0.03682351112365723 total time spend for step 3 : 0.12230753898620605 step4:ventilate_hashtags_in_portfolio Fri Oct 3 01:30:50 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure beginning of datou step ventilate_hashtags_in_portfolio : To implement ! Iterating over portfolio : 27463505 get user id for portfolio 27463505 SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=27463505 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('metal','carton','autre','mal_croppe','pehd','pet_clair','papier','background','environnement','flou','pet_fonce')) AND mptpi.`min_score`=0.5 To do To do SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=27463505 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('metal','carton','autre','mal_croppe','pehd','pet_clair','papier','background','environnement','flou','pet_fonce')) AND mptpi.`min_score`=0.5 To do Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") Catched exception ! Connect or reconnect ! (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')\n and cspi.crop_hashtag_id = chi.id' at line 3") To do ! Use context local managing function ! SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=27463505 AND mptpi.`type`=3594 AND mptpi.`hashtag_id` in (select hashtag_id FROM MTRBack.hashtags where hashtag in ('metal','carton','autre','mal_croppe','pehd','pet_clair','papier','background','environnement','flou','pet_fonce')) AND mptpi.`min_score`=0.5 To do lien utilise dans velours : https://marlene.fotonower.com/velours/27464229,27464230,27464231,27464232,27464233,27464234,27464235,27464236,27464237,27464238,27464239?tags=metal,carton,autre,mal_croppe,pehd,pet_clair,papier,background,environnement,flou,pet_fonce Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : ventilate_hashtags_in_portfolio we use saveGeneral [1387450132, 1387450129, 1387450127, 1387450125, 1387449912, 1387449910, 1387449909, 1387449906, 1387449903] Looping around the photos to save general results len do output : 1 /27463505. before output type Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450132', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450129', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450127', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450125', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449912', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449910', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449909', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449906', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449903', None, None, None, None, None, '3824882') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 10 time used for this insertion : 0.10207939147949219 save_final save missing photos in datou_result : time spend for datou_step_exec : 5.499187469482422 time spend to save output : 0.10237407684326172 total time spend for step 4 : 5.601561546325684 step5:final Fri Oct 3 01:30:56 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! complete output_args for input 2 VR 22-3-18 : For now we do not clean correctly the datou structure Beginning of datou step final ! Catched exception ! Connect or reconnect ! Inside saveOutput : final : False verbose : 0 original output for save of step final : {1387450132: ('0.0',), 1387450129: ('0.0',), 1387450127: ('0.0',), 1387450125: ('0.0',), 1387449912: ('0.0',), 1387449910: ('0.0',), 1387449909: ('0.0',), 1387449906: ('0.0',), 1387449903: ('0.0',)} new output for save of step final : {1387450132: ('0.0',), 1387450129: ('0.0',), 1387450127: ('0.0',), 1387450125: ('0.0',), 1387449912: ('0.0',), 1387449910: ('0.0',), 1387449909: ('0.0',), 1387449906: ('0.0',), 1387449903: ('0.0',)} [1387450132, 1387450129, 1387450127, 1387450125, 1387449912, 1387449910, 1387449909, 1387449906, 1387449903] Looping around the photos to save general results len do output : 9 /1387450132.Didn't retrieve data . /1387450129.Didn't retrieve data . /1387450127.Didn't retrieve data . /1387450125.Didn't retrieve data . /1387449912.Didn't retrieve data . /1387449910.Didn't retrieve data . /1387449909.Didn't retrieve data . /1387449906.Didn't retrieve data . /1387449903.Didn't retrieve data . before output type Used above Used above Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450132', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450129', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450127', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450125', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449912', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449910', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449909', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449906', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449903', None, None, None, None, None, '3824882') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 27 time used for this insertion : 0.037808895111083984 save_final save missing photos in datou_result : time spend for datou_step_exec : 0.3788332939147949 time spend to save output : 0.03838086128234863 total time spend for step 5 : 0.41721415519714355 step6:blur_detection Fri Oct 3 01:30:56 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure inside step blur_detection methode: ratio et variance treat image : temp/1759447829_2064713_1387450132_197f8f18da236b3e75a2482bef6dfe1f.jpg resize: (1080, 1920) 1387450132 -0.10182150475401078 treat image : temp/1759447829_2064713_1387450129_21ea7ef3e4145eefeeeed8483137a7cd.jpg resize: (1080, 1920) 1387450129 3.2025423468537086 treat image : temp/1759447829_2064713_1387450127_535d6d2b66b7d134f1dde23ed49f5e5d.jpg resize: (1080, 1920) 1387450127 -4.284066170139759 treat image : temp/1759447829_2064713_1387450125_eacca71e44b21ce7377751e4b2db610b.jpg resize: (1080, 1920) 1387450125 -4.203660885501499 treat image : temp/1759447829_2064713_1387449912_0d21f6267bd4566f5eaa69c1dca050d0.jpg resize: (1080, 1920) 1387449912 0.6556870253168069 treat image : temp/1759447829_2064713_1387449910_62d9e38383bb48af65de339aa8b48502.jpg resize: (1080, 1920) 1387449910 -1.2600230236788352 treat image : temp/1759447829_2064713_1387449909_18c4ca8262e4e6185e0860e9939ede4d.jpg resize: (1080, 1920) 1387449909 0.32192591530599 treat image : temp/1759447829_2064713_1387449906_fe9c779999f24c73857d76ced89830af.jpg resize: (1080, 1920) 1387449906 -0.862037763796502 treat image : temp/1759447829_2064713_1387449903_b4d4ee2cfe0b4e9dbf2833d863aef94e.jpg resize: (1080, 1920) 1387449903 -4.5426744898711675 Inside saveOutput : final : False verbose : 0 begin to insert list_values into class_photo_scores : length of list_valuse in save_photo_hashtag_id_thcl_score : 9 time used for this insertion : 0.03869509696960449 begin to insert list_values into photo_hahstag_ids : length of list_valuse in save_photo_hashtag_id_type : 9 time used for this insertion : 0.03619074821472168 save missing photos in datou_result : time spend for datou_step_exec : 10.111830472946167 time spend to save output : 0.09310364723205566 total time spend for step 6 : 10.204934120178223 step7:brightness Fri Oct 3 01:31:06 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure inside step calcul brightness treat image : temp/1759447829_2064713_1387450132_197f8f18da236b3e75a2482bef6dfe1f.jpg treat image : temp/1759447829_2064713_1387450129_21ea7ef3e4145eefeeeed8483137a7cd.jpg treat image : temp/1759447829_2064713_1387450127_535d6d2b66b7d134f1dde23ed49f5e5d.jpg treat image : temp/1759447829_2064713_1387450125_eacca71e44b21ce7377751e4b2db610b.jpg treat image : temp/1759447829_2064713_1387449912_0d21f6267bd4566f5eaa69c1dca050d0.jpg treat image : temp/1759447829_2064713_1387449910_62d9e38383bb48af65de339aa8b48502.jpg treat image : temp/1759447829_2064713_1387449909_18c4ca8262e4e6185e0860e9939ede4d.jpg treat image : temp/1759447829_2064713_1387449906_fe9c779999f24c73857d76ced89830af.jpg treat image : temp/1759447829_2064713_1387449903_b4d4ee2cfe0b4e9dbf2833d863aef94e.jpg Inside saveOutput : final : False verbose : 0 begin to insert list_values into class_photo_scores : length of list_valuse in save_photo_hashtag_id_thcl_score : 9 time used for this insertion : 0.040444135665893555 begin to insert list_values into photo_hahstag_ids : length of list_valuse in save_photo_hashtag_id_type : 9 time used for this insertion : 0.039704322814941406 save missing photos in datou_result : time spend for datou_step_exec : 2.127915382385254 time spend to save output : 0.09780621528625488 total time spend for step 7 : 2.225721597671509 step8:velours_tree Fri Oct 3 01:31:09 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 VR 22-3-18 : For now we do not clean correctly the datou structure can't find the photo_desc_type Inside saveOutput : final : False verbose : 0 ouput is None No outpout to save, returning out of save general time spend for datou_step_exec : 0.10988759994506836 time spend to save output : 5.91278076171875e-05 total time spend for step 8 : 0.10994672775268555 step9:send_mail_cod Fri Oct 3 01:31:09 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed complete output_args for input 0 complete output_args for input 1 complete output_args for input 2 Inconsistent number of input and output, step which parrallelize and manage error in input by avoiding sending an output for this data can't be used in tree dependencies of input and output complete output_args for input 3 We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! VR 22-3-18 : For now we do not clean correctly the datou structure dans la step send mail cod work_area: /home/admin/workarea/git/Velours/python in order to get the selector url, please entre the license of selector results_Auto_P27463505_03-10-2025_01_31_09.pdf 27464229 imagette274642291759447869 27464230 imagette274642301759447869 27464231 imagette274642311759447869 27464232 imagette274642321759447869 27464233 imagette274642331759447869 27464234 imagette274642341759447869 27464235 imagette274642351759447869 27464236 imagette274642361759447869 27464238 imagette274642381759447869 27464239 imagette274642391759447869 SELECT h.hashtag,pcr.value FROM MTRUser.portfolio_carac_ratio pcr, MTRBack.hashtags h where pcr.portfolio_id=27463505 and hashtag_type = 3594 and pcr.hashtag_id = h.hashtag_id; velour_link : https://marlene.fotonower.com/velours/27464229,27464230,27464231,27464232,27464233,27464234,27464235,27464236,27464237,27464238,27464239?tags=metal,carton,autre,mal_croppe,pehd,pet_clair,papier,background,environnement,flou,pet_fonce args[1387450132] : ((1387450132, -0.10182150475401078, 492688767), (1387450132, 0.5114447287971023, 2107752395), '0.0') We are sending mail with results at report@fotonower.com args[1387450129] : ((1387450129, 3.2025423468537086, 492688767), (1387450129, 0.5250061259534251, 2107752395), '0.0') We are sending mail with results at report@fotonower.com args[1387450127] : ((1387450127, -4.284066170139759, 492609224), (1387450127, 0.32813200195344466, 2107752395), '0.0') We are sending mail with results at report@fotonower.com args[1387450125] : ((1387450125, -4.203660885501499, 492609224), (1387450125, 0.4402545490914276, 2107752395), '0.0') We are sending mail with results at report@fotonower.com args[1387449912] : ((1387449912, 0.6556870253168069, 492688767), (1387449912, 0.5752519369673955, 2107752395), '0.0') We are sending mail with results at report@fotonower.com args[1387449910] : ((1387449910, -1.2600230236788352, 492688767), (1387449910, 0.15458275149452833, 2107752395), '0.0') We are sending mail with results at report@fotonower.com args[1387449909] : ((1387449909, 0.32192591530599, 492688767), (1387449909, 0.6376776040473515, 2107752395), '0.0') We are sending mail with results at report@fotonower.com args[1387449906] : ((1387449906, -0.862037763796502, 492688767), (1387449906, 1.0030224554972207, 2107752395), '0.0') We are sending mail with results at report@fotonower.com args[1387449903] : ((1387449903, -4.5426744898711675, 492609224), (1387449903, 0.3109460543654036, 2107752395), '0.0') We are sending mail with results at report@fotonower.com refus_total : 0.0 2022-04-13 10:29:59 0 SELECT ph.photo_id,ph.url,ph.username,ph.uploaded_at,ph.text FROM MTRBack.photos_view ph, MTRUser.mtr_portfolio_photos mpp WHERE ph.photo_id=mpp.mtr_photo_id AND mpp.mtr_portfolio_id=27463505 AND mpp.hide_status=0 ORDER BY mpp.order LIMIT 0, 1000 start upload file to ovh https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P27463505_03-10-2025_01_31_09.pdf results_Auto_P27463505_03-10-2025_01_31_09.pdf uploaded to url https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P27463505_03-10-2025_01_31_09.pdf start insert file to database insert into MTRUser.mtr_files (mtd_id,mtr_portfolio_id,text,url,format,tags,file_size,value) values ('3318','27463505','results_Auto_P27463505_03-10-2025_01_31_09.pdf','https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P27463505_03-10-2025_01_31_09.pdf','pdf','','0.08','0.0') message_in_mail: Bonjour,
Veuillez trouver ci dessous les résultats du service carac on demand pour le portfolio: https://www.fotonower.com/view/27463505

https://www.fotonower.com/image?json=false&list_photos_id=1387450132
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1387450129
La photo est trop floue, merci de reprendre une photo.(avec le score = 3.2025423468537086)
https://www.fotonower.com/image?json=false&list_photos_id=1387450127
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1387450125
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1387449912
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1387449910
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1387449909
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1387449906
Bravo, la photo est bien prise.
https://www.fotonower.com/image?json=false&list_photos_id=1387449903
Bravo, la photo est bien prise.

Dans ces conditions,le taux de refus est: 0.00%
Veuillez trouver les photos des contaminants.

Veuillez trouver le rapport en pdf:https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P27463505_03-10-2025_01_31_09.pdf.

Lien vers velours :https://marlene.fotonower.com/velours/27464229,27464230,27464231,27464232,27464233,27464234,27464235,27464236,27464237,27464238,27464239?tags=metal,carton,autre,mal_croppe,pehd,pet_clair,papier,background,environnement,flou,pet_fonce.


L'équipe Fotonower 202 b'' Server: nginx Date: Thu, 02 Oct 2025 23:31:11 GMT Content-Length: 0 Connection: close X-Message-Id: F6O1ThXuQnaLMuuiwtrIyQ Access-Control-Allow-Origin: https://sendgrid.api-docs.io Access-Control-Allow-Methods: POST Access-Control-Allow-Headers: Authorization, Content-Type, On-behalf-of, x-sg-elas-acl Access-Control-Max-Age: 600 X-No-CORS-Reason: https://sendgrid.com/docs/Classroom/Basics/API/cors.html Strict-Transport-Security: max-age=31536000; includeSubDomains Content-Security-Policy: frame-ancestors 'none' Cache-Control: no-cache X-Content-Type-Options: no-sniff Referrer-Policy: strict-origin-when-cross-origin Inside saveOutput : final : False verbose : 0 saveOutput not yet implemented for datou_step.type : send_mail_cod we use saveGeneral [1387450132, 1387450129, 1387450127, 1387450125, 1387449912, 1387449910, 1387449909, 1387449906, 1387449903] Looping around the photos to save general results len do output : 0 before output type Used above Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450132', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450129', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450127', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450125', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449912', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449910', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449909', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449906', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449903', None, None, None, None, None, '3824882') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 9 time used for this insertion : 0.10417318344116211 save_final save missing photos in datou_result : time spend for datou_step_exec : 2.457021713256836 time spend to save output : 0.10442423820495605 total time spend for step 9 : 2.561445951461792 step10:split_time_score Fri Oct 3 01:31:11 2025 VR 17-11-17 : now, only for linear exec dependencies tree, some output goes to fill the input of the next VR 22-3-18 : now we test the dependencies tree, but keep two separate code for datou_prepare_output_input until the code is correctly tested, clean and works in both case VR 22-3-18 : but we use the first code for the first step id = -1, build in the code of datou_exec VR 22-3-18 : we should manage here the case when we are at the first step instead of building this step before datou_exec Currently we do not manage missing dependencies information, that could maybe be correctly interpreted with default behavior Some of the step done at execution of the step could be done before when the tree of execution is build and the dependencies of different step analysed We should have FATAL ERROR but same_nb_input_output==True : this should be an optionnal input ! complete output_args for input 1 VR 22-3-18 : For now we do not clean correctly the datou structure begin split time score Catched exception ! Connect or reconnect ! TODO : Insert select and so on Begin split_port_in_batch_balle thcls : [{'id': 861, 'mtr_user_id': 31, 'name': 'Rungis_class_dechets_1212', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': 'Rungis_Aluminium,Rungis_Carton,Rungis_Papier,Rungis_Plastique_clair,Rungis_Plastique_dur,Rungis_Plastique_fonce,Rungis_Tapis_vide,Rungis_Tetrapak', 'svm_portfolios_learning': '1160730,571842,571844,571839,571933,571840,571841,572307', 'photo_hashtag_type': 999, 'photo_desc_type': 3963, 'type_classification': 'caffe', 'hashtag_id_list': '2107751280,2107750907,2107750908,2107750909,2107750910,2107750911,2107750912,2107750913'}] thcls : [{'id': 758, 'mtr_user_id': 31, 'name': 'Rungis_amount_dechets_fall_2018_v2', 'pb_hashtag_id': 0, 'live': b'\x00', 'list_hashtags': '05102018_Papier_non_papier_dense,05102018_Papier_non_papier_peu_dense,05102018_Papier_non_papier_presque_vide,05102018_Papier_non_papier_tres_dense,05102018_Papier_non_papier_tres_peu_dense', 'svm_portfolios_learning': '1108385,1108386,1108388,1108384,1108387', 'photo_hashtag_type': 856, 'photo_desc_type': 3853, 'type_classification': 'caffe', 'hashtag_id_list': '2107751013,2107751014,2107751015,2107751016,2107751017'}] (('18', 9),) ERROR counted https://github.com/fotonower/Velours/issues/663#issuecomment-421136223 {} 02102025 27463505 Nombre de photos uploadées : 9 / 23040 (0%) 02102025 27463505 Nombre de photos taguées (types de déchets): 0 / 9 (0%) 02102025 27463505 Nombre de photos taguées (volume) : 0 / 9 (0%) elapsed_time : load_data_split_time_score 1.6689300537109375e-06 elapsed_time : order_list_meta_photo_and_scores 6.198883056640625e-06 ????????? elapsed_time : fill_and_build_computed_from_old_data 0.0005548000335693359 Catched exception ! Connect or reconnect ! Catched exception ! Connect or reconnect ! elapsed_time : insert_dashboard_record_day_entry 0.656674861907959 We will return after consolidate but for now we need the day, how to get it, for now depending on the previous heavy steps find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463472 order by id desc limit 1 find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463474 order by id desc limit 1 find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463475 order by id desc limit 1 find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463477 order by id desc limit 1 find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463481 order by id desc limit 1 find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463483 order by id desc limit 1 find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463485 order by id desc limit 1 find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463488 order by id desc limit 1 find url: select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463504 order by id desc limit 1 Qualite : 0.0 find url: https://storage.sbg.cloud.ovh.net/v1/AUTH_3b171620e76e4af496c5fd050759c9f0/media.fotonower.com/results_Auto_P27463505_03-10-2025_01_31_09.pdf select completion_json, dashboard_run_id from MTRPhoto.dashboard_results where mtr_portfolio_id = 27463505 order by id desc limit 1 # VR 17-11-17 : to create in DB ! Here we check the datou graph and we reorder steps ! Tree builded and cycle checked, now we need to re-order the steps ! We have currenlty an error because there is no dependence between the last step for the case tile - detect - glue We can either keep the depence of, it is better to keep an order compatible with the id of steps if we do not have sons, so a lexical order : (number_son, step_id) All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! All sons are already in current list ! DONE and to test : checkNoCycle ! Here we check the consistency of inputs/outputs number between the given ones and the db ! eke 1-6-18 : checkConsistencyNbInputNbOutput should be processed after step reordering ! WARNING : number of outputs for step 7928 mask_detect is not consistent : 3 used against 2 in the step definition ! Step 8092 crop_condition have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! WARNING : number of outputs for step 8092 crop_condition is not consistent : 4 used against 3 in the step definition ! WARNING : number of inputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7933 rle_unique_nms_with_priority is not consistent : 2 used against 1 in the step definition ! WARNING : number of outputs for step 7935 ventilate_hashtags_in_portfolio is not consistent : 2 used against 1 in the step definition ! Step 7934 final have less inputs used (2) than in the step definition (3) : maybe we manage optionnal inputs ! Step 7934 final have less outputs used (1) than in the step definition (2) : some outputs may be not used ! WARNING : number of outputs for step 13649 velours_tree is not consistent : 2 used against 1 in the step definition ! Step 9283 split_time_score have less inputs used (1) than in the step definition (2) : maybe we manage optionnal inputs ! Number of inputs / outputs for each step checked ! Here we check the consistency of outputs/inputs types during steps connections eke 1-6-18 : checkConsistencyTypeOutputInput should be processed after checkConsistencyNbInputNbOutput ! We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of output 1 of step 7935 doesn't seem to be define in the database( WARNING : type of input 3 of step 7934 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : type of input 1 of step 7935 doesn't seem to be define in the database( WARNING : output 1 of step 7933 have datatype=7 whereas input 1 of step 7935 have datatype=None WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 2 of step 8092 doesn't seem to be define in the database( WARNING : type of output 3 of step 8092 doesn't seem to be define in the database( WARNING : type of input 1 of step 7933 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10917 doesn't seem to be define in the database( WARNING : type of output 2 of step 7928 doesn't seem to be define in the database( WARNING : type of input 1 of step 10918 doesn't seem to be define in the database( We ignore checkConsistencyTypeOutputInput for datou_step final ! WARNING : output 0 of step 7935 have datatype=10 whereas input 3 of step 10916 have datatype=6 WARNING : output 0 of step 7935 have datatype=10 whereas input 0 of step 13649 have datatype=18 WARNING : type of output 1 of step 13649 doesn't seem to be define in the database( WARNING : type of input 5 of step 10916 doesn't seem to be define in the database( DataTypes for each output/input checked ! TODO Duplicate data, are they consistent 3 ? Duplicate data, are they consistent 4 ? SELECT mptpi.id, mptpi.mtr_portfolio_id_1, mptpi.mtr_portfolio_id_2, mptpi.type, mptpi.hashtag_id, mptpi.min_score, mptpi.mtr_user_id, mptpi.created_at, mptpi.updated_at, mptpi.last_updated_at_desc, mptpi.last_updated_at_asc, h.hashtag FROM MTRPhoto.mtr_port_to_port_ids mptpi, MTRBack.hashtags h WHERE h.hashtag_id=mptpi.hashtag_id AND mptpi.`mtr_portfolio_id_1`=27463505 AND mptpi.`type`=3594 To do NUMBER BATCH : 0 # DISPLAY ALL COLLECTED DATA : {'02102025': {'nb_upload': 9, 'nb_taggue_class': 0, 'nb_taggue_densite': 0}} Inside saveOutput : final : True verbose : 0 saveOutput not yet implemented for datou_step.type : split_time_score we use saveGeneral [1387450132, 1387450129, 1387450127, 1387450125, 1387449912, 1387449910, 1387449909, 1387449906, 1387449903] Looping around the photos to save general results len do output : 1 /27463505Didn't retrieve data . before output type Here is an output not treated by saveGeneral : Managing all output in save final without adding information in the mtr_datou_result ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450132', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450129', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450127', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387450125', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449912', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449910', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449909', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449906', None, None, None, None, None, '3824882') ('3318', None, None, None, None, None, None, None, '3824882') ('3318', '27463505', '1387449903', None, None, None, None, None, '3824882') begin to insert list_values into mtr_datou_result : length of list_values in save_final : 10 time used for this insertion : 0.11811971664428711 save_final save missing photos in datou_result : time spend for datou_step_exec : 7.008483648300171 time spend to save output : 0.11837148666381836 total time spend for step 10 : 7.126855134963989 caffe_path_current : About to save ! 2 After save, about to update current ! ret : 2 len(input) + len(total_photo_id_missing) : 9 set_done_treatment 20.99user 13.61system 0:52.76elapsed 65%CPU (0avgtext+0avgdata 2020944maxresident)k 1418960inputs+3872outputs (5548major+827414minor)pagefaults 0swaps