+PPOCRLabelv2 is a semi-automatic graphic annotation tool suitable for the OCR field, with a built-in PP-OCR model to automatically detect and re-recognize data. It is written in Python3 and PyQT5, and supports rectangular box, table, irregular text and key information annotation modes. Annotations can be directly used for training PP-OCR detection and recognition models.
+- 2022.05: Add table annotations, follow `2.2 Table Annotations` for more information (by [whjdark](https://github.com/peterh0323); [Evezerest](https://github.com/Evezerest))
+ - Add KIE Mode by using `--kie`, for [detection + identification + keyword extraction] labeling.
+ - Improve user experience: prompt for the number of files and labels, optimize interaction, and fix bugs such as using only the CPU during inference
+ - New functions: Support using `C` or `X` to rotate box.
+- 2021.11.17:
+ - Support install and start PPOCRLabel through the whl package (by [d2623587501](https://github.com/d2623587501))
+ - Dataset division: divide the annotation file into training, validation and test parts (refer to section 3.4 below, by [MrCuiHao](https://github.com/MrCuiHao))
+- 2021.8.11:
+ - New functions: Open the dataset folder, image rotation (Note: Please delete the label box before rotating the image) (by [Wei-JL](https://github.com/Wei-JL))
+ - Added shortcut key description (Help-Shortcut Key), repaired the direction shortcut key movement function under batch processing (by [d2623587501](https://github.com/d2623587501))
+- 2021.2.5: New batch processing and undo functions (by [Evezerest](https://github.com/Evezerest)):
+ - **Batch processing function**: Press and hold the Ctrl key to select the box, you can move, copy, and delete in batches.
+ - **Undo function**: In the process of drawing a four-point label box or after editing the box, press Ctrl+Z to undo the previous operation.
+ - Fix image rotation and size problems, and optimize the process of editing the mark frame (by [ninetailskim](https://github.com/ninetailskim), [edencfc](https://github.com/edencfc)).
+- 2021.1.11: Optimize the labeling experience (by [edencfc](https://github.com/edencfc)):
+ - Users can choose whether to pop up the label input dialog after drawing the detection box in "View - Pop-up Label Input Dialog".
+ - The recognition result scrolls synchronously when users click related detection box.
+ - Click to modify the recognition result. (If you cannot change the result, please switch to the system default input method, or switch back to the original input method again.)
+- 2020.12.18: Support re-recognition of a single label box (by [ninetailskim](https://github.com/ninetailskim)) and improve the shortcut keys.
+
+
+
+## 1. Installation and Run
+
+### 1.1 Install PaddlePaddle
+
+```bash
+pip3 install --upgrade pip
+
+# If you have cuda9 or cuda10 installed on your machine, please run the following command to install
+python3 -m pip install paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple
+
+# If you only have cpu on your machine, please run the following command to install
+python3 -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
+```
+
+For more software version requirements, please refer to the instructions in the [Installation Document](https://www.paddlepaddle.org.cn/install/quick).
+
+### 1.2 Install and Run PPOCRLabel
+
+PPOCRLabel can be started in two ways: from the whl package or from the Python script. Starting from the whl package is more convenient, while starting from the Python script is convenient for secondary development.
+
+#### Windows
+
+```bash
+pip install PPOCRLabel # install
+
+# Select label mode and run
+PPOCRLabel # [Normal mode] for [detection + recognition] labeling
+PPOCRLabel --kie True # [KIE mode] for [detection + recognition + keyword extraction] labeling
+```
+
+> If you get the error `OSError: [WinError 126] The specified module could not be found` when installing Shapely on Windows, please try to download the Shapely whl file from http://www.lfd.uci.edu/~gohlke/pythonlibs/#shapely.
+>
+> Reference: [Solve shapely installation on windows](https://stackoverflow.com/questions/44398265/install-shapely-oserror-winerror-126-the-specified-module-could-not-be-found)
+
+#### Ubuntu Linux
+
+```bash
+pip3 install PPOCRLabel
+pip3 install trash-cli
+
+# Select label mode and run
+PPOCRLabel # [Normal mode] for [detection + recognition] labeling
+PPOCRLabel --kie True # [KIE mode] for [detection + recognition + keyword extraction] labeling
+```
+
+#### Run PPOCRLabel by Python Script
+
+If you modify the PPOCRLabel file (for example, specifying a new built-in model), it will be more convenient to see the results by running the Python script. If you still want to start from the whl package, you need to uninstall the whl package in the current environment and then rebuild and reinstall it as described in the next section.
+
+```bash
+cd ./PPOCRLabel # Switch to the PPOCRLabel directory
+
+# Select label mode and run
+python PPOCRLabel.py # [Normal mode] for [detection + recognition] labeling
+python PPOCRLabel.py --kie True # [KIE mode] for [detection + recognition + keyword extraction] labeling
+```
+
+## 2. Usage
+
+### 2.1 Steps
+
+1. Build and launch using the instructions above.
+
+2. Click 'Open Dir' in Menu/File to select the folder of the picture.<sup>[1]</sup>
+
+3. Click 'Auto recognition' to use the PP-OCR model to automatically annotate images marked with 'X'<sup>[2]</sup> before the file name.
+
+4. Create Box:
+
+ 4.1 Click 'Create RectBox' or press 'W' in English keyboard mode to draw a new rectangular detection box. Click and release the left mouse button to select a region to annotate the text area.
+
+ 4.2 Press 'Q' to enter four-point labeling mode, which enables you to create any four-point shape by clicking four points with the left mouse button in succession; DOUBLE CLICK the left mouse button to signal that labeling is complete.
+
+5. After the marking frame is drawn, the user clicks "OK", and the detection frame will be pre-assigned a "TEMPORARY" label.
+
+6. Click 're-Recognition', and the model will rewrite ALL recognition results in ALL detection boxes<sup>[3]</sup>.
+
+7. Single click the result in 'recognition result' list to manually change inaccurate recognition results.
+
+8. **Click "Check", the image status will switch to "√", and the program automatically jumps to the next image.**
+
+9. Click "Delete Image", and the image will be moved to the recycle bin.
+
+10. Labeling result: the user can export the label result manually through the menu "File - Export Label", while the program will also export automatically if "File - Auto export Label Mode" is selected. The manually checked label will be stored in *Label.txt* under the opened picture folder. Click "File"-"Export Recognition Results" in the menu bar, the recognition training data of such pictures will be saved in the *crop_img* folder, and the recognition label will be saved in *rec_gt.txt*<sup>[4]</sup>.
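+
+For reference, the exported files are plain text. A typical `Label.txt` line pairs an image path with a JSON list of boxes, and `rec_gt.txt` pairs a cropped image with its transcription. The values below are illustrative, not real output:
+
+```
+# Label.txt — one line per image: <image path>\t<JSON list of annotations>
+train_data/0001.jpg	[{"transcription": "PaddleOCR", "points": [[31, 10], [310, 10], [310, 58], [31, 58]], "difficult": false}]
+
+# rec_gt.txt — one line per cropped text region: <crop path>\t<transcription>
+crop_img/0001_crop_0.jpg	PaddleOCR
+```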
+
+### 2.2 Table Annotation
+Table annotation aims to extract the structure of a table in a picture and convert it to Excel format, so the annotation needs to be done together with external software for editing Excel. In PPOCRLabel, complete the text information labeling (text and position); in the Excel file, complete the table structure information labeling. The recommended steps are:
+
+1. Table annotation: After opening the table picture, click on the `Table Recognition` button in the upper right corner of PPOCRLabel, which will call the table recognition model in PP-Structure to automatically label
+ the table and pop up Excel at the same time.
+
+2. Change the recognition result: **label each cell** (i.e. the text in a cell is marked as a box). Right-click on the box and then click on `Cell Re-recognition`; you can use the model to automatically recognize the text within a cell.
+
+3. Mark the table structure: for each cell that contains text, **mark it with any identifier (such as `1`) in Excel**, ensuring that the merged cell structure is the same as in the original picture.
+
+ > Note: If there are blank cells in the table, you also need to mark them with a bounding box so that the total number of cells is the same as in the image.
+
+4. ***Adjust cell order:*** Click on the menu `View` - `Show Box Number` to show the box ordinal numbers, and drag all the results under the 'Recognition Results' column on the right side of the software interface so that the box numbers are arranged from left to right and top to bottom.
+
+5. Export JSON format annotation: close all Excel files corresponding to table images, then click `File` - `Export Table Label` to obtain the `gt.txt` annotation results (an illustrative line is sketched below).
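+
+A rough illustration of one `gt.txt` line, assuming the PubTabNet-style format used by PP-Structure table recognition (field names and values here are illustrative; inspect your exported file for the exact schema):
+
+```
+xx.jpg	{"html": {"structure": {"tokens": ["<tr>", "<td>", "</td>", "<td>", "</td>", "</tr>"]}, "cells": [{"tokens": ["1"], "bbox": [10, 12, 85, 46]}, {"tokens": ["2"], "bbox": [90, 12, 160, 46]}]}}
+```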
+
+### 2.3 Note
+
+[1] PPOCRLabel uses the opened folder as the project. After opening the image folder, the picture will not be displayed in the dialog. Instead, the pictures under the folder will be directly imported into the program after clicking "Open Dir".
+
+[2] The image status indicates whether the user has saved the image manually. If it has not been saved manually, the status is "X"; otherwise it is "√". PPOCRLabel will not relabel pictures with a status of "√".
+
+[3] After clicking "Re-recognize", the model will overwrite ALL recognition results in the picture. Therefore, if the recognition result has been manually changed before, it may change after re-recognition.
+
+[4] The files produced by PPOCRLabel can be found under the opened picture folder and include the following. Please do not change their contents manually, otherwise the program may behave abnormally.
+
+| File name | Description |
+| --- | --- |
+| Label.txt | The detection label file, which can be directly used for PP-OCR detection model training. The file is automatically exported after the user saves 5 label results; it is also written when the user closes the application or changes the file folder. |
+| fileState.txt | The picture status file, which saves the images in the current folder that have been manually confirmed by the user. |
+| Cache.cach | Cache files to save the results of model recognition. |
+| rec_gt.txt | The recognition label file, which can be directly used for PP-OCR identification model training, is generated after the user clicks on the menu bar "File"-"Export recognition result". |
+| crop_img | The recognition data, generated at the same time as *rec_gt.txt*. |
+
+## 3. Explanation
+
+### 3.1 Shortcut keys
+
+| Shortcut keys | Description |
+| --- | --- |
+| Ctrl + Shift + R | Re-recognize all the labels of the current image |
+| W | Create a rect box |
+| Q | Create a multi-points box |
+| X | Rotate the box anti-clockwise |
+| C | Rotate the box clockwise |
+| Ctrl + E | Edit label of the selected box |
+| Ctrl + X | Change key class of the box when enable `--kie` |
+| Ctrl + R | Re-recognize the selected box |
+| Ctrl + C | Copy and paste the selected box |
+| Ctrl + Left Mouse Button | Multi select the label box |
+| Backspace | Delete the selected box |
+| Ctrl + V | Check image |
+| Ctrl + Shift + d | Delete image |
+| D | Next image |
+| A | Previous image |
+| Ctrl++ | Zoom in |
+| Ctrl-- | Zoom out |
+| ↑→↓← | Move selected box |
+
+### 3.2 Built-in Model
+
+- Default model: PPOCRLabel uses the Chinese and English ultra-lightweight OCR model in PaddleOCR by default, supports Chinese, English and number recognition, and multiple language detection.
+
+- Model language switching: The language of the built-in model can be changed by clicking "PaddleOCR"-"Choose OCR Model" in the menu bar. Currently supported languages include French, German, Korean, and Japanese.
+ For specific model download links, please refer to [PaddleOCR Model List](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/models_list_en.md#multilingual-recognition-modelupdating)
+
+- **Custom Model**: If users want to replace the built-in model with their own inference model, they can follow the [Custom Model Code Usage](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/whl_en.md#31-use-by-code) by modifying the [instantiation of the PaddleOCR class](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/PPOCRLabel/PPOCRLabel.py#L86) in PPOCRLabel.py:
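+
+  The snippet below is a sketch of such an instantiation, not the exact upstream line — the model directory paths are hypothetical, and the keyword arguments shown are standard `paddleocr` whl parameters:
+
+  ```python
+  from paddleocr import PaddleOCR
+
+  # Point the detector/recognizer at custom inference model directories
+  # (paths are placeholders; keep the other arguments as in PPOCRLabel.py).
+  ocr = PaddleOCR(
+      use_angle_cls=True,              # enable the text direction classifier
+      use_gpu=False,                   # set True if a GPU build of Paddle is installed
+      lang="ch",                       # language of the built-in model
+      det_model_dir="./my_det_infer",  # hypothetical custom detection model dir
+      rec_model_dir="./my_rec_infer",  # hypothetical custom recognition model dir
+  )
+  ```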
+
+### 3.3 Export Label Result
+
+PPOCRLabel supports three ways to export Label.txt:
+
+- Automatically export: After selecting "File - Auto Export Label Mode", the program will automatically write the annotations into Label.txt every time the user confirms an image. If this option is not turned on, it will be automatically exported after detecting that the user has manually checked 5 images.
+
+ > The automatic export mode is turned off by default.
+
+- Manual export: Click "File-Export Marking Results" to manually export the label.
+
+- Close application export
+
+### 3.4 Dataset division
+
+- Enter the following command in the terminal to execute the dataset division script:
+
+    ```
+    cd ./PPOCRLabel # Change the directory to the PPOCRLabel folder
+    python gen_ocr_train_val_test.py --trainValTestRatio 6:2:2 --datasetRootPath ../train_data
+    ```
+
+    - `trainValTestRatio` is the division ratio of the number of images in the training set, validation set, and test set; set it according to your actual situation. The default is `6:2:2`.
+
+    - `datasetRootPath` is the storage path of the complete dataset labeled by PPOCRLabel. The default path is `PaddleOCR/train_data`. Before dividing the dataset, the directory structure should look as follows:
+
+  ```
+ |-train_data
+ |-crop_img
+ |- word_001_crop_0.png
+ |- word_002_crop_0.jpg
+ |- word_003_crop_0.jpg
+ | ...
+ | Label.txt
+ | rec_gt.txt
+ |- word_001.png
+ |- word_002.jpg
+ |- word_003.jpg
+ | ...
+ ```
+
+### 3.5 Error message
+
+- If paddleocr is installed via the whl package, it takes priority over calling the PaddleOCR class from paddleocr.py, which may cause an exception if the whl package is not updated.
+
+- For Linux users, if you get an error starting with **objc[XXXXX]** when opening the software, your opencv version is likely too high. It is recommended to install version 4.2:
+
+ ```
+ pip install opencv-python==4.2.0.32
+ ```
+- If you get an error starting with **Missing string id**, you need to recompile the resources:
+ ```
+ pyrcc5 -o libs/resources.py resources.qrc
+ ```
+- If you get the error `module 'cv2' has no attribute 'INTER_NEAREST'`, you need to delete all opencv-related packages first and then reinstall the 4.2.0.32 headless version of opencv; a sketch of the commands follows.
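+
+  For example, assuming the standard opencv distributions from PyPI are the ones installed:
+
+  ```
+  pip uninstall opencv-python opencv-contrib-python opencv-python-headless
+  pip install opencv-python-headless==4.2.0.32
+  ```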
+## 📣 Recent updates
+
+- **2022.10 Release [optimized JS version PP-OCRv3 model](./deploy/paddlejs/README.md)** with 4.3M model size, 8x faster inference time, and a ready-to-use web demo
+
+- 💥 **Live Playback: Introduction to PP-StructureV2 optimization strategy**. Scan [the QR code below](#Community) using WeChat, follow the PaddlePaddle official account and fill out the questionnaire to join the WeChat group, and get the live link and 20G of OCR learning materials (including the PDF2Word application, 10 models in vertical scenarios, etc.)
+ - Release [PP-StructureV2](./ppstructure/), with functions and performance fully upgraded, adapted to Chinese scenes, and new support for [Layout Recovery](./ppstructure/recovery) and **one line command to convert PDF to Word**;
+ - [Layout Analysis](./ppstructure/layout) optimization: model storage reduced by 95%, while speed increased by 11 times, and the average CPU time-cost is only 41ms;
+ - [Table Recognition](./ppstructure/table) optimization: 3 optimization strategies are designed, and the model accuracy is improved by 6% under comparable time consumption;
+ - [Key Information Extraction](./ppstructure/kie) optimization: a visual-independent model structure is designed, the accuracy of semantic entity recognition is increased by 2.8%, and the accuracy of relation extraction is increased by 9.1%.
+- **🔥2022.8 Release [OCR scene application collection](./applications/README_en.md)**
+ - Release **9 vertical models** such as digital tube, LCD screen, license plate, handwriting recognition model, high-precision SVTR model, etc., covering the main OCR vertical applications in general, manufacturing, finance, and transportation industries.
+- **2022.8 Add implementation of [8 cutting-edge algorithms](doc/doc_en/algorithm_overview_en.md)**
+ - Text Detection: [FCENet](doc/doc_en/algorithm_det_fcenet_en.md), [DB++](doc/doc_en/algorithm_det_db_en.md)
+ - Text Recognition: [ViTSTR](doc/doc_en/algorithm_rec_vitstr_en.md), [ABINet](doc/doc_en/algorithm_rec_abinet_en.md), [VisionLAN](doc/doc_en/algorithm_rec_visionlan_en.md), [SPIN](doc/doc_en/algorithm_rec_spin_en.md), [RobustScanner](doc/doc_en/algorithm_rec_robustscanner_en.md)
+ - Release [PP-OCRv3](./doc/doc_en/ppocr_introduction_en.md#pp-ocrv3): With comparable speed, the effect of Chinese scene is further improved by 5% compared with PP-OCRv2, the effect of English scene is improved by 11%, and the average recognition accuracy of 80 language multilingual models is improved by more than 5%.
+ - Release [PPOCRLabelv2](./PPOCRLabel): Add the annotation function for table recognition task, key information extraction task and irregular text image.
+ - Release interactive e-book [*"Dive into OCR"*](./doc/doc_en/ocr_book_en.md), covers the cutting-edge theory and code practice of OCR full stack technology.
+- [more](./doc/doc_en/update_en.md)
+
+
+## 🌟 Features
+
+PaddleOCR supports a variety of cutting-edge algorithms related to OCR, and on this basis has developed the industrial featured models/solutions [PP-OCR](./doc/doc_en/ppocr_introduction_en.md) and [PP-Structure](./ppstructure/README.md), covering the whole process of data production, model training, compression, inference and deployment.
+> It is recommended to start with the “quick experience” in the document tutorial
+
+
+## ⚡ Quick Experience
+
+- Web online experience for the ultra-lightweight OCR: [Online Experience](https://www.paddlepaddle.org.cn/hub/scene/ocr)
+- Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Android systems): [Sign in to the website to obtain the QR code for installing the App](https://ai.baidu.com/easyedge/app/openSource?from=paddlelite)
+- One line of code quick use: [Quick Start](./doc/doc_en/quickstart_en.md)
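+
+For instance, after installing the whl package, a single CLI call runs detection, direction classification and recognition on one image (the image path below is illustrative):
+
+```bash
+pip install paddleocr  # install the whl package
+
+# run detection + direction classification + recognition on one image
+paddleocr --image_dir ./imgs_en/img_12.jpg --use_angle_cls true --lang en
+```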
+
+
+<a name="book"></a>
+## 📚 E-book: *Dive Into OCR*
+- [Dive Into OCR ](./doc/doc_en/ocr_book_en.md)
+
+<a name="Community"></a>
+
+## 👫 Community
+
+- For international developers, we regard [PaddleOCR Discussions](https://github.com/PaddlePaddle/PaddleOCR/discussions) as our international community platform. All ideas and questions can be discussed here in English.
+
+- For Chinese developers, scan the QR code below with your WeChat to join the official technical discussion group. For richer community content, please refer to the [中文README](README_ch.md); we look forward to your participation.
+
+## 🛠️ PP-OCR Series Model List
+
+| Model introduction | Model name | Recommended scene | Detection model | Direction classifier | Recognition model |
+| --- | --- | --- | --- | --- | --- |
+| Chinese and English ultra-lightweight PP-OCRv3 model(16.2M) | ch_PP-OCRv3_xx | Mobile & Server | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_distill_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_train.tar) |
+| English ultra-lightweight PP-OCRv3 model(13.4M) | en_PP-OCRv3_xx | Mobile & Server | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_distill_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) | [inference model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_train.tar) |
+| Chinese and English ultra-lightweight PP-OCRv2 model(11.6M) | ch_PP-OCRv2_xx |Mobile & Server|[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_distill_train.tar)| [inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_train.tar)|
+| Chinese and English ultra-lightweight PP-OCR model (9.4M) | ch_ppocr_mobile_v2.0_xx | Mobile & server |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar)|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_train.tar) |
+| Chinese and English general PP-OCR model (143.4M) | ch_ppocr_server_v2.0_xx | Server |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_train.tar) |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_train.tar) |
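+
+As an example, any inference model from the table can be fetched and unpacked like this:
+
+```bash
+# download and unpack the PP-OCRv3 Chinese detection inference model
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
+tar -xf ch_PP-OCRv3_det_infer.tar
+```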
+
+
+- For more model downloads (including multiple languages), please refer to [PP-OCR series model downloads](./doc/doc_en/models_list_en.md).
+- For a new language request, please refer to [Guideline for new language_requests](#language_requests).
+- For structural document analysis models, please refer to [PP-Structure models](./ppstructure/docs/models_list_en.md).
+<a name="language_requests"></a>
+## Guideline for New Language Requests
+
+If you want to request support for a new language, a PR with the following file is needed:
+
+1. In the folder [ppocr/utils/dict](./ppocr/utils/dict), submit the dict text to this path and name it `{language}_dict.txt`; it should contain a list of all characters. Please see the format examples in the other files in that folder (a tiny illustration follows below).
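+
+   For illustration only, a dict file is plain text with one character per line (the characters below are made up; use your language's full character set):
+
+   ```
+   0
+   1
+   a
+   b
+   ?
+   ```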
+
+If your language has unique elements, please let us know in advance in any way, such as useful links, Wikipedia pages, and so on.
+
+For more details, please refer to the [Multilingual OCR Development Plan](https://github.com/PaddlePaddle/PaddleOCR/issues/1048).
+
+
+<a name="LICENSE"></a>
+## 📄 License
+This project is released under the <a href="https://github.com/PaddlePaddle/PaddleOCR/blob/master/LICENSE">Apache 2.0 license</a>.
+
+## Style Text
+
+The Style-Text data synthesis tool is based on the joint research work of Baidu and HUST, "Editing Text in the Wild" ([https://arxiv.org/abs/1908.03047](https://arxiv.org/abs/1908.03047)).
+
+Different from the commonly used GAN-based data synthesis tools, the main framework of Style-Text includes:
+* (1) Text foreground style transfer module.
+* (2) Background extraction module.
+* (3) Fusion module.
+
+After these three steps, you can quickly realize image text style transfer. The following figure shows some results of the data synthesis tool.
+
+<div align="center">
+ <img src="doc/images/10.png" width="1000">
+</div>
+
+
+<a name="Preparation"></a>
+#### Preparation
+
+1. Please refer to the [QUICK INSTALLATION](../doc/doc_en/installation_en.md) to install PaddlePaddle. A Python3 environment is strongly recommended.
+
+2. Download the pretrained models and unzip them into the `style_text_models` directory (this matches the default paths in the config below). If you save the models in another location, please modify the model file paths in `configs/config.yml`; you need to modify these three configurations at the same time:
+
+```
+bg_generator:
+ pretrain: style_text_models/bg_generator
+...
+text_generator:
+ pretrain: style_text_models/text_generator
+...
+fusion_generator:
+ pretrain: style_text_models/fusion_generator
+```
+
+<a name="Quick_Start"></a>
+### Quick Start
+
+#### Synthesis single image
+
+1. You can run `tools/synth_image.py` to generate a demo image, which is saved in the current folder.
+
+```bash
+python3 tools/synth_image.py -c configs/config.yml --style_image examples/style_images/2.jpg --text_corpus PaddleOCR --language en
+```
+
+* Note 1: The language option corresponds to the corpus. Currently, the tool only supports English (en), Simplified Chinese (ch) and Korean (ko).
+* Note 2: Style-Text is mainly used to generate images for OCR recognition models,
+  so the height of style images should be around 32 pixels. Images of other sizes may behave poorly.
+* Note 3: You can modify `use_gpu` in `configs/config.yml` to determine whether to use the GPU for prediction.
+
+
+
+For example, enter the following image and the corpus `PaddleOCR`; the synthesized image `fake_fusion.jpg` will be generated. What's more, the intermediate result `fake_bg.jpg` will also be saved, which is the background output.
+
+<div align="center">
+ <img src="doc/images/7.jpg" width="300">
+</div>
+
+
+* `fake_text.jpg` is the generated image with the same font style as `Style Input`.
+
+
+<div align="center">
+ <img src="doc/images/8.jpg" width="300">
+</div>
+
+
+#### Batch synthesis
+
+In actual application scenarios, it is often necessary to synthesize pictures in batches and add them to the training set. StyleText can use a batch of style pictures and corpus to synthesize data in batches. The synthesis process is as follows:
+
+1. The referenced dataset can be specified in `configs/dataset_config.yml` (a sketch of such a config follows the list below):
+
+    * `Global`:
+        * `output_dir`: output path for the synthesized data.
+    * `StyleSampler`:
+        * `image_home`: folder of the style images.
+        * `label_file`: file list of the style images; if `with_label` is true, it is the label file path.
+        * `with_label`: whether `label_file` is a label file.
+    * `CorpusGenerator`:
+        * `method`: method of the CorpusGenerator; supports `FileCorpus` and `EnNumCorpus`. If `EnNumCorpus` is used, no other configuration is needed; otherwise you need to set `corpus_file` and `language`.
+        * `language`: language of the corpus. Currently, the tool only supports English (en), Simplified Chinese (ch) and Korean (ko).
+        * `corpus_file`: filepath of the corpus. The corpus file should be a text file which will be split by line endings (`'\n'`). The corpus generator samples one line each time.
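+
+A minimal sketch of what `configs/dataset_config.yml` might look like, assuming the keys described above (paths and values are illustrative; check the shipped config for the exact schema):
+
+```yaml
+Global:
+  output_dir: output_data              # where synthesized images and labels are written
+StyleSampler:
+  image_home: examples                 # folder containing the style images
+  label_file: examples/image_list.txt  # file list (or label file) of style images
+  with_label: true                     # label_file above is a label file
+CorpusGenerator:
+  method: FileCorpus                   # sample corpus lines from a text file
+  language: ch                         # en / ch / ko
+  corpus_file: examples/corpus/example.txt
+```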
+
+
+Example of corpus file:
+```
+PaddleOCR
+飞桨文字识别
+StyleText
+风格文本图像数据合成
+```
+
+We provide a general dataset containing Chinese, English and Korean (50,000 images in all) for your trial ([download link](https://paddleocr.bj.bcebos.com/dygraph_v2.0/style_text/chkoen_5w.tar)), some examples are given below :
+
+<div align="center">
+ <img src="doc/images/5.png" width="800">
+</div>
+
+2. You can run the following command to start the synthesis task (see the sketch below):
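+
+   Assuming the batch entry point mirrors the single-image tool above, the command is:
+
+   ```bash
+   python3 tools/synth_dataset.py -c configs/dataset_config.yml
+   ```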
+If you run the command above directly, you will get example output data in the `output_data` folder.
+You will get synthesized images and labels as below:
+ <div align="center">
+ <img src="doc/images/12.png" width="800">
+ </div>
+There will be some cache files under the `label` folder. If the program exits unexpectedly, you can find cached labels there.
+When the program finishes normally, you will find all the labels in `label.txt`, which gives the final results.
+
+<a name="Applications"></a>
+### Applications
+We take two scenarios as examples, metal surface English number recognition and general Korean recognition, to illustrate practical cases of using StyleText to synthesize data to improve text recognition. The following figure shows some examples of real scene images and composite images:
+
+<div align="center">
+ <img src="doc/images/11.png" width="800">
+</div>
+
+
+After adding the above synthetic data for training, the accuracy of the recognition model is improved, which is shown in the following table:
+
+
+| Scenario | Characters | Raw Data | Test Data | Only Use Raw Data<br/>Recognition Accuracy | New Synthetic Data | Simultaneous Use of Synthetic Data<br/>Recognition Accuracy | Index Improvement |