<div dir="ltr"><p dir="ltr" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><b>2nd International workshop on Machine vision and NLP for Document Analysis (VINALDO)</b><span style="font-weight:700"> </span></p><p dir="ltr" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><span style="font-family:Arial,Helvetica,sans-serif;font-size:small;color:rgb(34,34,34)"><a href="https://sites.google.com/view/vinaldo-workshop-icdar-2024/home" target="_blank"><b>https://sites.google.com/view/vinaldo-workshop-icdar-2024/home</b></a></span></p><p dir="ltr" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><b>As part of the 18th International Conference on Document Analysis and Recognition</b><br></p><p dir="ltr" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><span style="font-weight:700">(ICDAR 2024)</span></p><p dir="ltr" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><span style="font-family:Arial,Helvetica,sans-serif;font-size:small;color:rgb(34,34,34)"><a href="https://streaklinks.com/B1g0s1RaGsZY6cd03wNltzbF/https%3A%2F%2Ficdar2024.net%2F?email=boutalbi.rafika%40gmail.com" target="_blank"><b>https://icdar2024.net/</b></a></span></p><p dir="ltr" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px"><b>August 30- September 4, 2024 — </b><span style="font-family:Arial,Helvetica,sans-serif;font-size:small;color:rgb(34,34,34)"><b>Athens, Greece</b></span></p><p dir="ltr" style="line-height:1.31825;margin-right:0.226074pt;text-align:justify;margin-top:15.3292pt;margin-bottom:0pt"><span style="font-size:11.0042pt;font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline"><span style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:14px;font-weight:700;text-align:start">Context</span>
Document understanding is an essential task in application areas such as invoice data extraction, subject review, and medical prescription analysis, and it holds significant commercial potential. Several approaches have been proposed in the literature, but they are challenged by limited dataset availability and data-privacy constraints. Information extraction from documents involves several aspects, including (1) document classification, (2) text localization, (3) OCR (Optical Character Recognition), (4) table extraction, and (5) key information detection.

In this context, machine vision, and more precisely deep learning models for image processing, are attractive methods. Several models for document analysis have been developed for text box detection, text extraction, table extraction, and related tasks, using different kinds of deep learning approaches such as graph neural networks (GNNs). The text extracted from documents can in turn be represented with embeddings based on recent NLP approaches such as Transformers. Understanding spatial relationships is also critical for applications such as invoice analysis: the aim is to capture the structural connections between keywords (invoice number, date, amounts) and their values (the desired information). An effective approach therefore requires combining visual (spatial) and textual information.
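As a minimal illustration of this kind of fusion (assuming OCR has already produced tokens with bounding boxes), the sketch below concatenates a textual embedding with normalized layout coordinates for each token; the embedding function is a stand-in for a real Transformer encoder (e.g. BERT or LayoutLM), and the token and box values are hypothetical:

```python
import numpy as np

def embed_text(token: str, dim: int = 8) -> np.ndarray:
    # Placeholder for a real Transformer embedding; here we just hash
    # characters into a fixed-size vector for illustration.
    vec = np.zeros(dim)
    for i, ch in enumerate(token.lower()):
        vec[i % dim] += ord(ch)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def fuse_token(token: str, bbox, page_w: float, page_h: float) -> np.ndarray:
    """Concatenate a textual embedding with normalized spatial (layout) features."""
    x0, y0, x1, y1 = bbox
    spatial = np.array([x0 / page_w, y0 / page_h, x1 / page_w, y1 / page_h])
    return np.concatenate([embed_text(token), spatial])

# Hypothetical OCR output for an invoice line: (token, bounding box in pixels).
ocr_tokens = [("Invoice", (40, 30, 120, 50)),
              ("No:", (125, 30, 160, 50)),
              ("2024-0042", (165, 30, 260, 50))]
features = np.stack([fuse_token(t, b, page_w=600, page_h=800) for t, b in ocr_tokens])
print(features.shape)  # (3, 12): one joint text+layout vector per token
```

In practice the joint vectors would come from a pre-trained language model and would feed a graph or sequence model that links keywords such as "Invoice No:" to the neighbouring value, which is the kind of combined visual and textual processing the workshop targets.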
</span></p><p dir="ltr" style="color:rgba(0,0,0,0.87);font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif"><span id="m_-3433132689574950233m_7068639984392886946gmail-docs-internal-guid-fd5c13ca-7fff-2c2f-13c1-eec3e67900dd"></span><span style="font-weight:700">Objective</span><br></p><p dir="ltr" style="line-height:1.31825;margin-right:0.226074pt;text-align:justify;margin-top:15.3292pt;margin-bottom:0pt"><span style="font-family:Arial,sans-serif;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">After the success of </span><a href="https://streaklinks.com/B1g2oX7XVztjvqUeqQZcGj2O/https%3A%2F%2Fsites.google.com%2Fview%2Fvinaldo-workshop-icdar-2023%2Fhome" target="_blank" style="text-decoration-line:none"><span style="font-family:Arial,sans-serif;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;text-decoration-line:underline;vertical-align:baseline">VINALDO 2023</span></a><span style="font-family:Arial,sans-serif;color:rgb(33,33,33);font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">, in </span><span style="font-family:Arial,sans-serif;color:rgb(0,0,0);background-color:transparent;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">the second edition of the VINALDO workshop, we encourage the description of novel problems or applications for document analysis in the area of information retrieval that has emerged in recent years. On the other hand, we want to highlight a particular topic namely “Multi-view and Multimodal approaches”. In fact, the VINALDO workshop aims to combine visual and textual information for document analysis, in this context, multi-view and multimodal methods have really an important advantage in dealing with different types of data. Thus, we encourage works that combine machine vision and NLP through Multiview or/and multimodal approaches. 
Finally, we also encourage works that combine NLP and computer vision methods and develop new document datasets for novel applications.

The VINALDO workshop aims to provide a forum where experts from industry, science, and academia can exchange ideas and discuss ongoing research in computer vision and NLP for scanned document analysis.

Topics of interest

This workshop invites high-quality submissions related, but not limited, to the following topics:

- Multi-view document representation
- Multi-view algorithms for document clustering
- Multimodal document classification
- Multimodal deep networks
- Multi-view models for document ranking
- Document retrieval using multi-view document representation
- Document structure and layout learning
- OCR-based methods
- Semi-supervised methods for document analysis
- Dynamic graph analysis
- Information retrieval and extraction from documents
- Knowledge graphs for semantic document analysis
- Semantic understanding of document content
- Entity and link prediction in graphs
- Merging ontologies with graph-based methods using NLP techniques
- Cleansing and image enhancement techniques for scanned documents
- Font text recognition in scanned documents
- Table identification and extraction from scanned documents
- Handwriting detection and recognition in documents
- Signature detection and verification in documents
- Visual document structure understanding
- Visual Question Answering
- Invoice analysis
- Scanned document classification
- Scanned document summarization
- Scanned document translation
- Graph-based approaches for spatial components in scanned documents
- Graph representation learning for NLP

Submission

The workshop is open to original papers of a theoretical or practical nature.
Papers should be formatted according to the LNCS instructions for authors. VINALDO 2024 will follow a double-blind review process: authors should not include their names and affiliations anywhere in the manuscript, and should ensure that their identity is not revealed indirectly, by citing their previous work in the third person and omitting acknowledgments until the camera-ready version. Papers must be submitted via the workshop's EasyChair submission page (https://easychair.org/conferences/?conf=vinaldo2).

We welcome the following types of contributions:

- Full research papers (12-15 pages): finished or consolidated R&D works falling within one of the workshop topics.
- Short papers (6-8 pages): ongoing works with relevant preliminary results, open to discussion.

At least one author of each accepted paper must register for the workshop in order to present the paper. For further instructions, please refer to the ICDAR 2024 page (https://icdar2024.net/).

Important dates

- Submission deadline: March 20, 2024, 11:59 pm Pacific Time
- Decisions announced: April 29, 2024, 11:59 pm Pacific Time
- Camera-ready deadline: May 10, 2024, 11:59 pm Pacific Time
- Workshop: to be announced

Workshop Chairs

Rim Hantach (rim.hantach@gmail.com), Engie, France
Rafika Boutalbi (rafika.boutalbi@lis-lab.fr), Aix-Marseille University, France