Show simple item record

dc.contributor.author	MANUEL, A.
dc.contributor.author	GATTET, E.
dc.contributor.author	DE LUCA, Livio
dc.contributor.author	VERON, Philippe
dc.date.accessioned	2015
dc.date.available	2015
dc.date.issued	2013
dc.date.submitted	2015
dc.identifier.uri	http://hdl.handle.net/10985/9834
dc.description.abstract	Today's technologies provide innovative tools that increase our knowledge of historic monuments and support the preservation and valuation of cultural heritage. These tools help experts create, enrich, and share information on historical buildings. Among the various documentary sources, photographs contain a high level of detail about shapes and colors. With the development of image analysis and image-based modeling techniques, large sets of images can be spatially oriented around a digital mock-up. For these reasons, digital photographs prove to be an easy-to-use, affordable, and flexible support for heritage documentation. This article first presents an approach for 2D/3D semantic annotation in a set of spatially oriented photographs (whose positions and orientations in space are automatically estimated). It then focuses on a method for displaying those annotations on new images acquired in situ by mobile devices. First, an automated image-based reconstruction method produces 3D information (specifically, 3D coordinates) by processing a large set of images. The images are then semantically annotated, and a process uses the previously generated 3D information inherent to the images to transfer the annotations. This protocol therefore provides a simple way to finely annotate a large quantity of images at once instead of one by one. As these image annotations are directly tied to 3D information, they can be stored as 3D files. To bring up on screen the information related to a building, the user takes a picture in situ. An image-processing method estimates the orientation parameters of this new photograph within the already oriented image base. The annotations can then be precisely projected onto the oriented picture and sent back to the user. In this way, a continuity of information is established from the initial acquisition to the in situ visualization.
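The projection step described in the abstract can be illustrated with a standard pinhole camera model. This is a minimal sketch, not the authors' code: the intrinsics `K` and pose `(R, t)` are hypothetical stand-ins for the orientation parameters that the article's image-orientation step would estimate for the new photograph.

```python
# Illustrative sketch (not the authors' implementation): projecting a 3D
# annotation point onto a newly oriented photograph with a pinhole model.
import numpy as np

def project_point(X, K, R, t):
    """Project a 3D point X (world frame) into pixel coordinates (u, v)."""
    Xc = R @ X + t       # world frame -> camera frame
    x = K @ Xc           # camera frame -> homogeneous image coordinates
    return x[:2] / x[2]  # perspective division -> pixel coordinates

# Hypothetical example values.
K = np.array([[800.0,   0.0, 320.0],   # focal lengths and principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # camera aligned with the world axes
t = np.array([0.0, 0.0, 5.0])  # camera 5 units from the annotated point
X = np.array([0.0, 0.0, 0.0])  # a 3D coordinate carried by an annotation

u, v = project_point(X, K, R, t)
print(u, v)  # prints 320.0 240.0 (the point lands at the principal point)
```

With the new picture's estimated `K`, `R`, and `t`, every 3D coordinate stored with an annotation can be mapped to a pixel this way, which is what allows the annotations to be drawn precisely on the in-situ photograph.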
dc.language.iso	en
dc.publisher	Proceedings of Digital Heritage International Congress DH'13
dc.rights	Post-print
dc.subject	Semantic annotations, photogrammetry, image processing, dense image matching
dc.title	An approach for precise 2D/3D semantic annotation of spatially-oriented images for in-situ visualization applications
dc.typdoc	Conference papers with proceedings
dc.localisation	Aix-en-Provence center
dc.subject.hal	Computer science: Computer-aided engineering
ensam.audience	International
ensam.conference.title	Digital Heritage International Congress DH'13
ensam.conference.date	2013
ensam.country	France
ensam.title.proceeding	Proceedings of Digital Heritage International Congress DH'13
ensam.page	na
ensam.city	Marseille
hal.identifier	hal-01178690
hal.version	1
hal.submission.permitted	updateMetadata

