OASIS Content Management Interoperability Services (CMIS) TC
CMIS-86

Provide a new service that will allow search crawlers to efficiently navigate a CMIS repository.


    Details

    • Proposal:
      See http://www.oasis-open.org/apps/org/workgroup/cmis/document.php?document_id=31491. That document describes a new service that will allow search crawlers to efficiently navigate a CMIS repository.
    • Resolution:
      TC resolution: accepted, use GET instead of POST; defer scoping to V2

    Description

      CMIS needs to allow repositories to expose, in an efficient manner, what information inside the repository has changed, so that applications such as search crawlers can incrementally index the repository.

      In theory, a search crawler could index the content of a CMIS repository by using the navigation mechanisms already defined in the proposed specification. For example, a crawler could start at the root collection and, using the REST bindings, progressively navigate through the folders, fetch each document's content and metadata, and index that content. It could do this more efficiently by using the CMIS date/time stamps to query for documents modified since the last crawl, as sketched below.
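
      For illustration only, here is a minimal sketch of that naive crawl written against the Apache Chemistry OpenCMIS client library (an assumption of this sketch, not something this issue prescribes). The service URL, repository id, credentials, and cut-off timestamp are placeholders.

      import java.util.HashMap;
      import java.util.Map;

      import org.apache.chemistry.opencmis.client.api.CmisObject;
      import org.apache.chemistry.opencmis.client.api.Document;
      import org.apache.chemistry.opencmis.client.api.Folder;
      import org.apache.chemistry.opencmis.client.api.ItemIterable;
      import org.apache.chemistry.opencmis.client.api.QueryResult;
      import org.apache.chemistry.opencmis.client.api.Session;
      import org.apache.chemistry.opencmis.client.api.SessionFactory;
      import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
      import org.apache.chemistry.opencmis.commons.SessionParameter;
      import org.apache.chemistry.opencmis.commons.enums.BindingType;

      public class NaiveCrawler {

          public static void main(String[] args) {
              // Connection parameters are placeholders for a real repository.
              Map<String, String> params = new HashMap<String, String>();
              params.put(SessionParameter.ATOMPUB_URL, "http://example.org/cmis/atom");
              params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
              params.put(SessionParameter.REPOSITORY_ID, "repo1");
              params.put(SessionParameter.USER, "crawler");
              params.put(SessionParameter.PASSWORD, "secret");

              SessionFactory factory = SessionFactoryImpl.newInstance();
              Session session = factory.createSession(params);

              // Full crawl: walk the folder tree from the root collection.
              crawlFolder(session.getRootFolder());

              // Incremental crawl: query for documents modified since the last crawl.
              ItemIterable<QueryResult> changed = session.query(
                  "SELECT cmis:objectId, cmis:name FROM cmis:document "
                      + "WHERE cmis:lastModificationDate > TIMESTAMP '2009-06-01T00:00:00.000Z'",
                  false);
              for (QueryResult hit : changed) {
                  Object objectId = hit.getPropertyValueById("cmis:objectId");
                  System.out.println("re-index: " + objectId);
              }
          }

          private static void crawlFolder(Folder folder) {
              for (CmisObject child : folder.getChildren()) {
                  if (child instanceof Folder) {
                      crawlFolder((Folder) child);
                  } else if (child instanceof Document) {
                      // A real indexer would fetch ((Document) child).getContentStream()
                      // and the metadata here and feed both to the index.
                      System.out.println("index: " + child.getName());
                  }
              }
          }
      }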

      But there are problems with this approach. First, there is no mechanism for knowing what has been deleted from the repository, so the indexed content would contain 'dead' references. Second, there is no standard way to get the access control information needed to filter the search results so that a search consumer sees only the content he or she is allowed to see. Third, each indexer would solve the crawling of the repository in a different way (for example, one could use query and another could use navigation), resulting in different performance and scalability characteristics that would be hard to control in such a system. Finally, the cost of indexing an entire repository can be prohibitive for large repositories or for content that changes often, which requires support for incremental crawling and for paging the results.
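
      For context only: CMIS 1.0 ultimately defined a change log service (getContentChanges) along these lines, which reports creations, updates, deletions, and security changes and pages results with a change log token. The sketch below, again written against the Apache Chemistry OpenCMIS client, shows how a crawler could consume such a service; it is an illustration under those assumptions, not part of this proposal, and the 100-item page size is arbitrary.

      import org.apache.chemistry.opencmis.client.api.ChangeEvent;
      import org.apache.chemistry.opencmis.client.api.ChangeEvents;
      import org.apache.chemistry.opencmis.client.api.Session;
      import org.apache.chemistry.opencmis.commons.enums.ChangeType;

      public class IncrementalIndexer {

          /**
           * Pulls change events in pages and returns the token to persist for the
           * next crawl. 'session' is an already-connected OpenCMIS session and
           * 'lastToken' is the change log token saved after the previous crawl
           * (null for a first run, on repositories that allow it).
           */
          public static String crawlChanges(Session session, String lastToken) {
              String token = lastToken;
              boolean more = true;

              while (more) {
                  // Page through the change log, 100 events at a time.
                  ChangeEvents events = session.getContentChanges(token, true, 100);

                  for (ChangeEvent event : events.getChangeEvents()) {
                      if (event.getChangeType() == ChangeType.DELETED) {
                          // Drop the stale entry instead of leaving a 'dead' reference.
                          System.out.println("remove from index: " + event.getObjectId());
                      } else {
                          // CREATED, UPDATED or SECURITY (ACL) change: re-fetch and re-index.
                          System.out.println("re-index: " + event.getObjectId());
                      }
                  }

                  token = events.getLatestChangeLogToken();
                  more = events.getHasMoreItems();
              }
              return token;
          }
      }

      Because deletions and security (ACL) changes arrive as explicit events, an indexer built this way can prune dead references and re-filter results without re-crawling the entire repository.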

            People

            • Assignee: ethang Ethan Gur-esh
            • Reporter: melahn Gregory Melahn (Inactive)

              Dates

              • Created:
              • Updated:
              • Resolved: