• Solr: Schema Design


    Solr stores data in a structured form and can build indexes over that data as it is stored. The definition of this structure is configured through schema.xml.

    <?xml version="1.0" encoding="UTF-8" ?>
    <!--
     Licensed to the Apache Software Foundation (ASF) under one or more
     contributor license agreements.  See the NOTICE file distributed with
     this work for additional information regarding copyright ownership.
     The ASF licenses this file to You under the Apache License, Version 2.0
     (the "License"); you may not use this file except in compliance with
     the License.  You may obtain a copy of the License at
    
         http://www.apache.org/licenses/LICENSE-2.0
    
     Unless required by applicable law or agreed to in writing, software
     distributed under the License is distributed on an "AS IS" BASIS,
     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     See the License for the specific language governing permissions and
     limitations under the License.
    -->
    
    <!--  
     This is the Solr schema file. This file should be named "schema.xml" and
     should be in the conf directory under the solr home
     (i.e. ./solr/conf/schema.xml by default) 
     or located where the classloader for the Solr webapp can find it.
    
     This example schema is the recommended starting point for users.
     It should be kept correct and concise, usable out-of-the-box.
    
     For more information, on how to customize this file, please see
     http://wiki.apache.org/solr/SchemaXml
    -->
    
    <schema name="example" version="1.5">
      <!-- attribute "name" is the name of this schema and is only used for display purposes.
           version="x.y" is Solr's version number for the schema syntax and 
           semantics.  It should not normally be changed by applications.
    
           1.0: multiValued attribute did not exist, all fields are multiValued 
                by nature
           1.1: multiValued attribute introduced, false by default 
           1.2: omitTermFreqAndPositions attribute introduced, true by default 
                except for text fields.
           1.3: removed optional field compress feature
           1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
                behavior when a single string produces multiple tokens.  Defaults 
                to off for version >= 1.4
           1.5: omitNorms defaults to true for primitive field types 
                (int, float, boolean, string...)
         -->
    
    
       <!-- Valid attributes for fields:
         name: mandatory - the name for the field
         type: mandatory - the name of a field type from the 
           <types> fieldType section
         indexed: true if this field should be indexed (searchable or sortable)
         stored: true if this field should be retrievable
         docValues: true if this field should have doc values. Doc values are
           useful for faceting, grouping, sorting and function queries. Although not
           required, doc values will make the index faster to load, more
           NRT-friendly and more memory-efficient. They however come with some
           limitations: they are currently only supported by StrField, UUIDField
           and all Trie*Fields, and depending on the field type, they might
           require the field to be single-valued, be required or have a default
           value (check the documentation of the field type you're interested in
           for more information)
         multiValued: true if this field may contain multiple values per document
         omitNorms: (expert) set to true to omit the norms associated with
           this field (this disables length normalization and index-time
           boosting for the field, and saves some memory).  Only full-text
           fields or fields that need an index-time boost need norms.
           Norms are omitted for primitive (non-analyzed) types by default.
         termVectors: [false] set to true to store the term vector for a
           given field.
           When using MoreLikeThis, fields used for similarity should be
           stored for best performance.
         termPositions: Store position information with the term vector.  
           This will increase storage costs.
         termOffsets: Store offset information with the term vector. This 
           will increase storage costs.
         required: The field is required.  It will throw an error if the
           value does not exist
         default: a value that should be used if no value is specified
           when adding a document.
       -->
    
       <!-- field names should consist of alphanumeric or underscore characters only and
          not start with a digit.  This is not currently strictly enforced,
          but other field names will not have first class support from all components
          and back compatibility is not guaranteed.  Names with both leading and
          trailing underscores (e.g. _version_) are reserved.
       -->
    
       <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml
          or Solr won't start. _version_ and update log are required for SolrCloud
       --> 
       <field name="_version_" type="long" indexed="true" stored="true"/>
       
       <!-- points to the root document of a block of nested documents. Required for nested
          document support, may be removed otherwise
       -->
       <field name="_root_" type="string" indexed="true" stored="false"/>
    
       <!-- Only remove the "id" field if you have a very good reason to. While not strictly
         required, it is highly recommended. A <uniqueKey> is present in almost all Solr 
         installations. See the <uniqueKey> declaration below where <uniqueKey> is set to "id".
         Do NOT change the type and apply index-time analysis to the <uniqueKey> as it will likely 
         make routing in SolrCloud and document replacement in general fail. Limited _query_ time
         analysis is possible as long as the indexing process is guaranteed to index the term
         in a compatible way. Any analysis applied to the <uniqueKey> should _not_ produce multiple
         tokens
       -->   
       <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" /> 
    
       <!-- Dynamic field definitions allow using convention over configuration
           for fields via the specification of patterns to match field names. 
           EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
           RESTRICTION: the glob-like pattern in the name attribute must have
           a "*" only at the start or the end.  -->
       
       <dynamicField name="*_i"  type="int"    indexed="true"  stored="true"/>
       <dynamicField name="*_is" type="int"    indexed="true"  stored="true"  multiValued="true"/>
       <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
       <dynamicField name="*_ss" type="string"  indexed="true"  stored="true" multiValued="true"/>
       <dynamicField name="*_l"  type="long"   indexed="true"  stored="true"/>
       <dynamicField name="*_ls" type="long"   indexed="true"  stored="true"  multiValued="true"/>
       <dynamicField name="*_t"  type="text_general"    indexed="true"  stored="true"/>
       <dynamicField name="*_txt" type="text_general"   indexed="true"  stored="true" multiValued="true"/>
       <dynamicField name="*_en"  type="text_en"    indexed="true"  stored="true" multiValued="true"/>
       <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
       <dynamicField name="*_bs" type="boolean" indexed="true" stored="true"  multiValued="true"/>
       <dynamicField name="*_f"  type="float"  indexed="true"  stored="true"/>
       <dynamicField name="*_fs" type="float"  indexed="true"  stored="true"  multiValued="true"/>
       <dynamicField name="*_d"  type="double" indexed="true"  stored="true"/>
       <dynamicField name="*_ds" type="double" indexed="true"  stored="true"  multiValued="true"/>
    
       <!-- Type used to index the lat and lon components for the "location" FieldType -->
       <dynamicField name="*_coordinate"  type="tdouble" indexed="true"  stored="false" />
    
       <dynamicField name="*_dt"  type="date"    indexed="true"  stored="true"/>
       <dynamicField name="*_dts" type="date"    indexed="true"  stored="true" multiValued="true"/>
       <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>
    
       <!-- some trie-coded dynamic fields for faster range queries -->
       <dynamicField name="*_ti" type="tint"    indexed="true"  stored="true"/>
       <dynamicField name="*_tl" type="tlong"   indexed="true"  stored="true"/>
       <dynamicField name="*_tf" type="tfloat"  indexed="true"  stored="true"/>
       <dynamicField name="*_td" type="tdouble" indexed="true"  stored="true"/>
       <dynamicField name="*_tdt" type="tdate"  indexed="true"  stored="true"/>
    
       <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>
    
       <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
       <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>
    
       <dynamicField name="random_*" type="random" />
    
       <!-- uncomment the following to ignore any fields that don't already match an existing 
            field name or dynamic field, rather than reporting them as an error. 
            alternately, change the type="ignored" to some other type e.g. "text" if you want 
            unknown fields indexed and/or stored by default --> 
       <!--dynamicField name="*" type="ignored" multiValued="true" /-->
    
     <!-- Field to use to determine and enforce document uniqueness. 
          Unless this field is marked with required="false", it will be a required field
       -->
     <uniqueKey>id</uniqueKey>
    
      <!-- copyField commands copy one field to another at the time a document
            is added to the index.  It's used either to index the same field differently,
            or to add multiple fields to the same field for easier/faster searching.  -->
    
      <!--
       <copyField source="title" dest="text"/>
       <copyField source="body" dest="text"/>
      -->
      
        <!-- field type definitions. The "name" attribute is
           just a label to be used by field definitions.  The "class"
           attribute and any other attributes determine the real
           behavior of the fieldType.
             Class names starting with "solr" refer to java classes in a
           standard package such as org.apache.solr.analysis
        -->
    
        <!-- The StrField type is not analyzed, but indexed/stored verbatim.
           It supports doc values but in that case the field needs to be
           single-valued and either required or have a default value.
          -->
        <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
    
        <!-- boolean type: "true" or "false" -->
        <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
    
        <!-- sortMissingLast and sortMissingFirst are optional attributes that are
             currently supported on types that are sorted internally as strings
             and on numeric types.
           This includes "string","boolean", and, as of 3.5 (and 4.x),
           int, float, long, date, double, including the "Trie" variants.
           - If sortMissingLast="true", then a sort on this field will cause documents
             without the field to come after documents with the field,
             regardless of the requested sort order (asc or desc).
           - If sortMissingFirst="true", then a sort on this field will cause documents
             without the field to come before documents with the field,
             regardless of the requested sort order.
           - If sortMissingLast="false" and sortMissingFirst="false" (the default),
             then default lucene sorting will be used which places docs without the
             field first in an ascending sort and last in a descending sort.
        -->    
    
        <!--
          Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.
    
          These fields support doc values, but they require the field to be
          single-valued and either be required or have a default value.
        -->
        <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
    
        <!--
         Numeric field types that index each value at various levels of precision
         to accelerate range queries when the number of values between the range
         endpoints is large. See the javadoc for NumericRangeQuery for internal
         implementation details.
    
         Smaller precisionStep values (specified in bits) will lead to more tokens
         indexed per value, slightly larger index size, and faster range queries.
         A precisionStep of 0 disables indexing at different precision levels.
        -->
        <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>
        <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0"/>
        <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0"/>
        <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>
    
        <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
             is a more restricted form of the canonical representation of dateTime
             http://www.w3.org/TR/xmlschema-2/#dateTime    
             The trailing "Z" designates UTC time and is mandatory.
             Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
             All other components are mandatory.
    
             Expressions can also be used to denote calculations that should be
             performed relative to "NOW" to determine the value, ie...
    
                   NOW/HOUR
                      ... Round to the start of the current hour
                   NOW-1DAY
                      ... Exactly 1 day prior to now
                   NOW/DAY+6MONTHS+3DAYS
                      ... 6 months and 3 days in the future from the start of
                          the current day
                          
             Consult the TrieDateField javadocs for more information.
    
             Note: For faster range queries, consider the tdate type
          -->
        <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
    
        <!-- A Trie based date field for faster date range queries and date faceting. -->
        <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
    
    
        <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->
        <fieldType name="binary" class="solr.BinaryField"/>
    
        <!-- The "RandomSortField" is not used to store or search any
             data.  You can declare fields of this type in your schema
             to generate pseudo-random orderings of your docs for sorting 
             or function purposes.  The ordering is generated based on the field
             name and the version of the index. As long as the index version
             remains unchanged, and the same field name is reused,
             the ordering of the docs will be consistent.  
             If you want different pseudo-random orderings of documents,
             for the same version of the index, use a dynamicField and
             change the field name in the request.
         -->
        <fieldType name="random" class="solr.RandomSortField" indexed="true" />
    
        <!-- solr.TextField allows the specification of custom text analyzers
             specified as a tokenizer and a list of token filters. Different
             analyzers may be specified for indexing and querying.
    
             The optional positionIncrementGap puts space between multiple fields of
             this type on the same document, with the purpose of preventing false phrase
             matching across fields.
    
             For more info on customizing your analyzer chain, please see
             http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
         -->
    
        <!-- One can also specify an existing Analyzer class that has a
             default constructor via the class attribute on the analyzer element.
             Example:
        <fieldType name="text_greek" class="solr.TextField">
          <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
        </fieldType>
        -->
    
        <!-- A text field that only splits on whitespace for exact matching of words -->
        <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
          <analyzer>
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
          </analyzer>
        </fieldType>
    
        <!-- A general text field that has reasonable, generic
             cross-language defaults: it tokenizes with StandardTokenizer,
       removes stop words from case-insensitive "stopwords.txt"
       (empty by default), and down cases.  At query time only, it
       also applies synonyms. -->
        <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
          <analyzer type="index">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
            <!-- in this example, we will only use synonyms at query time
            <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
            -->
            <filter class="solr.LowerCaseFilterFactory"/>
          </analyzer>
          <analyzer type="query">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
            <filter class="solr.LowerCaseFilterFactory"/>
          </analyzer>
        </fieldType>
    
        <!-- A text field with defaults appropriate for English: it
             tokenizes with StandardTokenizer, removes English stop words
             (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
             finally applies Porter's stemming.  The query time analyzer
             also applies synonyms from synonyms.txt. -->
        <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
          <analyzer type="index">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <!-- in this example, we will only use synonyms at query time
            <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
            -->
            <!-- Case insensitive stop word removal.
            -->
            <filter class="solr.StopFilterFactory"
                    ignoreCase="true"
                    words="lang/stopwords_en.txt"
                    />
            <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EnglishPossessiveFilterFactory"/>
            <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
            <filter class="solr.EnglishMinimalStemFilterFactory"/>
      -->
            <filter class="solr.PorterStemFilterFactory"/>
          </analyzer>
          <analyzer type="query">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
            <filter class="solr.StopFilterFactory"
                    ignoreCase="true"
                    words="lang/stopwords_en.txt"
                    />
            <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EnglishPossessiveFilterFactory"/>
            <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
      <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
            <filter class="solr.EnglishMinimalStemFilterFactory"/>
      -->
            <filter class="solr.PorterStemFilterFactory"/>
          </analyzer>
        </fieldType>
    
        <!-- A text field with defaults appropriate for English, plus
       aggressive word-splitting and autophrase features enabled.
       This field is just like text_en, except it adds
       WordDelimiterFilter to enable splitting and matching of
       words on case-change, alpha numeric boundaries, and
       non-alphanumeric chars.  This means certain compound word
       cases will work, for example query "wi fi" will match
       document "WiFi" or "wi-fi".
            -->
        <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
          <analyzer type="index">
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <!-- in this example, we will only use synonyms at query time
            <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
            -->
            <!-- Case insensitive stop word removal.
            -->
            <filter class="solr.StopFilterFactory"
                    ignoreCase="true"
                    words="lang/stopwords_en.txt"
                    />
            <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
            <filter class="solr.LowerCaseFilterFactory"/>
            <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
            <filter class="solr.PorterStemFilterFactory"/>
          </analyzer>
          <analyzer type="query">
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
            <filter class="solr.StopFilterFactory"
                    ignoreCase="true"
                    words="lang/stopwords_en.txt"
                    />
            <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
            <filter class="solr.LowerCaseFilterFactory"/>
            <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
            <filter class="solr.PorterStemFilterFactory"/>
          </analyzer>
        </fieldType>
    
        <!-- Less flexible matching, but less false matches.  Probably not ideal for product names,
             but may be good for SKUs.  Can insert dashes in the wrong place and still match. -->
        <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
          <analyzer>
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
            <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
            <filter class="solr.LowerCaseFilterFactory"/>
            <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
            <filter class="solr.EnglishMinimalStemFilterFactory"/>
            <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
                  possible with WordDelimiterFilter in conjunction with stemming. -->
            <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
          </analyzer>
        </fieldType>
    
        <!-- Just like text_general except it reverses the characters of
       each token, to enable more efficient leading wildcard queries. -->
        <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
          <analyzer type="index">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
            <filter class="solr.LowerCaseFilterFactory"/>
            <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
               maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
          </analyzer>
          <analyzer type="query">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
            <filter class="solr.LowerCaseFilterFactory"/>
          </analyzer>
        </fieldType>
    
        <!-- This is an example of using the KeywordTokenizer along
             with various TokenFilterFactories to produce a sortable field
             that does not include some properties of the source text
          -->
        <fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
          <analyzer>
            <!-- KeywordTokenizer does no actual tokenizing, so the entire
                 input string is preserved as a single token
              -->
            <tokenizer class="solr.KeywordTokenizerFactory"/>
            <!-- The LowerCase TokenFilter does what you expect, which can be
                 useful when you want your sorting to be case insensitive
              -->
            <filter class="solr.LowerCaseFilterFactory" />
            <!-- The TrimFilter removes any leading or trailing whitespace -->
            <filter class="solr.TrimFilterFactory" />
            <!-- The PatternReplaceFilter gives you the flexibility to use
                 Java Regular expression to replace any sequence of characters
                 matching a pattern with an arbitrary replacement string, 
                 which may include back references to portions of the original
                 string matched by the pattern.
                 
                 See the Java Regular Expression documentation for more
                 information on pattern and replacement string syntax.
                 
                 http://docs.oracle.com/javase/7/docs/api/java/util/regex/package-summary.html
              -->
            <filter class="solr.PatternReplaceFilterFactory"
                    pattern="([^a-z])" replacement="" replace="all"
            />
          </analyzer>
        </fieldType>
    
        <!-- lowercases the entire field value, keeping it as a single token.  -->
        <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
          <analyzer>
            <tokenizer class="solr.KeywordTokenizerFactory"/>
            <filter class="solr.LowerCaseFilterFactory" />
          </analyzer>
        </fieldType>
    
        <!-- since fields of this type are by default not stored or indexed,
             any data added to them will be ignored outright.  --> 
        <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
    
        <!-- This point type indexes the coordinates as separate fields (subFields)
          If subFieldType is defined, it references a type, and a dynamic field
          definition is created matching *___<typename>.  Alternately, if 
          subFieldSuffix is defined, that is used to create the subFields.
          Example: if subFieldType="double", then the coordinates would be
            indexed in fields myloc_0___double,myloc_1___double.
          Example: if subFieldSuffix="_d" then the coordinates would be indexed
            in fields myloc_0_d,myloc_1_d
          The subFields are an implementation detail of the fieldType, and end
          users normally should not need to know about them.
         -->
        <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>
    
        <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
        <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
    
        <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
          For more information about this and other Spatial fields new to Solr 4, see:
          http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
        -->
        <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
            geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />
    
        <!-- Spatial rectangle (bounding box) field. It supports most spatial predicates, and has
         special relevancy modes: score=overlapRatio|area|area2D (local-param to the query).  DocValues is recommended for
         relevancy. -->
        <fieldType name="bbox" class="solr.BBoxField"
                   geo="true" distanceUnits="kilometers" numberType="_bbox_coord" />
        <fieldType name="_bbox_coord" class="solr.TrieDoubleField" precisionStep="8" docValues="true" stored="false"/>
    
       <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
            Parameters:
              defaultCurrency: Specifies the default currency if none specified. Defaults to "USD"
              precisionStep:   Specifies the precisionStep for the TrieLong field used for the amount
              providerClass:   Lets you plug in other exchange provider backend:
                               solr.FileExchangeRateProvider is the default and takes one parameter:
                                 currencyConfig: name of an xml file holding exchange rates
                               solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
                                 ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
                                 refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
       -->
        <fieldType name="currency" class="solr.CurrencyField" precisionStep="8" defaultCurrency="USD" currencyConfig="currency.xml" />
    
    </schema>

    schema.xml lives in the solr/conf/ directory. It is similar to a database table definition file: it defines the data types of the data to be indexed, covering mainly types, fields, and a number of other default settings. Solr's schema configuration is very flexible and rich; the sections below describe it in detail.

    Basic schema configuration

    Let's start with a simple schema configuration:

    <?xml version="1.0" encoding="UTF-8" ?>
    <schema name="user" version="1.5">
       <field name="_version_" type="long" indexed="true" stored="true"/>
       <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
       <field name="name" type="text_general" indexed="true" stored="true"/>
       <uniqueKey>id</uniqueKey>
    
       <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
       <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
       <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
          <analyzer type="index">
            <tokenizer class="solr.StandardTokenizerFactory"/>
          </analyzer>
          <analyzer type="query">
            <tokenizer class="solr.StandardTokenizerFactory"/>
          </analyzer>
        </fieldType>
    </schema>

    The root element of schema.xml is schema. It has a name attribute whose value can be chosen freely; there is not much else to say about the root element. Under schema there are two main child elements, field and fieldType: field defines a field, and fieldType defines a field type.

    Standard field settings

    Concrete fields (analogous to database columns) are defined inside the fields element. Each field definition includes name, type (one of the previously defined fieldTypes), indexed (whether it is indexed), stored (whether it is stored), multiValued (whether it can hold multiple values), and so on.
    For example:

    <fields> 
      <field name="id" type="integer" indexed="true" stored="true" required="true" /> 
      <field name="name" type="text" indexed="true" stored="true" /> 
      <field name="summary" type="text" indexed="true" stored="true" /> 
      <field name="author" type="string" indexed="true" stored="true" /> 
      <field name="date" type="date" indexed="false" stored="true" /> 
      <field name="content" type="text" indexed="true" stored="false" /> 
      <field name="keywords" type="keyword_text" indexed="true" stored="false" multiValued="true" /> 
      <field name="all" type="text" indexed="true" stored="false" multiValued="true"/> 
    </fields> 

    A field is one column of the structured data. Field attributes:

    • name: the field's name. One special field, `_version_`, must be present.
    • type: the field's data type; every type used here must be defined as a fieldType.
    • default: the default value.
    • indexed: whether the field is indexed.
    • stored: whether the raw value is stored (set this to false whenever the value does not need to be returned).
    • docValues: whether to build a docValues structure for this field. It benefits facet queries, grouping, sorting, and function queries. Although optional, it speeds up index loading, is friendly to NRT (near-real-time) search, and saves memory. It has restrictions: currently docValues is supported only on types such as StrField, UUIDField, and Trie*Field, and the field value must be single-valued, not multi-valued.
    • sortMissingFirst/sortMissingLast: when sorting results, documents with no value in this field are placed first/last, regardless of the sort direction.
    • multiValued: whether the field can hold multiple values, e.g. all friend ids of a user. (Set it to true for any field that may carry multiple values, to avoid errors at index time.)
    • omitNorms: when set to true, length normalization of field values is skipped and index-time boosts on this field are ignored, which saves memory. Set it to false only for full-text fields or when you need to boost the field at index time. For primitive, untokenized types such as IntField, LongField, and StrField, the default is true; otherwise the default is false.
    • required: the field must be present when a document is added, similar to NOT NULL in MySQL.
    • termVectors: set to true to store term vector information for this field. It is needed for the MoreLikeThis feature, where it brings a performance gain.
    • termPositions: whether to store term position information. It enlarges the index, but highlighting depends on it; without it highlighting does not work.
    • termOffsets: whether to store term offset information. Highlighting needs it, and when you use SpanQuery this setting affects the matched result set.
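    As a sketch of how the attributes above combine in practice (the field names below are illustrative, not from the original schema), a large text field prepared for highlighting and a single-valued string field prepared for sorting/faceting might look like this:

    ```xml
    <!-- illustrative: a body field with term vectors/positions/offsets for highlighting and MoreLikeThis -->
    <field name="article_body" type="text_general" indexed="true" stored="true"
           termVectors="true" termPositions="true" termOffsets="true"/>
    <!-- illustrative: a single-valued string field with docValues for fast sorting, grouping, and faceting -->
    <field name="category" type="string" indexed="true" stored="true"
           docValues="true" multiValued="false"/>
    ```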

    The field definitions matter a great deal; a few tips are worth noting:

    1. Set stored="false" on every field that is only searched and never returned in results (especially large fields).
    2. Set indexed="false" on every field that is never searched and only returned in results.
    3. Remove all unnecessary copyField declarations, and decide per field whether it needs to be stored.
    4. To minimize index size and maximize search efficiency, set indexed="false" on all the individual text fields, use copyField to copy them all into one catch-all text field, and search on that field.

    Besides plain fields, Solr adds two field concepts that Lucene does not have and that take a little more explaining: dynamicField (dynamic fields) and copyField (copy fields), covered below.

    A practical schema has to consider, first, Chinese word segmentation, and second, for each field, whether it is indexed, tokenized, stored, and so on. The sample below uses three kinds of fields:

    1. Fields that are tokenized, indexed, and stored, such as a page's title and body.
    2. Fields that are indexed and stored but not tokenized, such as a page's publish time.
    3. Fields that are neither indexed nor tokenized but are stored, such as the location of a referenced image.

    There is no field that is neither indexed, nor tokenized, nor stored; such a field would be meaningless in Lucene.

    Sample configuration:

    <?xml version="1.0" ?>
    <schema name="news" version="1.1">
        <fields>
            <!-- The following three fields are tokenized, indexed, and stored -->
            <!-- poster -->
            <field name="webUser" type="text_mm4j" indexed="true" stored="true"/>
            <!-- title -->
            <field name="webTitle" type="text_mm4j" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true"/>
            <!-- body -->
            <field name="webContent" type="text_mm4j" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true"/>
     
            <!-- The following fields are indexed and stored, but not tokenized -->
            <!-- source id -->
            <field name="webId" type="int" indexed="true" stored="true"/>
            <!-- primary key ObjectID -->
            <field name="objectId" type="string" indexed="true" stored="true" required="true" multiValued="false" />
            <!-- post type (txt/pic/video) -->
            <field name="webType" type="string" indexed="true" stored="true"/>
            <!-- publish time -->
            <field name="webTime" type="date" indexed="true" stored="true"/>
     
            <!-- The following fields are stored only -->
            <!-- site description -->
            <field name="webCommit" type="string" indexed="false" stored="true"/>
            <!-- URL -->
            <field name="webUrl" type="string" indexed="false" stored="true"/>
            <!-- generated page path -->
            <field name="webHtml" type="string" indexed="false" stored="true"/>
            <!-- video -->
            <field name="webVideo" type="string" indexed="false" stored="true"/>
            <!-- images -->
            <field name="webImage" type="string" indexed="false" stored="true" multiValued="true"/>
     
            <!-- The following field distinguishes data types: indexed, not tokenized, stored -->
            <!-- index type: bbs/news/blog -->
            <field name="indexType" type="string" indexed="true" stored="true"/>
            <!-- catch-all copy field: indexed, not stored -->
            <field name="text" type="text_mm4j" indexed="true" stored="false" multiValued="true"/>
            <field name="_version_" type="long" indexed="true" stored="true"/>
        </fields>
     
        <copyField source="webUser" dest="text"/>
        <copyField source="webTitle" dest="text"/>
        <copyField source="webContent" dest="text"/>
     
        <uniqueKey>objectId</uniqueKey>
     
        <defaultSearchField>text</defaultSearchField>
     
        <solrQueryParser defaultOperator="OR"/>
     
        <types>
            <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
            <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
            <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
            <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
            <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
                <analyzer type="index">
                    <tokenizer class="solr.StandardTokenizerFactory"/>
                    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
                    <!-- in this example, we will only use synonyms at query time
                    <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
                    -->
                    <filter class="solr.LowerCaseFilterFactory"/>
                </analyzer>
                <analyzer type="query">
                    <tokenizer class="solr.StandardTokenizerFactory"/>
                    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
                    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
                    <filter class="solr.LowerCaseFilterFactory"/>
                </analyzer>
            </fieldType>
            <fieldType name="text_ik" class="solr.TextField">
                <analyzer type="index" class="org.wltea.analyzer.lucene.IKAnalyzer"/>
                <analyzer type="query" class="org.wltea.analyzer.lucene.IKAnalyzer"/>
            </fieldType>
            <fieldType name="text_mm4j" class="solr.TextField" >
                <analyzer type="index">
                    <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="simple" dicPath="C:/solr/mm4jdic"/>
                    <!--
                    <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="simple" dicPath="/usr/local/solr/mm4jdic"/>
                    -->
                    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
                </analyzer>
                <analyzer type="query">
                    <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="simple" dicPath="C:/solr/mm4jdic"/>
                    <!--
                    <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="simple" dicPath="/usr/local/solr/mm4jdic"/>
                    -->
                    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
                </analyzer>
            </fieldType>
        </types>
    </schema>
     

    Dynamic fields (dynamicField)

    A dynamic field's attribute configuration is much the same as an ordinary field's, so there is little to add. The one difference is the name attribute, which may contain a wildcard so that one declaration matches many field names. The point of this design is to avoid constantly editing the field definitions in schema.xml. Say you already have a link_s field and one day want to add a url_s field: you would have to modify schema.xml, and since changes to schema.xml only take effect after restarting Tomcat, that means interrupting the service, which is usually unacceptable. Dynamic fields avoid this constant adding and editing of fields, provided your field names follow the naming pattern of a dynamic field you defined in advance.

    <dynamicField name="*_i" type="int" indexed="true" stored="true"/>
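    Continuing the scenario from the text, a single `*_s` dynamic field would cover link_s, url_s, and any future `*_s` field without another schema change (a sketch; the field names are illustrative):

    ```xml
    <!-- one declaration matches every field name ending in _s -->
    <dynamicField name="*_s" type="string" indexed="true" stored="true"/>
    <!-- documents may now carry link_s, url_s, tag_s, ... with no schema edit:
         { "id": "1", "link_s": "http://example.com", "url_s": "http://example.com/a" } -->
    ```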

    Copy fields (copyField)

    It is a good idea to create one copy field and copy all full-text fields into it, so that searches can run against a single unified field.

    Suppose an article schema: at first the business only searches article content, but later you want searches to cover the title as well. With copyField, title and content are merged into one field.

    For example:

    <field name="title" type="text_general" indexed="true" stored="true"/>
    <field name="content" type="text_general" indexed="true" stored="true"/>
    <copyField source="title" dest="text"/>
    <copyField source="content" dest="text"/>

    The point of the copy field is that a query no longer has to be written as title:张三 AND content:张三. Typing just "张三" finds documents whose title or content contains it: everything to be searched is gathered into one field, and you simply make that field the default search field.

    Note: if you copy a single source field that is itself multi-valued, the destination is necessarily multi-valued too, which goes without saying. And if you copy several source fields, then as soon as any one of them is multi-valued, the destination must be declared multiValued. Keep this firmly in mind.
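    A sketch of the rule: even two single-valued sources copied into one destination force the destination to be multiValued, because it may receive more than one value per document:

    ```xml
    <field name="title"   type="text_general" indexed="true" stored="true"/>
    <field name="content" type="text_general" indexed="true" stored="true"/>
    <!-- two sources -> the destination must be multiValued -->
    <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
    <copyField source="title"   dest="text"/>
    <copyField source="content" dest="text"/>
    ```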

    That covers field. Next is the fieldType element, which defines field types. Solr's built-in types include StrField, BoolField, TrieIntField, TrieFloatField, TrieLongField, TrieDoubleField, TrieDateField, BinaryField, RandomSortField, TextField, and more; consult the Solr API documentation for the rest.

    Standard fieldType settings

        <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
        <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
    • StrField: an untokenized string type. It supports docValues, but when docValues is enabled the field must be single-valued and must either be required or have a default value.
    • BoolField: a boolean type, holding true/false.
    • TrieIntField, TrieFloatField, TrieLongField, TrieDoubleField: the default numeric types. The precisionStep attribute serves numeric range queries: the smaller its value, the more tokens each field value is split into at index time, which enlarges the index on disk but speeds up numeric range queries. positionIncrementGap sets the gap between values when the field is multi-valued; on a single-valued field it has no effect.
    • TrieDateField: a date type. Unfortunately it only accepts dates in the 1995-12-31T23:59:59Z format, which is awkward; for that reason I wrote a custom TrieCNDateField type that supports the yyyy-MM-dd HH:mm:ss format Chinese users prefer. See my previous post for the source code.
    • BinaryField: a base64-encoded string type; binary data must be base64-encoded before Solr can index it.
    • RandomSortField: a random-sort type, for when you need pseudo-random result ordering.
    • TextField: the most commonly used type. Its values are tokenized, so it generally needs an analyzer configured. How to hook up the IK analyzer specifically is out of scope here.
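    For instance, RandomSortField is commonly wired up through a dynamic field, so that each distinct field name in the sort parameter yields a different shuffle (a sketch; the names are illustrative):

    ```xml
    <fieldType name="random" class="solr.RandomSortField" indexed="true"/>
    <dynamicField name="random_*" type="random"/>
    <!-- query with sort=random_1234 asc; change the numeric suffix to get a new ordering -->
    ```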

    fieldType describes a field type in detail:

    • name: the type's name, referenced by the type attribute of field.
    • class: the Java class backing the type; Solr ships with roughly twenty types by default.
    • positionIncrementGap: when a field has multiValued="true", the size of the gap inserted between its multiple values.
    • autoGeneratePhraseQueries: somewhat like synonym or auto-correction handling; for example, "wi fi" can automatically be treated as wifi or wi-fi. Without this attribute you must quote the phrase explicitly at query time, e.g. "wi fi".

    The fieldType element has a few more attributes worth noting, such as sortMissingFirst and sortMissingLast:

    • sortMissingLast: when sorting on a field of this type, documents whose value is null are placed last.
    • sortMissingFirst: the counterpart of sortMissingLast; documents whose value is null are placed first.
    • docValues: whether this is a docValues type; typically used for sorting, grouping, and faceting.
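    A sketch of sortMissingLast in use: with the declaration below, documents that have no value in a field of this type sort after all documents that do, whether the sort is ascending or descending:

    ```xml
    <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
    <!-- e.g. sort=author asc -> documents without an author value come last -->
    ```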

    The most important part of a fieldType definition is the analyzer that data of this type uses when building the index and when querying, including tokenization and filtering.

    For example:

    <fieldType name="text" class="solr.TextField" positionIncrementGap="100"> 
          <analyzer type="index"> 
            <tokenizer class="solr.WhitespaceTokenizerFactory"/> 
            <!-- in this example, we will only use synonyms at query time 
            <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/> 
            --> 
            <!-- Case insensitive stop word removal. 
                 enablePositionIncrements=true ensures that a 'gap' is left to 
                 allow for accurate phrase queries. 
            --> 
            <filter class="solr.StopFilterFactory" 
                    ignoreCase="true" 
                    words="stopwords.txt" 
                    enablePositionIncrements="true" 
                    /> 
            <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/> 
            <filter class="solr.LowerCaseFilterFactory"/> 
            <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/> 
            <filter class="solr.RemoveDuplicatesTokenFilterFactory"/> 
          </analyzer> 
         …… 
    </fieldType> 

    The index analyzer uses solr.WhitespaceTokenizerFactory, i.e. whitespace tokenization, followed by the filters solr.StopFilterFactory, solr.WordDelimiterFilterFactory, solr.LowerCaseFilterFactory, solr.EnglishPorterFilterFactory, and solr.RemoveDuplicatesTokenFilterFactory. When a value of type text is added to the index, Solr first splits it on whitespace, then runs the tokens through each of the specified filters in turn; only what survives is added to the index for querying. Solr's analysis package does not ship with a Chinese tokenizer.

    The uniqueKey element

    Finally, the uniqueKey element configures the document's unique-identifier field. Solr uses it to decide, during delta imports, whether a document is a duplicate: if the id matches an existing document, it is not imported again. Likewise, when updating the index you locate a document by its uniqueKey field and then update that document. In short, it uniquely identifies a document, much like a primary key in a database table; the field named in uniqueKey must itself be defined beforehand with a field element.

    schema.xml contains a uniqueKey setting; here the id field serves as the unique identifier of indexed documents, which is very important.

     <uniqueKey>id</uniqueKey> 

    Schema design

    1. Decide which queries you need.
    2. Decide which entities each query involves.
    3. For each entity, denormalize all of its related data.
    4. Omit fields that never appear in query results.
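    As a sketch of steps 2-4 (the field names below are illustrative): an article search that must display the author's name would denormalize that name into each article document rather than joining at query time, and would leave out author data the results never show:

    ```xml
    <!-- denormalized: author name copied into every article document (step 3) -->
    <field name="id"         type="string"       indexed="true"  stored="true" required="true"/>
    <field name="title"      type="text_general" indexed="true"  stored="true"/>
    <field name="authorName" type="string"       indexed="true"  stored="true"/>
    <!-- the author's biography is never shown in results, so no field is defined for it (step 4) -->
    ```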
  • Original post: https://www.cnblogs.com/cuihongyu3503319/p/9510242.html