feat(search): supporting chinese glossaryterm full text retrieval(#3914) #3956
Conversation
Wow - awesome PR!
This looks great to me. Want another pair of eyes on it, then we can ship. (cc. @dexter-mh-lee)
Thank you @Huyueeer!
LGTM!
…ahub-project#3914) (datahub-project#3956)
* feat(search): supporting chinese glossaryterm full text retrieval(datahub-project#3914)
* refactor(search): modify mainTokenizer to appropriate position(datahub-project#3914)
Co-authored-by: Shirshanka Das <shirshanka@apache.org>
@xiangqiao123 Sorry, this part needs to be rebuilt. It seems you should reach out to the person who implemented this part.
Checklist
Change
source of problem: #3914
Make the analyzer configurable so that `main_tokenizer` can be replaced with a word-segmentation tokenizer for other supported languages. The tokenizers that have been tested are `smartcn_tokenizer` and `ik_smart`, both provided by Elasticsearch analysis plugins.
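For illustration only, here is a minimal sketch of the kind of analyzer configuration this change enables, not the actual settings file touched by this PR. It assumes a local Elasticsearch node with the `analysis-smartcn` plugin installed, a hypothetical index name, and the 7.x `elasticsearch` Python client.

```python
from elasticsearch import Elasticsearch

# Assumes a local Elasticsearch node with the analysis-smartcn plugin installed.
es = Elasticsearch("http://localhost:9200")

# Hypothetical index name for illustration; DataHub manages its own index names.
index_name = "glossary_term_demo"

# Custom analyzer whose tokenizer is swapped from the default main_tokenizer
# to smartcn_tokenizer for Chinese word segmentation.
settings = {
    "settings": {
        "analysis": {
            "analyzer": {
                "main_analyzer": {
                    "type": "custom",
                    # "ik_smart" would also work here with the IK plugin installed
                    "tokenizer": "smartcn_tokenizer",
                    "filter": ["lowercase"],
                }
            }
        }
    },
    "mappings": {
        "properties": {
            "name": {"type": "text", "analyzer": "main_analyzer"}
        }
    },
}

es.indices.create(index=index_name, body=settings)

# Verify segmentation: Chinese text should be split into meaningful words
# rather than single characters, which is what makes full-text retrieval work.
tokens = es.indices.analyze(
    index=index_name,
    body={"analyzer": "main_analyzer", "text": "数据治理术语"},
)
print([t["token"] for t in tokens["tokens"]])
```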