I am using a pattern tokenizer (implemented as a custom Java tokenizer factory) in Elasticsearch with the regex "\p{Punct}{1}". I also wrote a plain Java program that uses the same regex. But when I compared the results of the Java program and the Elasticsearch analyzer with the same pattern, they were different.
The code in my Java file is:
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.*;

public class characters {

    public static void main(String[] args) {
        String userInput = "HTTps://www.google.cOM/";
        userInput = userInput.toLowerCase();
        Pattern pattern = Pattern.compile("\\p{Punct}{1}");
        List<String> list = new ArrayList<String>();
        Matcher m = pattern.matcher(userInput);
        while (m.find()) {
            list.add(m.group());
        }
        System.out.println(list);
    }
}
The above program gives the following result:
[:, /, /, ., ., /]
The code in my Elasticsearch pattern tokenizer factory is:
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.pattern.PatternTokenizer;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.AbstractTokenizerFactory;

import java.util.regex.Pattern;

public class UrlTokenizerFactory extends AbstractTokenizerFactory {

    private final Pattern pattern;
    private final int group;

    public UrlTokenizerFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
        super(indexSettings, name, settings);
        String sPattern = settings.get("pattern", "\\p{Punct}{1}");
        if (sPattern == null) {
            throw new IllegalArgumentException("pattern is missing for [" + name + "] tokenizer of type 'pattern'");
        }
        this.pattern = Regex.compile(sPattern, settings.get("flags"));
        this.group = settings.getAsInt("group", -1);
    }

    @Override
    public Tokenizer create() {
        return new PatternTokenizer(pattern, group);
    }
}
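For reference, the only settings this factory reads are "pattern", "flags", and "group". A purely illustrative sketch of building them programmatically (in a real index they come from the index settings; the class name here is made up) would be:

import org.elasticsearch.common.settings.Settings;

public class UrlTokenizerSettingsSketch {
    public static void main(String[] args) {
        // Illustrative only: the settings the factory above consumes.
        Settings settings = Settings.builder()
                .put("pattern", "\\p{Punct}{1}") // read via settings.get("pattern", ...)
                .put("group", -1)                // read via settings.getAsInt("group", -1)
                .build();
        System.out.println(settings.get("pattern") + ", group = " + settings.getAsInt("group", -1));
    }
}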
The Elasticsearch analyzer generates the following result:
"tokens" : [
{
"token" : "https",
"start_offset" : 0,
"end_offset" : 5,
"type" : "word",
"position" : 0
},
{
"token" : "www",
"start_offset" : 8,
"end_offset" : 11,
"type" : "word",
"position" : 1
},
{
"token" : "google",
"start_offset" : 12,
"end_offset" : 18,
"type" : "word",
"position" : 2
},
{
"token" : "com",
"start_offset" : 19,
"end_offset" : 22,
"type" : "word",
"position" : 3
}
]
The desired result is the one produced by the Java program, but Elasticsearch is giving me a different one.
CodePudding user response:
I was able to solve this issue by simply changing
this.group = settings.getAsInt("group", -1);
to:
this.group = settings.getAsInt("group", 0);
inside my pattern tokenizer factory. With group = -1, Lucene's PatternTokenizer uses the regex as a split pattern, so the punctuation matches act as delimiters and are discarded; with group = 0, each whole match is emitted as a token, which is exactly what the plain Java program produces.
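A minimal standalone sketch that shows both behaviours (assuming lucene-core and the Lucene analysis module are on the classpath; the class and variable names here are just illustrative):

import java.io.StringReader;
import java.util.regex.Pattern;
import org.apache.lucene.analysis.pattern.PatternTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class PatternTokenizerDemo {

    public static void main(String[] args) throws Exception {
        String input = "HTTps://www.google.cOM/".toLowerCase();
        Pattern punct = Pattern.compile("\\p{Punct}{1}");

        // group = -1: split on the pattern -> https www google com
        printTokens(punct, -1, input);

        // group = 0: emit each whole match -> : / / . . /
        printTokens(punct, 0, input);
    }

    private static void printTokens(Pattern pattern, int group, String input) throws Exception {
        try (PatternTokenizer tokenizer = new PatternTokenizer(pattern, group)) {
            tokenizer.setReader(new StringReader(input));
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
            tokenizer.reset();
            while (tokenizer.incrementToken()) {
                System.out.print(term.toString() + " ");
            }
            tokenizer.end();
            System.out.println();
        }
    }
}

With group = 0 the Elasticsearch tokenizer emits [:, /, /, ., ., /] for the example URL, matching the Java program's output.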