What does the Python Keras Tokenizer method actually do?

Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not the translator). Original question: http://stackoverflow.com/questions/51956000/

What does Keras Tokenizer method exactly do?

Tags: python, keras, nlp

Asked by Hyman Fleeting

On occasion, circumstances require us to do the following:

from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=my_max)

Then, invariably, we chant this mantra:

tokenizer.fit_on_texts(text) 
sequences = tokenizer.texts_to_sequences(text)

While I (more or less) understand what the total effect is, I can't figure out what each one does separately, regardless of how much research I do (including, obviously, the documentation). I don't think I've ever seen one without the other.

So what does each do? Are there any circumstances where you would use either one without the other? If not, why aren't they simply combined into something like:

sequences = tokenizer.fit_on_texts_to_sequences(text)

Apologies if I'm missing something obvious, but I'm pretty new at this.

Answered by nuric

From the source code:

  1. fit_on_texts: Updates the internal vocabulary based on a list of texts. This method creates the vocabulary index based on word frequency. So if you give it something like "The cat sat on the mat.", it will create a dictionary such that word_index["the"] = 1, word_index["cat"] = 2, and so on. It is a word -> index dictionary, so every word gets a unique integer value; 0 is reserved for padding. A lower integer means a more frequent word (often the first few are stop words, because they appear a lot).
  2. texts_to_sequences: Transforms each text in texts into a sequence of integers. It basically takes each word in the text and replaces it with its corresponding integer value from the word_index dictionary. Nothing more, nothing less, and certainly no magic involved. A short sketch of both steps follows this list.
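
A minimal sketch of the two steps, assuming a standard Keras setup (the index values in the comments are what the frequency-based ordering typically produces and may vary slightly between versions):

from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()
tokenizer.fit_on_texts(["The cat sat on the mat."])   # step 1: build the vocabulary from a list of texts
print(tokenizer.word_index)
# {'the': 1, 'cat': 2, 'sat': 3, 'on': 4, 'mat': 5} -- "the" appears twice, so it gets the lowest index

print(tokenizer.texts_to_sequences(["The cat sat on the mat."]))
# [[1, 2, 3, 4, 1, 5]] -- step 2: each word is replaced by its word_index value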

Why not combine them? Because you almost always fit once and convert to sequences many times. You will fit on your training corpus once and use that exact same word_index dictionary at train / eval / test / prediction time to convert actual text into sequences to feed to the network. So it makes sense to keep those methods separate.

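A typical workflow therefore looks roughly like this (train_texts and test_texts are hypothetical placeholders):

from keras.preprocessing.text import Tokenizer

train_texts = ["The cat sat on the mat.", "The dog ate my homework."]   # hypothetical training corpus
test_texts  = ["The cat ate the homework."]                             # unseen text

tokenizer = Tokenizer(num_words=1000)   # keep only the most frequent words when converting
tokenizer.fit_on_texts(train_texts)     # fit exactly once, on the training corpus

train_seqs = tokenizer.texts_to_sequences(train_texts)  # convert as many times as needed,
test_seqs  = tokenizer.texts_to_sequences(test_texts)   # always with the same word_index dictionary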

Answered by KPMG

Adding a few examples to the answers above will help with understanding:

Example 1:

from keras.preprocessing.text import Tokenizer

t = Tokenizer()
fit_text = "The earth is an awesome place live"
t.fit_on_texts(fit_text)
test_text = "The earth is an great place live"
sequences = t.texts_to_sequences(test_text)

print("sequences : ",sequences,'\n')

print("word_index : ",t.word_index)
#[] corresponds to: 1. a space between words in test_text    2. a letter that did not occur in fit_text

Output:

       sequences :  [[3], [4], [1], [], [1], [2], [8], [3], [4], [], [5], [6], [], [2], [9], [], [], [8], [1], [2], [3], [], [13], [7], [2], [14], [1], [], [7], [5], [15], [1]] 

       word_index :  {'e': 1, 'a': 2, 't': 3, 'h': 4, 'i': 5, 's': 6, 'l': 7, 'r': 8, 'n': 9, 'w': 10, 'o': 11, 'm': 12, 'p': 13, 'c': 14, 'v': 15}

Example 2 (note that Example 1 passed a bare string, so the Tokenizer iterated over it character by character and built a character-level vocabulary; here the text is wrapped in a list, so whole words are tokenized):

t = Tokenizer()
fit_text = ["The earth is an awesome place live"]
t.fit_on_texts(fit_text)

#fit_on_texts fits on sentences when a list of sentences is passed to fit_on_texts().
#i.e. fit_on_texts([sent1, sent2, sent3, ..., sentN])

#Similarly, a list of sentences (or a single sentence wrapped in a list) must be passed to texts_to_sequences.
test_text1 = "The earth is an great place live"
test_text2 = "The is my program"
sequences = t.texts_to_sequences([test_text1, test_text2])

print('sequences : ',sequences,'\n')

print('word_index : ',t.word_index)
#texts_to_sequences() returns a list of lists, i.e. [[...], [...]]

Output:

        sequences :  [[1, 2, 3, 4, 6, 7], [1, 3]] 

        word_index :  {'the': 1, 'earth': 2, 'is': 3, 'an': 4, 'awesome': 5, 'place': 6, 'live': 7}

Words that were never seen during fitting ("great" in test_text1; "my" and "program" in test_text2) are not in word_index, so they are simply dropped from the sequences.

Answered by Sundarraj N

Let's see what this line of code does.

tokenizer.fit_on_texts(text)

For example, consider the sentence "The earth is an awesome place live".

tokenizer.fit_on_texts(["The earth is an awesome place live"]) builds the vocabulary index so that 1 -> "the", 2 -> "earth", 3 -> "is", ..., 6 -> "place", and so on (as in Example 2 above, the text is passed inside a list).

sequences = tokenizer.texts_to_sequences(["The earth is an great place live"])

returns [[1, 2, 3, 4, 6, 7]].

You can see what happened here. The word "great" was not seen during fitting, so the tokenizer does not recognize it. In other words, fit_on_texts can be used independently on the training data, and the fitted vocabulary index can then be used to represent a completely new set of word sequences. These are two different processes, hence the two lines of code.
