Python numpy.unique with order preserved
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license, note the original source and author information, and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/15637336/
numpy.unique with order preserved
Asked by siamii
['b','b','b','a','a','c','c']
numpy.unique gives
['a','b','c']
How can I get the original order preserved?
['b','a','c']
Great answers. Bonus question: why do none of these methods work with this dataset? http://www.uploadmb.com/dw.php?id=1364341573 Here's the follow-up question: numpy sort weird behavior
Accepted answer by HYRY
unique() is slow, O(N log N), but you can do this with the following code:
import numpy as np
a = np.array(['b','a','b','b','d','a','a','c','c'])
_, idx = np.unique(a, return_index=True)
print(a[np.sort(idx)])
output:
['b' 'a' 'd' 'c']
pandas.unique() is much faster for a big array, O(N):
import pandas as pd
a = np.random.randint(0, 1000, 10000)
%timeit np.unique(a)
%timeit pd.unique(a)
1000 loops, best of 3: 644 us per loop
10000 loops, best of 3: 144 us per loop
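Worth noting (an addition, not part of the original answer): pd.unique itself returns values in order of first appearance, so for the original question it can be used directly, with no extra sort step. A minimal sketch:

```python
import numpy as np
import pandas as pd

a = np.array(['b', 'b', 'b', 'a', 'a', 'c', 'c'])

# pd.unique keeps order of first appearance, unlike np.unique,
# which returns the unique values sorted
result = pd.unique(a)
print(result)  # ['b' 'a' 'c']
```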
Answered by YXD
a = ['b','b','b','a','a','c','c']
[a[i] for i in sorted(np.unique(a, return_index=True)[1])]
Answered by Fred Foo
Use the return_index functionality of np.unique. That returns the indices at which the elements first occurred in the input. Then argsort those indices.
>>> u, ind = np.unique(['b','b','b','a','a','c','c'], return_index=True)
>>> u[np.argsort(ind)]
array(['b', 'a', 'c'],
dtype='|S1')
Answered by Jan Spurny
If you're trying to remove duplicates from an already sorted iterable, you can use the itertools.groupby function:
>>> from itertools import groupby
>>> a = ['b','b','b','a','a','c','c']
>>> [x[0] for x in groupby(a)]
['b', 'a', 'c']
This works more like the Unix 'uniq' command, because it assumes the list is already sorted. If you try it on an unsorted list you will get something like this:
>>> b = ['b','b','b','a','a','c','c','a','a']
>>> [x[0] for x in groupby(b)]
['b', 'a', 'c', 'a']
Answered by Albert
If you want to delete repeated entries, like the Unix tool uniq, this is a solution:
import numpy as np

def uniq(seq):
    """
    Like the Unix tool uniq: removes consecutive repeated entries.
    :param seq: numpy.array
    :return: numpy.array
    """
    diffs = np.ones_like(seq)
    diffs[1:] = seq[1:] - seq[:-1]
    idx = diffs.nonzero()
    return seq[idx]
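Since this relies on element-wise subtraction, it only works on numeric arrays, not arrays of strings. A quick usage sketch (self-contained, restating the answer's function):

```python
import numpy as np

def uniq(seq):
    """Like the Unix tool uniq: removes consecutive repeated entries.
    Assumes a numeric numpy array, since it uses subtraction."""
    diffs = np.ones_like(seq)
    diffs[1:] = seq[1:] - seq[:-1]  # nonzero where the value changes
    idx = diffs.nonzero()           # first element is always kept
    return seq[idx]

a = np.array([1, 1, 2, 2, 3, 1, 1])
print(uniq(a))  # [1 2 3 1]
```

Note that, like uniq, it collapses only consecutive duplicates: the trailing 1s survive as a separate entry.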
Answered by DanGoodrick
Use an OrderedDict (faster than a list comprehension):
from collections import OrderedDict
a = ['b','a','b','a','a','c','c']
list(OrderedDict.fromkeys(a))
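A small addendum (not part of the original answer): since Python 3.7, plain dicts preserve insertion order, so the same trick works without the import:

```python
a = ['b', 'a', 'b', 'a', 'a', 'c', 'c']

# dict keys keep insertion order in Python 3.7+, so duplicates
# collapse to their first occurrence
print(list(dict.fromkeys(a)))  # ['b', 'a', 'c']
```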

