Java 8 Streams: Map the same object multiple times based on different properties
Note: this page is a translation of a popular StackOverFlow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license, cite the original URL, and attribute it to the original authors (not me): StackOverFlow
Original question: http://stackoverflow.com/questions/28508253/
Asked by wassgren
I was presented with an interesting problem by a colleague of mine and I was unable to find a neat and pretty Java 8 solution. The problem is to stream through a list of POJOs and then collect them in a map based on multiple properties - the mapping causes the POJO to occur multiple times.
Imagine the following POJO:
private static class Customer {
    public String first;
    public String last;

    public Customer(String first, String last) {
        this.first = first;
        this.last = last;
    }

    public String toString() {
        return "Customer(" + first + " " + last + ")";
    }
}
Set it up as a List<Customer>:
// The list of customers
List<Customer> customers = Arrays.asList(
new Customer("Johnny", "Puma"),
new Customer("Super", "Mac"));
Alternative 1: Use a Map outside of the "stream" (or rather outside forEach).
// Alt 1: not pretty since the resulting map is "outside" of
// the stream. If parallel streams are used it must be
// ConcurrentHashMap
Map<String, Customer> res1 = new HashMap<>();
customers.stream().forEach(c -> {
    res1.put(c.first, c);
    res1.put(c.last, c);
});
Alternative 2: Create map entries and stream them, then flatMap them. IMO it is a bit too verbose and not so easy to read.
// Alt 2: A bit verbose and "new AbstractMap.SimpleEntry" feels like
// a "hard" dependency on AbstractMap
Map<String, Customer> res2 =
    customers.stream()
        .map(p -> {
            Map.Entry<String, Customer> firstEntry = new AbstractMap.SimpleEntry<>(p.first, p);
            Map.Entry<String, Customer> lastEntry = new AbstractMap.SimpleEntry<>(p.last, p);
            return Stream.of(firstEntry, lastEntry);
        })
        .flatMap(Function.identity())
        .collect(Collectors.toMap(
            Map.Entry::getKey, Map.Entry::getValue));
Alternative 3: This is the "prettiest" code I have come up with so far, but it uses the three-arg version of reduce and the third parameter is a bit dodgy, as discussed in this question: Purpose of third argument to 'reduce' function in Java 8 functional programming. Furthermore, reduce does not seem like a good fit for this problem since it is mutating, and parallel streams may not work with the approach below.
// Alt 3: using reduce. Not so pretty
Map<String, Customer> res3 = customers.stream().reduce(
        new HashMap<>(),
        (m, p) -> {
            m.put(p.first, p);
            m.put(p.last, p);
            return m;
        },
        (m1, m2) -> m2 /* <- NOT USED UNLESS PARALLEL */);
If the three resulting maps are printed like this:
System.out.println(res1);
System.out.println(res2);
System.out.println(res3);
The result would be:
{Super=Customer(Super Mac), Johnny=Customer(Johnny Puma), Mac=Customer(Super Mac), Puma=Customer(Johnny Puma)}
{Super=Customer(Super Mac), Johnny=Customer(Johnny Puma), Mac=Customer(Super Mac), Puma=Customer(Johnny Puma)}
{Super=Customer(Super Mac), Johnny=Customer(Johnny Puma), Mac=Customer(Super Mac), Puma=Customer(Johnny Puma)}
So, now to my question: how should I, in an orderly Java 8 fashion, stream through the List<Customer> and then somehow collect it as a Map<String, Customer> where the whole thing is split on two keys (first AND last), i.e. the Customer is mapped twice? I do not want to use any 3rd party libraries, and I do not want to use a map outside of the stream as in alt 1. Are there any other nice alternatives?
The full code can be found on hastebin for simple copy-paste to get the whole thing running.
Accepted answer by Misha
I think your alternatives 2 and 3 can be rewritten to be clearer:
Alternative 2:
// Assumes a static import of java.util.stream.Collectors.toMap
Map<String, Customer> res2 = customers.stream()
    .flatMap(c -> Stream.of(c.first, c.last)
            .map(k -> new AbstractMap.SimpleImmutableEntry<>(k, c)))
    .collect(toMap(Map.Entry::getKey, Map.Entry::getValue));
Alternative 3: Your code abuses reduce by mutating the HashMap. To do mutable reduction, use collect:
Map<String, Customer> res3 = customers.stream()
    .collect(
        HashMap::new,                                        // supplier: a fresh, empty map
        (m, c) -> { m.put(c.first, c); m.put(c.last, c); },  // accumulator: add both keys
        HashMap::putAll                                      // combiner: merge partial maps (parallel)
    );
Note that these are not identical. Alternative 2 will throw an exception if there are duplicate keys while Alternative 3 will silently overwrite the entries.
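For example, a minimal sketch of the difference (the duplicate-key list below is hypothetical, and it assumes the Customer class and the two pipelines above are in scope):

List<Customer> dupes = Arrays.asList(
        new Customer("Johnny", "Puma"),
        new Customer("Johnny", "Mac"));   // the key "Johnny" occurs twice

// Running dupes through Alternative 2 (toMap without a merge function)
// throws java.lang.IllegalStateException: Duplicate key ...
// Running dupes through Alternative 3 keeps whatever was put last:
Map<String, Customer> merged = dupes.stream()
    .collect(
        HashMap::new,
        (m, c) -> { m.put(c.first, c); m.put(c.last, c); },
        HashMap::putAll
    );
// merged now holds Johnny=Customer(Johnny Mac), Puma=Customer(Johnny Puma),
// Mac=Customer(Johnny Mac) (iteration order may vary)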
If overwriting entries in case of duplicate keys is what you want, I would personally prefer Alternative 3. It is immediately clear to me what it does. It most closely resembles the iterative solution. I would expect it to be more performant as Alternative 2 has to do a bunch of allocations per customer with all that flatmapping.
However, Alternative 2 has a huge advantage over Alternative 3 by separating the production of entries from their aggregation. This gives you a great deal of flexibility. For example, if you want to change Alternative 2 to overwrite entries on duplicate keys instead of throwing an exception, you would simply add (a,b) -> b to toMap(...). If you decide you want to collect matching entries into a list, all you would have to do is replace toMap(...) with groupingBy(...), etc.