Java ORM for HBase

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/3257034/

Java ORM for Hbase

Tags: java, orm, hbase

Asked by user392887

Does anyone know of a good Java ORM implementation for HBase? This one looks really nice for Ruby:

http://www.stanford.edu/~sqs/rhino/doc/

But I could not find one for Java.

Thanks.

Answered by vivek mishra

Kundera 2.0.4, an ORM over HBase, was recently released. It provides plenty of other very useful features, such as indexing, cross-datastore persistence, etc.

I suggest giving it a try: https://github.com/impetus-opensource/Kundera

The executable jar is at:

https://github.com/impetus-opensource/Kundera

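Kundera is used through the standard JPA API (javax.persistence): a domain class is mapped with ordinary JPA annotations and read and written through an EntityManager. Below is only a minimal sketch of that usage; the Book entity, the "books" table name, and the "hbase_pu" persistence unit (assumed to be configured for HBase in persistence.xml) are illustrative assumptions, not details from this answer.

    // Minimal JPA sketch; "hbase_pu" is an assumed persistence unit configured
    // for HBase in persistence.xml, and the Book entity is purely illustrative.
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Id;
    import javax.persistence.Persistence;
    import javax.persistence.Table;

    @Entity
    @Table(name = "books")
    public class Book {
        @Id
        private String id;                 // used as the row identifier

        @Column(name = "title")
        private String title;

        public Book() { }                  // JPA requires a no-arg constructor

        public Book(String id, String title) {
            this.id = id;
            this.title = title;
        }

        public String getTitle() { return title; }
    }

    class KunderaExample {
        public static void main(String[] args) {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("hbase_pu");
            EntityManager em = emf.createEntityManager();

            em.persist(new Book("book-1", "HBase in Action"));   // write one entity
            Book found = em.find(Book.class, "book-1");          // read it back by id
            System.out.println(found.getTitle());

            em.close();
            emf.close();
        }
    }

Because this is plain JPA, the entity code itself stays store-agnostic; the HBase specifics live in the persistence unit configuration.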

Answered by najeeb

Hibernate OGM is a fine solution for NoSQL databases. Try it out.

http://www.hibernate.org/subprojects/ogm.html

Answered by imyousuf

The strength of HBase, as I see it, lies in keeping dynamic columns within static column families. From my experience developing applications with HBase, I find that determining cell qualifiers and values is not as easy as it is in SQL.

For example, a book has many authors. Depending on your access patterns, author edits, and app-layer cache implementation, you might choose to save the whole author in the book table (that is, the author resides in two tables, the author table and the book table) or just the author ID. Furthermore, the collection of authors can be saved into one cell as XML/JSON, or into individual cells for individual authors.

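To make those two layouts concrete, here is a minimal sketch using the plain HBase client API. The table design, the "authors" column family, and the qualifier choices are illustrative assumptions only.

    // Two ways to lay out a book's authors in one row; family and qualifier
    // names here are assumptions made for illustration.
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BookRowLayouts {
        private static final byte[] AUTHORS_CF = Bytes.toBytes("authors");

        // Layout 1: the whole author collection serialized into a single cell.
        public static Put authorsAsSingleJsonCell(String bookId, String authorsJson) {
            Put put = new Put(Bytes.toBytes(bookId));                // row key = book id
            put.addColumn(AUTHORS_CF, Bytes.toBytes("all"), Bytes.toBytes(authorsJson));
            return put;
        }

        // Layout 2: one dynamic qualifier per author (author id as the qualifier),
        // which is where the static-family/dynamic-column model shines.
        public static Put authorsAsIndividualCells(String bookId, String[] authorIds) {
            Put put = new Put(Bytes.toBytes(bookId));
            for (String authorId : authorIds) {
                // The cell value could be just the id again, or a serialized author.
                put.addColumn(AUTHORS_CF, Bytes.toBytes(authorId), Bytes.toBytes(authorId));
            }
            return put;
        }
    }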

With this understanding I concluded that writing a full-blown ORM such as Hibernate would not only be very difficult, it might not actually be conclusive. So I took a different approach, much more like what iBatis is to Hibernate.

Let me try to explain how it works. For this I will use the source code from here and here.

  1. The first and foremost task is to implement an ObjectRowConverter interface, in this case SessionDataObjectConverter. The abstract class encapsulates basic best practices as discussed and learnt from the HBase community. The extension basically gives you 100% control over how to convert your object to an HBase row and vice versa. The only restriction from the API is that your domain objects must implement the PersistentDTO interface, which is used internally to create Put and Delete objects and to convert byte[] to the id object and vice versa (a hypothetical sketch of the conversion idea follows this list).
  2. The next task is to wire the dependencies, as done in HBaseImplModule. Please let me know if you are interested and I will go through the dependency injection.
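
For illustration only, here is a hypothetical sketch of the object-to-row (and row-to-object) conversion idea, written against the plain HBase client types. The real ObjectRowConverter and PersistentDTO contracts are defined in the source linked above and are not reproduced here; the class name, the "data" family, and the "userId" qualifier below are made up.

    // Hypothetical converter sketch; not the actual ObjectRowConverter API.
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SessionDataRowSketch {
        private static final byte[] CF = Bytes.toBytes("data");
        private static final byte[] USER_Q = Bytes.toBytes("userId");

        // Domain data -> HBase row (write side).
        public Put objectToRow(String sessionId, String userId) {
            Put put = new Put(Bytes.toBytes(sessionId));          // row key = session id
            put.addColumn(CF, USER_Q, Bytes.toBytes(userId));
            return put;
        }

        // HBase row -> domain data (read side); only one field here for brevity.
        public String rowToUserId(Result row) {
            byte[] value = row.getValue(CF, USER_Q);
            return value == null ? null : Bytes.toString(value);
        }
    }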

And that's it. How they are used is shown here. It basically uses CommonReadDao and CommonWriteDao to read and write data to and from HBase. The common read DAO implements multithreaded row-to-object conversion on queries, multithreaded get-by-ids and get-by-id, and has its Hibernate-Criteria-like API for querying HBase via Scan (no aggregation functions available). The common write DAO implements common write-related code with some added facilities, such as optimistic/pessimistic locking, cell override/merge, and checking entity (non-)existence on save, update, delete, etc.

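The Criteria-like query API mentioned above ultimately issues an HBase Scan. A minimal sketch of that underlying read path with the plain client API looks roughly like this; the "book" table and "authors" family are assumptions, and the actual CommonReadDao adds the multithreaded row-to-object conversion on top.

    // Bare Scan-based read path; what the read DAO wraps, not its actual API.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanReadPath {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();       // reads hbase-site.xml
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("book"))) {

                Scan scan = new Scan();
                scan.addFamily(Bytes.toBytes("authors"));            // restrict to one family

                try (ResultScanner scanner = table.getScanner(scan)) {
                    for (Result row : scanner) {
                        // Row-to-object conversion would happen here (the DAO layer
                        // described above performs this step on multiple threads).
                        System.out.println(Bytes.toString(row.getRow()));
                    }
                }
            }
        }
    }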

This ORM has been developed for our internal purposes and I have been up to my neck in work, so I have not yet been able to write any documentation. But if you are interested, let me know and I will make time for documentation as a priority.

回答by Cojones

How about DataNucleus: you can use JPA or JDO as your API and HBase as the backend store: http://www.datanucleus.org/plugins/store.hbase.html

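With DataNucleus the application codes against the standard JDO (or JPA) API. A minimal JDO sketch might look like the following; the Author class and the "datanucleus.properties" file (assumed to configure the HBase store plugin) are illustrative assumptions, not details from this answer.

    // Minimal JDO sketch; persistence-capable classes are byte-code enhanced at
    // build time, and "datanucleus.properties" is an assumed configuration file.
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.PrimaryKey;

    @PersistenceCapable
    public class Author {
        @PrimaryKey
        private String id;
        private String name;

        protected Author() { }             // no-arg constructor for JDO

        public Author(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getName() { return name; }
    }

    class JdoExample {
        public static void main(String[] args) {
            PersistenceManagerFactory pmf =
                    JDOHelper.getPersistenceManagerFactory("datanucleus.properties");
            PersistenceManager pm = pmf.getPersistenceManager();
            try {
                pm.makePersistent(new Author("author-1", "Jane Doe"));   // write
                Author found = pm.getObjectById(Author.class, "author-1"); // read back
                System.out.println(found.getName());
            } finally {
                pm.close();
                pmf.close();
            }
        }
    }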

回答by lu wei

You can try this: http://code.google.com/p/hbase-ormlite/. It is an ORM for HBase in Java.

回答by Bohdan

We are using HBase ORM - Surus https://github.com/mushkevych/surus/wiki

Probably worth mentioning

  • we are using it heavily with Hadoop map/reduce
  • it has an extra module that allows you to pump data into HBase from a JSON stream (in our case it comes from Python code)

Answered by wlk

There are pigi and parhely, and I have used neither of them. IMO HBase is a fast key/value store engine, but if you need another layer of abstraction, you should check them out.
