How to use Elasticsearch with MongoDB?
Disclaimer: This page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse it, you must likewise follow the CC BY-SA license, cite the original address and author information, and attribute it to the original authors (not me): StackOverflow
原文地址: http://stackoverflow.com/questions/23846971/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me):
StackOverFlow
How to use Elasticsearch with MongoDB?
Asked by bibin david
I have gone through many blogs and sites about configuring Elasticsearch for MongoDB to index Collections in MongoDB but none of them were straightforward.
Please explain to me a step-by-step process for installing Elasticsearch, which should include:
- configuration
- run in the browser
I am using Node.js with express.js, so please help accordingly.
Answered by Donald Gary
This answer should be enough to get you set up to follow this tutorial on Building a functional search component with MongoDB, Elasticsearch, and AngularJS.
If you're looking to use faceted search with data from an API, then Matthiasn's BirdWatch Repo is something you might want to look at.
So here's how you can setup a single node Elasticsearch "cluster" to index MongoDB for use in a NodeJS, Express app on a fresh EC2 Ubuntu 14.04 instance.
Make sure everything is up to date.
sudo apt-get update
Install NodeJS.
sudo apt-get install nodejs
sudo apt-get install npm
Install MongoDB. These steps are straight from the MongoDB docs. Choose whatever version you're comfortable with. I'm sticking with v2.4.9 because it seems to be the most recent version MongoDB-River supports without issues.
Import the MongoDB public GPG Key.
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
Update your sources list.
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
Get the 10gen package.
sudo apt-get install mongodb-10gen
Then pick your version if you don't want the most recent. If you are setting your environment up on a Windows 7 or 8 machine, stay away from v2.6 until they work out some bugs with running it as a service.
apt-get install mongodb-10gen=2.4.9
Prevent the version of your MongoDB installation being bumped up when you update.
echo "mongodb-10gen hold" | sudo dpkg --set-selections
Start the MongoDB service.
sudo service mongodb start
Your database files default to /var/lib/mongo and your log files to /var/log/mongo.
Create a database through the mongo shell and push some dummy data into it.
mongo YOUR_DATABASE_NAME
db.createCollection(YOUR_COLLECTION_NAME)
for (var i = 1; i <= 25; i++) db.YOUR_COLLECTION_NAME.insert( { x : i } )
Now convert the standalone MongoDB into a replica set.
First, shut down the mongod process.
mongo YOUR_DATABASE_NAME
use admin
db.shutdownServer()
Now we're running MongoDB as a service, so we don't pass in the "--replSet rs0" option in the command line argument when we restart the mongod process. Instead, we put it in the mongod.conf file.
vi /etc/mongod.conf
Add these lines, substituting your own db and log paths.
replSet=rs0
dbpath=YOUR_PATH_TO_DATA/DB
logpath=YOUR_PATH_TO_LOG/MONGO.LOG
Now open up the mongo shell again to initialize the replica set.
mongo DATABASE_NAME
config = { "_id" : "rs0", "members" : [ { "_id" : 0, "host" : "127.0.0.1:27017" } ] }
rs.initiate(config)
rs.slaveOk() // allows read operations to run on secondary members.
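If you'd rather drive this step from a script than type it into the shell, the config document passed to rs.initiate() can be generated programmatically. A minimal Node.js sketch (the set name and host are just the example values used above; the function name is made up for illustration):

```javascript
// Build a replica-set config document like the one passed to rs.initiate().
// Each host gets a sequential member _id, matching the shape shown above.
function buildReplSetConfig(setName, hosts) {
  return {
    _id: setName,
    members: hosts.map(function (host, i) {
      return { _id: i, host: host };
    })
  };
}

var config = buildReplSetConfig('rs0', ['127.0.0.1:27017']);
console.log(JSON.stringify(config));
// → {"_id":"rs0","members":[{"_id":0,"host":"127.0.0.1:27017"}]}
```

You could pass the resulting object to rs.initiate() via the mongo shell's --eval flag or a driver's admin command.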
Now install Elasticsearch. I'm just following this helpful Gist.
Make sure Java is installed.
sudo apt-get install openjdk-7-jre-headless -y
Stick with v1.1.x for now until the Mongo-River plugin bug gets fixed in v1.2.1.
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.deb
sudo dpkg -i elasticsearch-1.1.1.deb
curl -L http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master | tar -xz
sudo mv *servicewrapper*/service /usr/local/share/elasticsearch/bin/
sudo rm -Rf *servicewrapper*
sudo /usr/local/share/elasticsearch/bin/service/elasticsearch install
sudo ln -s `readlink -f /usr/local/share/elasticsearch/bin/service/elasticsearch` /usr/local/bin/rcelasticsearch
Make sure /etc/elasticsearch/elasticsearch.yml has the following config options enabled if you're only developing on a single node for now:
cluster.name: "MY_CLUSTER_NAME"
node.local: true
Start the Elasticsearch service.
sudo service elasticsearch start
Verify it's working.
curl http://localhost:9200
If you see something like this then you're good.
{
  "status" : 200,
  "name" : "Chi Demon",
  "version" : {
    "number" : "1.1.2",
    "build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
    "build_timestamp" : "2014-05-22T12:27:39Z",
    "build_snapshot" : false,
    "lucene_version" : "4.7"
  },
  "tagline" : "You Know, for Search"
}
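If you are wiring this check into your Node app rather than eyeballing curl output, a tiny sketch that parses the root-endpoint response (field names follow the sample above; `isHealthy` is a made-up helper name):

```javascript
// Parse the Elasticsearch root-endpoint response body and confirm the
// node answered with status 200 and reported a version number.
function isHealthy(body) {
  var res = JSON.parse(body);
  return res.status === 200 && typeof res.version.number === 'string';
}

// Sample payload mirroring the curl response shown above.
var sample = '{"status":200,"name":"Chi Demon","version":{"number":"1.1.2"},"tagline":"You Know, for Search"}';
console.log(isHealthy(sample)); // → true
```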
Now install the Elasticsearch plugins so it can play with MongoDB.
bin/plugin --install com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/1.6.0
bin/plugin --install elasticsearch/elasticsearch-mapper-attachments/1.6.0
These two plugins aren't necessary but they're good for testing queries and visualizing changes to your indexes.
bin/plugin --install mobz/elasticsearch-head
bin/plugin --install lukas-vlcek/bigdesk
Restart Elasticsearch.
sudo service elasticsearch restart
Finally index a collection from MongoDB.
curl -XPUT localhost:9200/_river/DATABASE_NAME/_meta -d '{
  "type": "mongodb",
  "mongodb": {
    "servers": [
      { "host": "127.0.0.1", "port": 27017 }
    ],
    "db": "DATABASE_NAME",
    "collection": "ACTUAL_COLLECTION_NAME",
    "options": { "secondary_read_preference": true },
    "gridfs": false
  },
  "index": {
    "name": "ARBITRARY INDEX NAME",
    "type": "ARBITRARY TYPE NAME"
  }
}'
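If you'd rather register the river from your Node app than from curl, the request body can be built like this and PUT to the same `_river/DATABASE_NAME/_meta` URL with any HTTP client. This is only a sketch; the names are the same placeholders as in the curl example, and `buildRiverMeta` is a made-up helper:

```javascript
// Build the river definition document sent to /_river/<name>/_meta,
// mirroring the JSON body of the curl example above.
function buildRiverMeta(db, collection, indexName, typeName) {
  return {
    type: 'mongodb',
    mongodb: {
      servers: [{ host: '127.0.0.1', port: 27017 }],
      db: db,
      collection: collection,
      options: { secondary_read_preference: true },
      gridfs: false
    },
    index: { name: indexName, type: typeName }
  };
}

var meta = buildRiverMeta('DATABASE_NAME', 'ACTUAL_COLLECTION_NAME', 'my_index', 'my_type');
console.log(JSON.stringify(meta, null, 2));
```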
Check that your index is in Elasticsearch
curl -XGET http://localhost:9200/_aliases
Check your cluster health.
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
It's probably yellow with some unassigned shards. We have to tell Elasticsearch what we want to work with.
curl -XPUT 'localhost:9200/_settings' -d '{ "index" : { "number_of_replicas" : 0 } }'
Check cluster health again. It should be green now.
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
Go play.
Answered by tsturzl
Using river can present issues when your operation scales up. River will use a ton of memory when under heavy operation. I recommend implementing your own Elasticsearch models, or if you're using mongoose you can build your Elasticsearch models right into that, or use mongoosastic, which essentially does this for you.
Another disadvantage to Mongodb River is that you'll be stuck using the mongodb 2.4.x branch and ElasticSearch 0.90.x. You'll start to find that you're missing out on a lot of really nice features, and the mongodb river project just doesn't produce a usable product fast enough to keep stable. That said, Mongodb River is definitely not something I'd go into production with. It has posed more problems than it's worth. It will randomly drop writes under heavy load, it will consume lots of memory, and there's no setting to cap that. Additionally, river doesn't update in realtime; it reads oplogs from mongodb, and this can delay updates for as long as 5 minutes in my experience.
We recently had to rewrite a large portion of our project because it was a weekly occurrence that something went wrong with ElasticSearch. We had even gone as far as hiring a DevOps consultant, who also agrees that it's best to move away from River.
UPDATE: Elasticsearch-mongodb-river now supports ES v1.4.0 and mongodb v2.6.x. However, you'll still likely run into performance problems on heavy insert/update operations, as this plugin will try to read mongodb's oplogs to sync. If there are a lot of operations since the lock (or latch, rather) unlocks, you'll notice extremely high memory usage on your elasticsearch server. If you plan on having a large operation, river is not a good option. The developers of ElasticSearch still recommend that you manage your own indexes by communicating directly with their API using the client library for your language, rather than using river. This isn't really the purpose of river. Twitter-river is a great example of how river should be used: it's essentially a great way to source data from outside sources, but not very reliable for high traffic or internal use.
Also consider that mongodb-river falls behind in versions, as it's not maintained by the ElasticSearch organization; it's maintained by a third party. Development was stuck on the v0.90 branch for a long time after the release of v1.0, and when a version for v1.0 was released it wasn't stable until elasticsearch released v1.3.0. Mongodb versions also fall behind. You may find yourself in a tight spot when you're looking to move to a later version of each, especially with ElasticSearch under such heavy development and many very anticipated features on the way. Staying up on the latest ElasticSearch has been very important for us, as we rely heavily on constantly improving our search functionality as a core part of our product.
All in all, you'll likely get a better product if you do it yourself. It's not that difficult. It's just another database to manage in your code, and it can easily be dropped into your existing models without major refactoring.
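To illustrate the do-it-yourself route: after a write to MongoDB, push the same document into Elasticsearch yourself. This is only a sketch; `esClient.index` mirrors the shape of the official `elasticsearch` npm client's method, and the index/type names and helper are made up:

```javascript
// After saving a document to MongoDB (e.g. in a mongoose post-save hook),
// mirror it into Elasticsearch. Returns the client's promise.
function indexDocument(esClient, indexName, typeName, doc) {
  return esClient.index({
    index: indexName,
    type: typeName,
    id: String(doc._id),
    body: doc
  });
}

// Usage with a stub client that just records the call:
var calls = [];
var stubClient = {
  index: function (params) { calls.push(params); return Promise.resolve(params); }
};
indexDocument(stubClient, 'articles', 'article', { _id: 1, title: 'hello' });
console.log(calls[0].id); // → "1"
```

In a real app you'd swap the stub for an actual client instance and call `indexDocument` from your model's save path.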
Answered by Lokendra Chauhan
I found mongo-connector useful. It is from Mongo Labs (MongoDB Inc.) and can now be used with Elasticsearch 2.x.
Elastic 2.x doc manager: https://github.com/mongodb-labs/elastic2-doc-manager
mongo-connector creates a pipeline from a MongoDB cluster to one or more target systems, such as Solr, Elasticsearch, or another MongoDB cluster. It synchronizes data in MongoDB to the target then tails the MongoDB oplog, keeping up with operations in MongoDB in real-time. It has been tested with Python 2.6, 2.7, and 3.3+. Detailed documentation is available on the wiki.
https://github.com/mongodb-labs/mongo-connector
https://github.com/mongodb-labs/mongo-connector/wiki/Usage%20with%20ElasticSearch
Answered by Lokendra Chauhan
River is a good solution if you want near-real-time synchronization and a general-purpose setup.
If you already have data in MongoDB and want to ship it to Elasticsearch very easily, "one-shot" style, you can try my Node.js package: https://github.com/itemsapi/elasticbulk.
It uses Node.js streams, so you can import data from anything that supports streams (e.g. MongoDB, PostgreSQL, MySQL, JSON files, etc.).
Example for MongoDB to Elasticsearch:
Install packages:
npm install elasticbulk
npm install mongoose
npm install bluebird
Create a script, e.g. script.js:
const elasticbulk = require('elasticbulk');
const mongoose = require('mongoose');
const Promise = require('bluebird');

mongoose.connect('mongodb://localhost/your_database_name', {
  useMongoClient: true
});

mongoose.Promise = Promise;

var Page = mongoose.model('Page', new mongoose.Schema({
  title: String,
  categories: Array
}), 'your_collection_name');

// stream query
var stream = Page.find({}, {title: 1, _id: 0, categories: 1})
  .limit(1500000)
  .skip(0)
  .batchSize(500)
  .stream();

elasticbulk.import(stream, {
  index: 'my_index_name',
  type: 'my_type_name',
  host: 'localhost:9200',
})
.then(function(res) {
  console.log('Importing finished');
});
Ship your data:
node script.js
It's not extremely fast, but it works for millions of records (thanks to streams).
Answered by Jud
Since mongo-connector now appears dead, my company decided to build a tool for using Mongo change streams to output to Elasticsearch.
Our initial results look promising. You can check it out at https://github.com/electionsexperts/mongo-stream. We're still early in development, and would welcome suggestions or contributions.
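For a sense of what such a tool does internally: each MongoDB change-stream event maps to one Elasticsearch bulk action. A sketch (the event shapes follow MongoDB's change-stream documents; the index name and function name are arbitrary, and this is not mongo-stream's actual code):

```javascript
// Translate a MongoDB change-stream event into Elasticsearch bulk-API
// action lines. Handles the common operation types; others are skipped.
function changeToBulkAction(indexName, change) {
  switch (change.operationType) {
    case 'insert':
    case 'replace':
      return [
        { index: { _index: indexName, _id: String(change.documentKey._id) } },
        change.fullDocument
      ];
    case 'delete':
      return [{ delete: { _index: indexName, _id: String(change.documentKey._id) } }];
    default:
      return null; // e.g. partial updates would need a fullDocument lookup
  }
}
```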
Answered by Priyanshu Chauhan
Here's how to do this on MongoDB 3.0. I followed this nice blog post.
- Install mongodb.
- Create data directories:
$ mkdir RANDOM_PATH/node1
$ mkdir RANDOM_PATH/node2
$ mkdir RANDOM_PATH/node3
- Start Mongod instances
$ mongod --replSet test --port 27021 --dbpath node1
$ mongod --replSet test --port 27022 --dbpath node2
$ mongod --replSet test --port 27023 --dbpath node3
- Configure the Replica Set:
$ mongo
config = {_id: 'test', members: [ {_id: 0, host: 'localhost:27021'}, {_id: 1, host: 'localhost:27022'}]};
rs.initiate(config);
- Installing Elasticsearch:
a. Download and unzip the latest Elasticsearch distribution.
b. Run bin/elasticsearch to start the es server.
c. Run curl -XGET http://localhost:9200/ to confirm it is working.
- Installing and configuring the MongoDB River:
$ bin/plugin --install com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb
$ bin/plugin --install elasticsearch/elasticsearch-mapper-attachments
- Create the “River” and the Index:
curl -XPUT 'http://localhost:8080/_river/mongodb/_meta' -d '{
  "type": "mongodb",
  "mongodb": { "db": "mydb", "collection": "foo" },
  "index": { "name": "name", "type": "random" }
}'
Test on browser:
Answered by Abhijit Bashetti
Here I found another good option to migrate your MongoDB data to Elasticsearch: Monstache, a Go daemon that syncs MongoDB to Elasticsearch in realtime. It's available at: Monstache
Below are the initial steps to configure and use it.
Step 1:
C:\Program Files\MongoDB\Server\4.0\bin>mongod --smallfiles --oplogSize 50 --replSet test
Step 2:
C:\Program Files\MongoDB\Server\4.0\bin>mongo
MongoDB shell version v4.0.2
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.2
Server has startup warnings:
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten]
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten]
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten] ** Start the server with --bind_ip <address> to specify which IP
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
2019-01-18T16:56:44.931+0530 I CONTROL [initandlisten]
MongoDB Enterprise test:PRIMARY>
Step 3: Verify the replication.
MongoDB Enterprise test:PRIMARY> rs.status();
{
"set" : "test",
"date" : ISODate("2019-01-18T11:39:00.380Z"),
"myState" : 1,
"term" : NumberLong(2),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1547811537, 1),
"t" : NumberLong(2)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1547811537, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1547811537, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1547811537, 1),
"t" : NumberLong(2)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1547811517, 1),
"members" : [
{
"_id" : 0,
"name" : "localhost:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 736,
"optime" : {
"ts" : Timestamp(1547811537, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2019-01-18T11:38:57Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1547810805, 1),
"electionDate" : ISODate("2019-01-18T11:26:45Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"operationTime" : Timestamp(1547811537, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1547811537, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
MongoDB Enterprise test:PRIMARY>
Step 4.
Download the "https://github.com/rwynn/monstache/releases".
Unzip the download and adjust your PATH variable to include the path to the folder for your platform.
Go to cmd and type "monstache -v"
# 4.13.1
Monstache uses the TOML format for its configuration. Create the configuration file for the migration, named config.toml.
Step 5.
My config.toml -->
mongo-url = "mongodb://127.0.0.1:27017/?replicaSet=test"
elasticsearch-urls = ["http://localhost:9200"]
direct-read-namespaces = [ "admin.users" ]
gzip = true
stats = true
index-stats = true
elasticsearch-max-conns = 4
elasticsearch-max-seconds = 5
elasticsearch-max-bytes = 8000000
dropped-collections = false
dropped-databases = false
resume = true
resume-write-unsafe = true
resume-name = "default"
index-files = false
file-highlighting = false
verbose = true
exit-after-direct-reads = false
index-as-update=true
index-oplog-time=true
Step 6.
monstache -f config.toml