Remove Shards From MongoDB Sharded Cluster
To remove a shard you must ensure the shard’s data is migrated to the remaining shards in the cluster.
In my environment, I have a sharded and replicated cluster where each shard is a replica set with two members, i.e. a primary and a secondary.
To remove a shard, first connect to one of the cluster’s mongos instances using the mongo shell.
[root@xxxx mongodb]# mongo --host xxxx --port 20001
MongoDB shell version: 3.0.7
connecting to: lpdosput00249:20001/test
Server has startup warnings:
2015-11-19T09:59:29.367-0700 I CONTROL
mongos> db.adminCommand( { listShards: 1 } )
{
    "shards" : [
        {
            "_id" : "shard0000",
            "host" : "rs0/10.20.176.93:30001,10.20.176.93:31001"
        },
        {
            "_id" : "shard0001",
            "host" : "rs1/10.20.176.93:30002,10.20.176.93:31002"
        },
        {
            "_id" : "shard0002",
            "host" : "rs2/10.20.176.93:30003,10.20.176.93:31003"
        }
    ],
    "ok" : 1
}
To successfully migrate data from a shard, the balancer process must be enabled.
Check the balancer state using the sh.getBalancerState() helper in the mongo shell:
mongos> sh.getBalancerState()
true
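If this returns false, enable the balancer first; with the balancer stopped, draining cannot migrate any chunks off the shard. The standard helper for this is:

mongos> sh.setBalancerState(true)
mongos> sh.getBalancerState()
true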
Next, determine the shard to remove. Make sure you pick the right name; otherwise you will end up removing the wrong shard. This operation is I/O intensive and will take time proportional to the volume of data to be moved.
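To get a rough idea of how much data will have to move, you can check per-shard sizes up front. Run through mongos, db.stats() on a sharded database includes a raw section with a per-shard breakdown (testdb here is the database whose primary shard is being removed):

mongos> db.getSiblingDB("testdb").stats().raw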
mongos> use admin
switched to db admin
mongos> db.runCommand( { removeShard: "shard0000" } )
{
    "msg" : "draining started successfully",
    "state" : "started",
    "shard" : "shard0000",
    "note" : "you need to drop or movePrimary these databases",
    "dbsToMove" : [
        "testdb"
    ],
    "ok" : 1
}
As the message indicates, draining has started successfully. This begins “draining” chunks from the shard you are removing to the other shards in the cluster.
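You can also list the databases that still have shard0000 as their primary shard by querying the config database directly; given the cluster metadata shown below, testdb is the only match here:

mongos> db.getSiblingDB("config").databases.find( { primary: "shard0000" } )
{ "_id" : "testdb", "partitioned" : true, "primary" : "shard0000" }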
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("564659a6ecf5ebdaa8b6fec9")
  }
  shards:
    { "_id" : "shard0000", "host" : "rs0/10.20.176.93:30001,10.20.176.93:31001", "draining" : true }
    { "_id" : "shard0001", "host" : "rs1/10.20.176.93:30002,10.20.176.93:31002" }
    { "_id" : "shard0002", "host" : "rs2/10.20.176.93:30003,10.20.176.93:31003" }
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 3
    Last reported error: ReplicaSetMonitor no master found for set: rs0
    Time of Reported error: Wed Nov 18 2015 16:02:28 GMT-0700 (MST)
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "testdb", "partitioned" : true, "primary" : "shard0000" }
    { "_id" : "test_db", "partitioned" : true, "primary" : "shard0001" }
    { "_id" : "apiv3", "partitioned" : false, "primary" : "shard0001" }
    { "_id" : "mmsdbpings", "partitioned" : false, "primary" : "shard0001" }
    { "_id" : "test", "partitioned" : false, "primary" : "shard0001" }
To check the progress of the migration at any stage in the process, run removeShard from the admin database again.
mongos> db.runCommand( { removeShard: "shard0000" } )
{
    "msg" : "draining ongoing",
    "state" : "ongoing",
    "remaining" : {
        "chunks" : NumberLong(0),
        "dbs" : NumberLong(1)
    },
    "note" : "you need to drop or movePrimary these databases",
    "dbsToMove" : [
        "testdb"
    ],
    "ok" : 1
}
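Rather than re-running the command by hand, you can poll it from the shell until the state reaches completed; a minimal sketch (the 60-second interval is an arbitrary choice):

mongos> var res = db.adminCommand( { removeShard: "shard0000" } );
mongos> while ( res.state != "completed" ) {
...     printjson( res.remaining );   // chunks and dbs left to move
...     sleep( 60 * 1000 );           // wait a minute between checks
...     res = db.adminCommand( { removeShard: "shard0000" } );
... }

Note that the drain will never reach the completed state while dbsToMove is non-empty; you must first drop or movePrimary those databases, which is exactly what we do next.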
Here all chunks have been drained ("chunks" : NumberLong(0)), but shard0000 is still the primary shard for the testdb database, so its unsharded collections still live there. Move the primary for testdb to another shard:
mongos> db.runCommand( { movePrimary: "testdb", to: "shard0001" } )
{
    "primary " : "shard0001:rs1/10.20.176.93:30002,10.20.176.93:31002",
    "ok" : 1
}
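One caveat: after movePrimary, mongos instances can hold stale metadata for the moved (unsharded) collections. The documented remedy on this version is to either restart every mongos or run flushRouterConfig on each of them:

mongos> db.adminCommand( { flushRouterConfig: 1 } )
{ "flushed" : true, "ok" : 1 }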
Check the status of the drain operation once more:
mongos> db.runCommand( { removeShard: "shard0000" } )
{
    "msg" : "removeshard completed successfully",
    "state" : "completed",
    "shard" : "shard0000",
    "ok" : 1
}
Now if you check the status of the sharded cluster, you will see only two shards available.
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("564659a6ecf5ebdaa8b6fec9")
  }
  shards:
    { "_id" : "shard0001", "host" : "rs1/10.20.176.93:30002,10.20.176.93:31002" }
    { "_id" : "shard0002", "host" : "rs2/10.20.176.93:30003,10.20.176.93:31003" }
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "testdb", "partitioned" : true, "primary" : "shard0001" }
    { "_id" : "test_db", "partitioned" : true, "primary" : "shard0001" }
    { "_id" : "apiv3", "partitioned" : false, "primary" : "shard0001" }
    { "_id" : "mmsdbpings", "partitioned" : false, "primary" : "shard0001" }
    { "_id" : "test", "partitioned" : false, "primary" : "shard0001" }
Hope this helps you remove replicated shards from your cluster. For non-replicated shards the process remains the same.
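Once removeShard reports completed and the shard no longer appears in sh.status(), it is safe to shut down that shard's mongod processes, for example from each member's admin database:

> use admin
switched to db admin
> db.shutdownServer()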