0-management: Commit failed for operation Start on local node

TomK
Hey All,

I'm getting the error below when trying to start a 2-node Gluster cluster.

I had quorum enabled when I was on version 3.12. With this version, however,
it needed quorum disabled, so I disabled it, but now I see the error in the
subject line.

Any ideas what I could try next?
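
For reference, disabling quorum amounted to roughly the commands below (a
sketch; the server-quorum-type command is in my shell history further down
in this thread, and both quorum options show as "none" in the volume info):

gluster volume set mdsgv01 cluster.server-quorum-type none
gluster volume set mdsgv01 cluster.quorum-type none
# confirm what the volume currently has set
gluster volume get mdsgv01 all | grep -i quorum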

--
Thx,
TK.


[2019-09-25 05:17:26.615203] D [MSGID: 0]
[glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: Returning 0
[2019-09-25 05:17:26.615555] D [MSGID: 0]
[glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5.
Returning 0
[2019-09-25 05:17:26.616271] D [MSGID: 0]
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
mdsgv01 found
[2019-09-25 05:17:26.616305] D [MSGID: 0]
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2019-09-25 05:17:26.616327] D [MSGID: 0]
[glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning 0
[2019-09-25 05:17:26.617056] I
[glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a
fresh brick process for brick /mnt/p01-d01/glusterv01
[2019-09-25 05:17:26.722717] E [MSGID: 106005]
[glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to
start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
[2019-09-25 05:17:26.722960] D [MSGID: 0]
[glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107
[2019-09-25 05:17:26.723006] E [MSGID: 106122]
[glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start
commit failed.
[2019-09-25 05:17:26.723027] D [MSGID: 0]
[glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5.
Returning -107
[2019-09-25 05:17:26.723045] E [MSGID: 106122]
[glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit
failed for operation Start on local node
[2019-09-25 05:17:26.723073] D [MSGID: 0]
[glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: op_ctx
modification not required
[2019-09-25 05:17:26.723141] E [MSGID: 106122]
[glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases]
0-management: Commit Op Failed
[2019-09-25 05:17:26.723204] D [MSGID: 0]
[glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: Trying to
release lock of vol mdsgv01 for f7336db6-22b4-497d-8c2f-04c833a28546 as
mdsgv01_vol
[2019-09-25 05:17:26.723239] D [MSGID: 0]
[glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock for
vol mdsgv01 successfully released
[2019-09-25 05:17:26.723273] D [MSGID: 0]
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
mdsgv01 found
[2019-09-25 05:17:26.723326] D [MSGID: 0]
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2019-09-25 05:17:26.723360] D [MSGID: 0]
[glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management:
Returning 0

==> /var/log/glusterfs/cmd_history.log <==
[2019-09-25 05:17:26.723390]  : volume start mdsgv01 : FAILED : Commit
failed on localhost. Please check log file for details.

==> /var/log/glusterfs/glusterd.log <==
[2019-09-25 05:17:26.723479] D [MSGID: 0]
[glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management:
Returning 0



[root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol
volume management
     type mgmt/glusterd
     option working-directory /var/lib/glusterd
     option transport-type socket,rdma
     option transport.socket.keepalive-time 10
     option transport.socket.keepalive-interval 2
     option transport.socket.read-fail-log off
     option ping-timeout 0
     option event-threads 1
     option rpc-auth-allow-insecure on
     # option cluster.server-quorum-type server
     # option cluster.quorum-type auto
     option server.event-threads 8
     option client.event-threads 8
     option performance.write-behind-window-size 8MB
     option performance.io-thread-count 16
     option performance.cache-size 1GB
     option nfs.trusted-sync on
     option storage.owner-uid 36
     option storage.owner-uid 36
     option cluster.data-self-heal-algorithm full
     option performance.low-prio-threads 32
     option features.shard-block-size 512MB
     option features.shard on
end-volume
[root@mdskvm-p01 glusterfs]#


[root@mdskvm-p01 glusterfs]# gluster volume info

Volume Name: mdsgv01
Type: Replicate
Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02
Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
Options Reconfigured:
storage.owner-gid: 36
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-uid: 36
cluster.server-quorum-type: none
cluster.quorum-type: none
server.event-threads: 8
client.event-threads: 8
performance.write-behind-window-size: 8MB
performance.io-thread-count: 16
performance.cache-size: 1GB
nfs.trusted-sync: on
server.allow-insecure: on
performance.readdir-ahead: on
diagnostics.brick-log-level: DEBUG
diagnostics.brick-sys-log-level: INFO
diagnostics.client-log-level: DEBUG
[root@mdskvm-p01 glusterfs]#


Re: 0-management: Commit failed for operation Start on local node

Sanju Rakonde
Hi, the errors below indicate that the brick process failed to start. Please attach the brick log.

[glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a 
fresh brick process for brick /mnt/p01-d01/glusterv01
[2019-09-25 05:17:26.722717] E [MSGID: 106005] 
[glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to 
start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
[2019-09-25 05:17:26.722960] D [MSGID: 0] 
[glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107
[2019-09-25 05:17:26.723006] E [MSGID: 106122] 
[glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start 
commit failed.
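
(In case it helps to locate it: brick logs normally sit under
/var/log/glusterfs/bricks/ on the node hosting the brick and are named after
the brick path. The exact file name below is an assumption based on the
brick path in the log above.)

ls /var/log/glusterfs/bricks/
tail -n 200 /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log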




--
Thanks,
Sanju

Re: 0-management: Commit failed for operation Start on local node

TomK
Attached.



--
Thx,
TK.



Attachment: glusterd-logs.tar.gz (914K)
Re: 0-management: Commit failed for operation Start on local node

TomK


Brick log for the specific gluster volume start attempt (full log attached):

[2019-09-25 10:53:37.847426] I [MSGID: 100030] [glusterfsd.c:2847:main]
0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.5
(args: /usr/sbin/glusterfsd -s mdskvm-p01.nix.mds.xyz --volfile-id
mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p
/var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid
-S /var/run/gluster/defbdb699838d53b.socket --brick-name
/mnt/p01-d01/glusterv01 -l
/var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option
*-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546
--process-name brick --brick-port 49155 --xlator-option
mdsgv01-server.listen-port=49155)
[2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize]
0-glusterfs: Pid of current running process is 23133
[2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind]
0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2019-09-25 10:53:37.865940] I [MSGID: 101190]
[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 0
[2019-09-25 10:53:37.866054] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify]
0-glusterfsd-mgmt: disconnected from remote-host: mdskvm-p01.nix.mds.xyz
[2019-09-25 10:53:37.866043] I [MSGID: 101190]
[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2019-09-25 10:53:37.866083] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers
[2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3]
-->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
-->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
received signum (1), shutting down
[2019-09-25 10:53:37.872399] I
[socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected
(priv->connected = 0)
[2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit]
0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 Program:
Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs)
[2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3]
-->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
-->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
received signum (1), shutting down
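
Those "disconnected from remote-host" / "Exhausted all volfile servers"
lines suggest the brick process gave up trying to fetch its volfile from
glusterd. A quick way to check what glusterd is actually listening on (the
same information as the netstat output later in this thread):

ss -tlnp | grep -E 'glusterd|24007'
netstat -pnltu | grep glusterd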






--
Thx,
TK.



Attachment: glusterd-brick.tar.gz (38K)
Re: 0-management: Commit failed for operation Start on local node

TomK
Mind you, I just upgraded from 3.12 to 6.X.
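
(The brick log in my previous message shows 6.5 as the running version; a
quick way to double-check what is installed on a node, assuming an RPM-based
setup:)

gluster --version | head -1
rpm -qa | grep -i glusterfs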



--
Thx,
TK.

Re: 0-management: Commit failed for operation Start on local node

TomK

This issue looked nearly identical to:

https://bugzilla.redhat.com/show_bug.cgi?id=1702316

so I tried:

option transport.socket.listen-port 24007

And it worked:

[root@mdskvm-p01 glusterfs]# systemctl stop glusterd
[root@mdskvm-p01 glusterfs]# history|grep server-quorum
  3149  gluster volume set mdsgv01 cluster.server-quorum-type none
  3186  history|grep server-quorum
[root@mdskvm-p01 glusterfs]# gluster volume set mdsgv01
transport.socket.listen-port 24007
Connection failed. Please check if gluster daemon is operational.
[root@mdskvm-p01 glusterfs]# systemctl start glusterd
[root@mdskvm-p01 glusterfs]# gluster volume set mdsgv01
transport.socket.listen-port 24007
volume set: failed: option : transport.socket.listen-port does not exist
Did you mean transport.keepalive or ...listen-backlog?
[root@mdskvm-p01 glusterfs]#
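
Since transport.socket.listen-port is not a volume option (the "volume set"
above fails), I put it into /etc/glusterfs/glusterd.vol instead. A minimal
sketch of the change (the full file is shown further down):

volume management
    type mgmt/glusterd
    ...
    option transport.socket.listen-port 24007
end-volume

# pick up the new glusterd.vol, then retry the volume start
systemctl restart glusterd
gluster volume start mdsgv01
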
[root@mdskvm-p01 glusterfs]# netstat -pnltu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:16514           0.0.0.0:*               LISTEN      4562/libvirtd
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      24193/glusterd
tcp        0      0 0.0.0.0:2223            0.0.0.0:*               LISTEN      4277/sshd
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:51760           0.0.0.0:*               LISTEN      4479/rpc.statd
tcp        0      0 0.0.0.0:54322           0.0.0.0:*               LISTEN      13229/python
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      4279/sshd
tcp6       0      0 :::54811                :::*                    LISTEN      4479/rpc.statd
tcp6       0      0 :::16514                :::*                    LISTEN      4562/libvirtd
tcp6       0      0 :::2223                 :::*                    LISTEN      4277/sshd
tcp6       0      0 :::111                  :::*                    LISTEN      3357/rpcbind
tcp6       0      0 :::54321                :::*                    LISTEN      13225/python2
tcp6       0      0 :::22                   :::*                    LISTEN      4279/sshd
udp        0      0 0.0.0.0:24009           0.0.0.0:*                           4281/python2
udp        0      0 0.0.0.0:38873           0.0.0.0:*                           4479/rpc.statd
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1/systemd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           3361/chronyd
udp        0      0 127.0.0.1:839           0.0.0.0:*                           4479/rpc.statd
udp        0      0 0.0.0.0:935             0.0.0.0:*                           3357/rpcbind
udp6       0      0 :::46947                :::*                                4479/rpc.statd
udp6       0      0 :::111                  :::*                                3357/rpcbind
udp6       0      0 ::1:323                 :::*                                3361/chronyd
udp6       0      0 :::935                  :::*                                3357/rpcbind
[root@mdskvm-p01 glusterfs]# gluster volume start mdsgv01
volume start: mdsgv01: success
[root@mdskvm-p01 glusterfs]# gluster volume info

Volume Name: mdsgv01
Type: Replicate
Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02
Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
Options Reconfigured:
storage.owner-gid: 36
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-uid: 36
cluster.server-quorum-type: none
cluster.quorum-type: none
server.event-threads: 8
client.event-threads: 8
performance.write-behind-window-size: 8MB
performance.io-thread-count: 16
performance.cache-size: 1GB
nfs.trusted-sync: on
server.allow-insecure: on
performance.readdir-ahead: on
diagnostics.brick-log-level: DEBUG
diagnostics.brick-sys-log-level: INFO
diagnostics.client-log-level: DEBUG
[root@mdskvm-p01 glusterfs]# gluster volume status
Status of volume: mdsgv01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
lusterv01                                   49152     0          Y       24487
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       24515

Task Status of Volume mdsgv01
------------------------------------------------------------------------------
There are no active volume tasks

[root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol
volume management
     type mgmt/glusterd
     option working-directory /var/lib/glusterd
     option transport-type socket,rdma
     option transport.socket.keepalive-time 10
     option transport.socket.keepalive-interval 2
     option transport.socket.read-fail-log off
     option ping-timeout 0
     option event-threads 1
     option rpc-auth-allow-insecure on
     option cluster.server-quorum-type none
     option cluster.quorum-type none
     # option cluster.server-quorum-type server
     # option cluster.quorum-type auto
     option server.event-threads 8
     option client.event-threads 8
     option performance.write-behind-window-size 8MB
     option performance.io-thread-count 16
     option performance.cache-size 1GB
     option nfs.trusted-sync on
     option storage.owner-uid 36
     option storage.owner-uid 36
     option cluster.data-self-heal-algorithm full
     option performance.low-prio-threads 32
     option features.shard-block-size 512MB
     option features.shard on
     option transport.socket.listen-port 24007
end-volume
[root@mdskvm-p01 glusterfs]#


Cheers,
TK


On 9/25/2019 7:05 AM, TomK wrote:

> Mind you, I just upgraded from 3.12 to 6.X.
>
> On 9/25/2019 6:56 AM, TomK wrote:
>>
>>
>> Brick log for specific gluster start command attempt (full log attached):
>>
>> [2019-09-25 10:53:37.847426] I [MSGID: 100030]
>> [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running
>> /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s
>> mdskvm-p01.nix.mds.xyz --volfile-id
>> mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p
>> /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid
>> -S /var/run/gluster/defbdb699838d53b.socket --brick-name
>> /mnt/p01-d01/glusterv01 -l
>> /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option
>> *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546
>> --process-name brick --brick-port 49155 --xlator-option
>> mdsgv01-server.listen-port=49155)
>> [2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize]
>> 0-glusterfs: Pid of current running process is 23133
>> [2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind]
>> 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
>> [2019-09-25 10:53:37.865940] I [MSGID: 101190]
>> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started
>> thread with index 0
>> [2019-09-25 10:53:37.866054] I
>> [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt:
>> disconnected from remote-host: mdskvm-p01.nix.mds.xyz
>> [2019-09-25 10:53:37.866043] I [MSGID: 101190]
>> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started
>> thread with index 1
>> [2019-09-25 10:53:37.866083] I
>> [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted
>> all volfile servers
>> [2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit]
>> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3]
>> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
>> received signum (1), shutting down
>> [2019-09-25 10:53:37.872399] I
>> [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected
>> (priv->connected = 0)
>> [2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit]
>> 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2
>> Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport
>> (glusterfs)
>> [2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit]
>> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3]
>> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
>> received signum (1), shutting down
>>
>>
>>
>>
>>
>> On 9/25/2019 6:48 AM, TomK wrote:
>>> Attached.
>>>
>>>
>>> On 9/25/2019 5:08 AM, Sanju Rakonde wrote:
>>>> Hi, The below errors indicate that brick process is failed to start.
>>>> Please attach brick log.
>>>>
>>>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a
>>>> fresh brick process for brick /mnt/p01-d01/glusterv01
>>>> [2019-09-25 05:17:26.722717] E [MSGID: 106005]
>>>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to
>>>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>>>> [2019-09-25 05:17:26.722960] D [MSGID: 0]
>>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning
>>>> -107
>>>> [2019-09-25 05:17:26.723006] E [MSGID: 106122]
>>>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start
>>>> commit failed.
>>>>
>>>> On Wed, Sep 25, 2019 at 11:00 AM TomK <[hidden email]
>>>> <mailto:[hidden email]>> wrote:
>>>>
>>>>     Hey All,
>>>>
>>>>     I'm getting the below error when trying to start a 2 node Gluster
>>>>     cluster.
>>>>
>>>>     I had the quorum enabled when I was at version 3.12 .  However with
>>>>     this
>>>>     version it needed the quorum disabled.  So I did so however now
>>>> see the
>>>>     subject error.
>>>>
>>>>     Any ideas what I could try next?
>>>>
>>>>     --     Thx,
>>>>     TK.
>>>>
>>>>
>>>>     [2019-09-25 05:17:26.615203] D [MSGID: 0]
>>>>     [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management:
>>>> Returning 0
>>>>     [2019-09-25 05:17:26.615555] D [MSGID: 0]
>>>>     [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management:
>>>> OP = 5.
>>>>     Returning 0
>>>>     [2019-09-25 05:17:26.616271] D [MSGID: 0]
>>>>     [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
>>>>     mdsgv01 found
>>>>     [2019-09-25 05:17:26.616305] D [MSGID: 0]
>>>>     [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management:
>>>> Returning 0
>>>>     [2019-09-25 05:17:26.616327] D [MSGID: 0]
>>>>     [glusterd-utils.c:6327:glusterd_brick_start] 0-management:
>>>> returning 0
>>>>     [2019-09-25 05:17:26.617056] I
>>>>     [glusterd-utils.c:6312:glusterd_brick_start] 0-management:
>>>> starting a
>>>>     fresh brick process for brick /mnt/p01-d01/glusterv01
>>>>     [2019-09-25 05:17:26.722717] E [MSGID: 106005]
>>>>     [glusterd-utils.c:6317:glusterd_brick_start] 0-management:
>>>> Unable to
>>>>     start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>>>>     [2019-09-25 05:17:26.722960] D [MSGID: 0]
>>>>     [glusterd-utils.c:6327:glusterd_brick_start] 0-management:
>>>> returning
>>>>     -107
>>>>     [2019-09-25 05:17:26.723006] E [MSGID: 106122]
>>>>     [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume
>>>> start
>>>>     commit failed.
>>>>     [2019-09-25 05:17:26.723027] D [MSGID: 0]
>>>>     [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5.
>>>>     Returning -107
>>>>     [2019-09-25 05:17:26.723045] E [MSGID: 106122]
>>>>     [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit
>>>>     failed for operation Start on local node
>>>>     [2019-09-25 05:17:26.723073] D [MSGID: 0]
>>>>     [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management:
>>>> op_ctx
>>>>     modification not required
>>>>     [2019-09-25 05:17:26.723141] E [MSGID: 106122]
>>>>     [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases]
>>>>     0-management: Commit Op Failed
>>>>     [2019-09-25 05:17:26.723204] D [MSGID: 0]
>>>>     [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management:
>>>> Trying to
>>>>     release lock of vol mdsgv01 for
>>>> f7336db6-22b4-497d-8c2f-04c833a28546 as
>>>>     mdsgv01_vol
>>>>     [2019-09-25 05:17:26.723239] D [MSGID: 0]
>>>>     [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management:
>>>> Lock for
>>>>     vol mdsgv01 successfully released
>>>>     [2019-09-25 05:17:26.723273] D [MSGID: 0]
>>>>     [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
>>>>     mdsgv01 found
>>>>     [2019-09-25 05:17:26.723326] D [MSGID: 0]
>>>>     [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management:
>>>> Returning 0
>>>>     [2019-09-25 05:17:26.723360] D [MSGID: 0]
>>>>     [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock]
>>>> 0-management:
>>>>     Returning 0
>>>>
>>>>     ==> /var/log/glusterfs/cmd_history.log <==
>>>>     [2019-09-25 05:17:26.723390]  : volume start mdsgv01 : FAILED :
>>>> Commit
>>>>     failed on localhost. Please check log file for details.
>>>>
>>>>     ==> /var/log/glusterfs/glusterd.log <==
>>>>     [2019-09-25 05:17:26.723479] D [MSGID: 0]
>>>>     [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response]
>>>> 0-management:
>>>>     Returning 0
>>>>
>>>>
>>>>
>>>>     [root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol
>>>>     volume management
>>>>           type mgmt/glusterd
>>>>           option working-directory /var/lib/glusterd
>>>>           option transport-type socket,rdma
>>>>           option transport.socket.keepalive-time 10
>>>>           option transport.socket.keepalive-interval 2
>>>>           option transport.socket.read-fail-log off
>>>>           option ping-timeout 0
>>>>           option event-threads 1
>>>>           option rpc-auth-allow-insecure on
>>>>           # option cluster.server-quorum-type server
>>>>           # option cluster.quorum-type auto
>>>>           option server.event-threads 8
>>>>           option client.event-threads 8
>>>>           option performance.write-behind-window-size 8MB
>>>>           option performance.io-thread-count 16
>>>>           option performance.cache-size 1GB
>>>>           option nfs.trusted-sync on
>>>>           option storage.owner-uid 36
>>>>           option storage.owner-uid 36
>>>>           option cluster.data-self-heal-algorithm full
>>>>           option performance.low-prio-threads 32
>>>>           option features.shard-block-size 512MB
>>>>           option features.shard on
>>>>     end-volume
>>>>     [root@mdskvm-p01 glusterfs]#
>>>>
>>>>
>>>>     [root@mdskvm-p01 glusterfs]# gluster volume info
>>>>
>>>>     Volume Name: mdsgv01
>>>>     Type: Replicate
>>>>     Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
>>>>     Status: Stopped
>>>>     Snapshot Count: 0
>>>>     Number of Bricks: 1 x 2 = 2
>>>>     Transport-type: tcp
>>>>     Bricks:
>>>>     Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02
>>>>     Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>>>>     Options Reconfigured:
>>>>     storage.owner-gid: 36
>>>>     cluster.data-self-heal-algorithm: full
>>>>     performance.low-prio-threads: 32
>>>>     features.shard-block-size: 512MB
>>>>     features.shard: on
>>>>     storage.owner-uid: 36
>>>>     cluster.server-quorum-type: none
>>>>     cluster.quorum-type: none
>>>>     server.event-threads: 8
>>>>     client.event-threads: 8
>>>>     performance.write-behind-window-size: 8MB
>>>>     performance.io-thread-count: 16
>>>>     performance.cache-size: 1GB
>>>>     nfs.trusted-sync: on
>>>>     server.allow-insecure: on
>>>>     performance.readdir-ahead: on
>>>>     diagnostics.brick-log-level: DEBUG
>>>>     diagnostics.brick-sys-log-level: INFO
>>>>     diagnostics.client-log-level: DEBUG
>>>>     [root@mdskvm-p01 glusterfs]#
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Sanju
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>


--
Thx,
TK.


Re: 0-management: Commit failed for operation Start on local node

Sanju Rakonde
Great that you have managed to figure out the issue.

On Wed, Sep 25, 2019 at 4:47 PM TomK <[hidden email]> wrote:

This issue looked nearly identical to:

https://bugzilla.redhat.com/show_bug.cgi?id=1702316

so I tried setting:

option transport.socket.listen-port 24007

And it worked:

[root@mdskvm-p01 glusterfs]# systemctl stop glusterd
[root@mdskvm-p01 glusterfs]# history|grep server-quorum
  3149  gluster volume set mdsgv01 cluster.server-quorum-type none
  3186  history|grep server-quorum
[root@mdskvm-p01 glusterfs]# gluster volume set mdsgv01
transport.socket.listen-port 24007
Connection failed. Please check if gluster daemon is operational.
[root@mdskvm-p01 glusterfs]# systemctl start glusterd
[root@mdskvm-p01 glusterfs]# gluster volume set mdsgv01
transport.socket.listen-port 24007
volume set: failed: option : transport.socket.listen-port does not exist
Did you mean transport.keepalive or ...listen-backlog?
[root@mdskvm-p01 glusterfs]#
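
As an aside, transport.socket.listen-port is an option of the glusterd management daemon itself, not a per-volume option, which is why "gluster volume set" rejects it above; here it ends up in /etc/glusterfs/glusterd.vol instead (the cat of that file at the end of this message shows the exact placement). A minimal sketch of applying it, assuming the stock paths used throughout this thread:

systemctl stop glusterd
# add the line below inside the "volume management" ... "end-volume" block:
#     option transport.socket.listen-port 24007
vi /etc/glusterfs/glusterd.vol
systemctl start glusterd
netstat -pnlt | grep 24007    # glusterd should now be listening on the default port
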
[root@mdskvm-p01 glusterfs]# netstat -pnltu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:16514           0.0.0.0:*               LISTEN      4562/libvirtd
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      24193/glusterd
tcp        0      0 0.0.0.0:2223            0.0.0.0:*               LISTEN      4277/sshd
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:51760           0.0.0.0:*               LISTEN      4479/rpc.statd
tcp        0      0 0.0.0.0:54322           0.0.0.0:*               LISTEN      13229/python
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      4279/sshd
tcp6       0      0 :::54811                :::*                    LISTEN      4479/rpc.statd
tcp6       0      0 :::16514                :::*                    LISTEN      4562/libvirtd
tcp6       0      0 :::2223                 :::*                    LISTEN      4277/sshd
tcp6       0      0 :::111                  :::*                    LISTEN      3357/rpcbind
tcp6       0      0 :::54321                :::*                    LISTEN      13225/python2
tcp6       0      0 :::22                   :::*                    LISTEN      4279/sshd
udp        0      0 0.0.0.0:24009           0.0.0.0:*                           4281/python2
udp        0      0 0.0.0.0:38873           0.0.0.0:*                           4479/rpc.statd
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1/systemd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           3361/chronyd
udp        0      0 127.0.0.1:839           0.0.0.0:*                           4479/rpc.statd
udp        0      0 0.0.0.0:935             0.0.0.0:*                           3357/rpcbind
udp6       0      0 :::46947                :::*                                4479/rpc.statd
udp6       0      0 :::111                  :::*                                3357/rpcbind
udp6       0      0 ::1:323                 :::*                                3361/chronyd
udp6       0      0 :::935                  :::*                                3357/rpcbind
[root@mdskvm-p01 glusterfs]# gluster volume start mdsgv01
volume start: mdsgv01: success
[root@mdskvm-p01 glusterfs]# gluster volume info

Volume Name: mdsgv01
Type: Replicate
Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02
Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
Options Reconfigured:
storage.owner-gid: 36
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-uid: 36
cluster.server-quorum-type: none
cluster.quorum-type: none
server.event-threads: 8
client.event-threads: 8
performance.write-behind-window-size: 8MB
performance.io-thread-count: 16
performance.cache-size: 1GB
nfs.trusted-sync: on
server.allow-insecure: on
performance.readdir-ahead: on
diagnostics.brick-log-level: DEBUG
diagnostics.brick-sys-log-level: INFO
diagnostics.client-log-level: DEBUG
[root@mdskvm-p01 glusterfs]# gluster volume status
Status of volume: mdsgv01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
lusterv01                                   49152     0          Y       24487
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       24515

Task Status of Volume mdsgv01
------------------------------------------------------------------------------
There are no active volume tasks

[root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol
volume management
     type mgmt/glusterd
     option working-directory /var/lib/glusterd
     option transport-type socket,rdma
     option transport.socket.keepalive-time 10
     option transport.socket.keepalive-interval 2
     option transport.socket.read-fail-log off
     option ping-timeout 0
     option event-threads 1
     option rpc-auth-allow-insecure on
     option cluster.server-quorum-type none
     option cluster.quorum-type none
     # option cluster.server-quorum-type server
     # option cluster.quorum-type auto
     option server.event-threads 8
     option client.event-threads 8
     option performance.write-behind-window-size 8MB
     option performance.io-thread-count 16
     option performance.cache-size 1GB
     option nfs.trusted-sync on
     option storage.owner-uid 36
     option storage.owner-uid 36
     option cluster.data-self-heal-algorithm full
     option performance.low-prio-threads 32
     option features.shard-block-size 512MB
     option features.shard on
     option transport.socket.listen-port 24007
end-volume
[root@mdskvm-p01 glusterfs]#
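
For context, this lines up with the brick log quoted further down ("disconnected from remote-host", "Exhausted all volfile servers", then the brick shutting down): a brick process fetches its volfile from glusterd over the management port, so if glusterd is not actually listening on 24007 the brick exits almost immediately and the volume-start commit fails. A rough way to cross-check on a node, assuming the standard paths already shown in this thread:

grep -i listen-port /etc/glusterfs/glusterd.vol                   # is the port pinned?
netstat -pnlt | grep glusterd                                     # is glusterd on 24007?
tail -n 30 /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log   # what the brick saw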


Cheers,
TK


On 9/25/2019 7:05 AM, TomK wrote:
> Mind you, I just upgraded from 3.12 to 6.X.
>
> On 9/25/2019 6:56 AM, TomK wrote:
>>
>>
>> Brick log for specific gluster start command attempt (full log attached):
>>
>> [2019-09-25 10:53:37.847426] I [MSGID: 100030]
>> [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running
>> /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s
>> mdskvm-p01.nix.mds.xyz --volfile-id
>> mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p
>> /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid
>> -S /var/run/gluster/defbdb699838d53b.socket --brick-name
>> /mnt/p01-d01/glusterv01 -l
>> /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option
>> *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546
>> --process-name brick --brick-port 49155 --xlator-option
>> mdsgv01-server.listen-port=49155)
>> [2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize]
>> 0-glusterfs: Pid of current running process is 23133
>> [2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind]
>> 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
>> [2019-09-25 10:53:37.865940] I [MSGID: 101190]
>> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started
>> thread with index 0
>> [2019-09-25 10:53:37.866054] I
>> [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt:
>> disconnected from remote-host: mdskvm-p01.nix.mds.xyz
>> [2019-09-25 10:53:37.866043] I [MSGID: 101190]
>> [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started
>> thread with index 1
>> [2019-09-25 10:53:37.866083] I
>> [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted
>> all volfile servers
>> [2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit]
>> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3]
>> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
>> received signum (1), shutting down
>> [2019-09-25 10:53:37.872399] I
>> [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected
>> (priv->connected = 0)
>> [2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit]
>> 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2
>> Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport
>> (glusterfs)
>> [2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit]
>> (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3]
>> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
>> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
>> received signum (1), shutting down
>>
>>
>>
>>
>>
>> On 9/25/2019 6:48 AM, TomK wrote:
>>> Attached.
>>>
>>>
>>> On 9/25/2019 5:08 AM, Sanju Rakonde wrote:
>>>> Hi, the errors below indicate that the brick process failed to start.
>>>> Please attach the brick log.
>>>>
>>>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a
>>>> fresh brick process for brick /mnt/p01-d01/glusterv01
>>>> [2019-09-25 05:17:26.722717] E [MSGID: 106005]
>>>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to
>>>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>>>> [2019-09-25 05:17:26.722960] D [MSGID: 0]
>>>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning
>>>> -107
>>>> [2019-09-25 05:17:26.723006] E [MSGID: 106122]
>>>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start
>>>> commit failed.
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Sanju
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>


--
Thx,
TK.


--
Thanks,
Sanju

_______________________________________________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
[hidden email]
https://lists.gluster.org/mailman/listinfo/gluster-devel