Announcing Gluster release 8.1


Rinku Kothiya
Hi,

The Gluster community is pleased to announce the release of Gluster 8.1 (packages available at [1]).
Release notes for the release can be found at [2].

Major changes, features, improvements and limitations addressed in this release:

 - Performance improvement in creating large files (e.g. VM disks in oVirt) by cutting down trivial lookups of non-existent shards (issue #1425).
 - Fsync in the replication module now uses the eager-lock functionality, which improves the performance of VM workloads; write-heavy workloads with small blocks of roughly 4 KB show an improvement of more than 50% (issue #1253). A rough sketch of the eager-lock idea follows this list.
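
For readers unfamiliar with eager locking, the sketch below illustrates the general idea only; the names are hypothetical and this is not the actual replication code. Instead of paying the lock/unlock cost around every individual write, the first write in a burst acquires the lock and subsequent writes piggy-back on it, so the expensive acquisition happens once per burst rather than once per write.

/*
 * Hypothetical illustration of the eager-lock idea, not GlusterFS code.
 * take_remote_lock()/drop_remote_lock() stand in for an expensive
 * cluster-wide lock; back-to-back writes reuse one acquisition instead
 * of paying a lock/unlock round trip for every write.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static void take_remote_lock(void) { puts("remote lock acquired"); }
static void drop_remote_lock(void) { puts("remote lock released"); }

struct eager_lock {
    pthread_mutex_t guard;  /* protects the two fields below */
    bool held;              /* is the remote lock currently held? */
    int in_flight;          /* writes currently sharing the held lock */
};

/* Called before each write. */
static void eager_lock_begin(struct eager_lock *el)
{
    pthread_mutex_lock(&el->guard);
    if (!el->held) {
        take_remote_lock();  /* only the first writer in a burst pays this */
        el->held = true;
    }
    el->in_flight++;
    pthread_mutex_unlock(&el->guard);
}

/* Called after each write; the remote lock is dropped only when no
 * writes are left to piggy-back on it. */
static void eager_lock_end(struct eager_lock *el)
{
    pthread_mutex_lock(&el->guard);
    if (--el->in_flight == 0) {
        drop_remote_lock();
        el->held = false;
    }
    pthread_mutex_unlock(&el->guard);
}

Real implementations typically also delay the release briefly so that a write arriving a moment later can still reuse the lock; the sketch only shows the piggy-backing part.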


Thanks,
Gluster community

References:

[1] Packages for 8.1:
https://download.gluster.org/pub/gluster/glusterfs/8/8.1/

[2] Release notes for 8.1:
https://docs.gluster.org/en/latest/release-notes/8.1/

_______________________________________________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
[hidden email]
https://lists.gluster.org/mailman/listinfo/gluster-devel


Inode vs. inode table locking

Dmitry Antipov
Hello all,

I'm trying to debug and fix one of the possible issues reported by TSAN:

WARNING: ThreadSanitizer: data race (pid=1366262)
   Read of size 8 at 0x7b5800020800 by thread T17 (mutexes: write M17352):
     #0 __inode_ctx_get2 /home/antipov/glusterfs/libglusterfs/src/inode.c:2144 (libglusterfs.so.0+0x5cb96)
     #1 __inode_ctx_get0 /home/antipov/glusterfs/libglusterfs/src/inode.c:2169 (libglusterfs.so.0+0x5cd45)
     #2 pl_inode_get /home/antipov/glusterfs/xlators/features/locks/src/common.c:439 (locks.so+0x5c2b)
     #3 pl_flush /home/antipov/glusterfs/xlators/features/locks/src/posix.c:1836 (locks.so+0x24aac)
     #4 default_flush /home/antipov/glusterfs/libglusterfs/src/defaults.c:2531 (libglusterfs.so.0+0x1a4c50)
     #5 default_flush /home/antipov/glusterfs/libglusterfs/src/defaults.c:2531 (libglusterfs.so.0+0x1a4c50)
     #6 leases_flush /home/antipov/glusterfs/xlators/features/leases/src/leases.c:892 (leases.so+0x20f08)
     #7 default_flush /home/antipov/glusterfs/libglusterfs/src/defaults.c:2531 (libglusterfs.so.0+0x1a4c50)
     #8 default_flush_resume /home/antipov/glusterfs/libglusterfs/src/defaults.c:1815 (libglusterfs.so.0+0x18c8f5)
     #9 call_resume_wind /home/antipov/glusterfs/libglusterfs/src/call-stub.c:1932 (libglusterfs.so.0+0x691af)
     #10 call_resume /home/antipov/glusterfs/libglusterfs/src/call-stub.c:2392 (libglusterfs.so.0+0x8fd19)
     #11 iot_worker /home/antipov/glusterfs/xlators/performance/io-threads/src/io-threads.c:232 (io-threads.so+0x66a2)
     #12 <null> <null> (libtsan.so.0+0x2d33f)

   Previous write of size 8 at 0x7b5800020800 by thread T11 (mutexes: write M16793):
     #0 __inode_get_xl_index /home/antipov/glusterfs/libglusterfs/src/inode.c:453 (libglusterfs.so.0+0x574ac)
     #1 __inode_ref /home/antipov/glusterfs/libglusterfs/src/inode.c:578 (libglusterfs.so.0+0x57a78)
     #2 inode_ref /home/antipov/glusterfs/libglusterfs/src/inode.c:620 (libglusterfs.so.0+0x57c17)
     #3 pl_inode_setlk /home/antipov/glusterfs/xlators/features/locks/src/inodelk.c:782 (locks.so+0x76efe)
     #4 pl_common_inodelk /home/antipov/glusterfs/xlators/features/locks/src/inodelk.c:1050 (locks.so+0x78283)
     #5 pl_inodelk /home/antipov/glusterfs/xlators/features/locks/src/inodelk.c:1094 (locks.so+0x78b67)
     #6 ro_inodelk /home/antipov/glusterfs/xlators/features/read-only/src/read-only-common.c:108 (worm.so+0x3f75)
     #7 ro_inodelk /home/antipov/glusterfs/xlators/features/read-only/src/read-only-common.c:108 (read-only.so+0x38b5)
     #8 default_inodelk /home/antipov/glusterfs/libglusterfs/src/defaults.c:2865 (libglusterfs.so.0+0x1a9708)
     #9 default_inodelk /home/antipov/glusterfs/libglusterfs/src/defaults.c:2865 (libglusterfs.so.0+0x1a9708)
     #10 default_inodelk_resume /home/antipov/glusterfs/libglusterfs/src/defaults.c:2086 (libglusterfs.so.0+0x196ed0)
     #11 call_resume_wind /home/antipov/glusterfs/libglusterfs/src/call-stub.c:1992 (libglusterfs.so.0+0x69c79)
     #12 call_resume /home/antipov/glusterfs/libglusterfs/src/call-stub.c:2392 (libglusterfs.so.0+0x8fd19)
     #13 iot_worker /home/antipov/glusterfs/xlators/performance/io-threads/src/io-threads.c:232 (io-threads.so+0x66a2)
     #14 <null> <null> (libtsan.so.0+0x2d33f)

and I can't grasp the idea behind the locking. In particular, why does
inode_ref() take inode->table->lock rather than inode->lock? In this
example, both __inode_ref() and pl_inode_get() may change inode internals,
but the first assumes the inode table lock is held, while the second only
assumes the lock of the inode itself.
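
Reduced to a minimal sketch (with made-up names, not the real GlusterFS
structures), the pattern TSAN complains about seems to be that the two
paths touch the same per-inode field while holding different mutexes, so
no single lock orders the accesses:

/* Purely illustrative, hypothetical names. */
#include <pthread.h>
#include <stdint.h>

struct toy_table {
    pthread_mutex_t lock;             /* plays the role of inode->table->lock */
};

struct toy_inode {
    struct toy_table *table;
    pthread_mutex_t lock;             /* plays the role of inode->lock */
    uint64_t ctx_slot;                /* stands in for the per-xlator ctx */
};

/* Writer path, like __inode_ref() -> __inode_get_xl_index():
 * runs with only the table lock held. */
static void ref_path(struct toy_inode *inode, uint64_t key)
{
    pthread_mutex_lock(&inode->table->lock);
    if (inode->ctx_slot == 0)
        inode->ctx_slot = key;        /* write under the table lock */
    pthread_mutex_unlock(&inode->table->lock);
}

/* Reader path, like pl_inode_get() -> __inode_ctx_get0():
 * runs with only the inode lock held. */
static uint64_t ctx_path(struct toy_inode *inode)
{
    uint64_t value;

    pthread_mutex_lock(&inode->lock);
    value = inode->ctx_slot;          /* read under the inode lock */
    pthread_mutex_unlock(&inode->lock);
    return value;
}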

Dmitry


Re: Inode vs. inode table locking

Changwei Ge
Hello,


On 8/28/20 6:03 PM, Dmitry Antipov wrote:

> Hello all,
>
> I'm trying to debug and fix one of the possible issues reported by TSAN:
>
> [TSAN data race report snipped; it is quoted in full in the message above]
>
> and I can't grasp the idea behind the locking. In particular, why does
> inode_ref() take inode->table->lock rather than inode->lock? In this
> example, both __inode_ref() and pl_inode_get() may change inode
> internals, but the first assumes the inode table lock is held, while
> the second only assumes the lock of the inode itself.

I once did some work trying to reduce such contention, and the Gluster
developers had several discussion threads back then on both my MR in
Gerrit [1] and the GitHub issue [2]. I think those threads could give
you some answers about the locking order.
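
To illustrate what I mean by a locking order, here is a generic sketch
with made-up names; it is not code from the MR or from GlusterFS itself.
The idea is just that any path which needs both locks takes them in one
fixed order, e.g. the table lock before the inode lock, so two threads
can never end up waiting on each other's lock:

#include <pthread.h>

struct toy_table { pthread_mutex_t lock; };
struct toy_inode { struct toy_table *table; pthread_mutex_t lock; };

/* Every caller that needs both locks goes through this helper, so the
 * acquisition order is always table lock first, inode lock second. */
static void with_both_locks(struct toy_inode *inode,
                            void (*update)(struct toy_inode *))
{
    pthread_mutex_lock(&inode->table->lock);  /* outer lock: table */
    pthread_mutex_lock(&inode->lock);         /* inner lock: inode */
    update(inode);                            /* touch shared fields here */
    pthread_mutex_unlock(&inode->lock);
    pthread_mutex_unlock(&inode->table->lock);
}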

Unfortunately, inodes and their life-cycle management are quite essential,
so they must be handled very carefully, and the change has not been merged.
Still, you can try it and see whether it solves your problem.

        -Changwei

[1]: https://review.gluster.org/#/c/glusterfs/+/23685/
[2]: https://github.com/gluster/glusterfs/issues/715
