OSError: [Errno 24] Too many open files


OSError: [Errno 24] Too many open files

duplicity-talk mailing list
Date: Tue, 21 Jul 2020 11:19:21 +0200

>On 21.07.2020 10:46, Diagon via Duplicity-talk wrote:
>> Date: Mon, 20 Jul 2020 11:48:04 +0200
>>
>>> On 20.07.2020 07:16, Diagon via Duplicity-talk wrote:
>>>> Ubuntu 16.04, and yes I'm still running 0.7.19. Everything was fine until my cron-scheduled backups stopped without warning on June 6th, as I just discovered.
>>>>
>>>> Command line that looks something like:
>>>>
>>>> PASSPHRASE="xxx" duplicity --log-file /home/me/duplicity.log --backend-retry-delay 60 --asynchronous-upload --name Remote --volsize 50 --full-if-older-than 6M --exclude '**.lock' /home/dev/mydir sftp://[hidden email]/Backup
>>>>
>>>> Error below. Any way around this until I get a new OS installed?
>>>>
>>>> Thanks -
>>>>
>>>> /usr/lib/python2.7/dist-packages/Crypto/Cipher/blockalgo.py:141: FutureWarning: CTR mode needs counter parameter, not IV
>> SNIP
>>>>   File "/usr/lib/python2.7/dist-packages/duplicity/gpginterface.py", line 374, in run
>>>>     create_fhs, attach_fhs)
>>>>   File "/usr/lib/python2.7/dist-packages/duplicity/gpginterface.py", line 402, in _attach_fork_exec
>>>>     pipe = os.pipe()
>>>> OSError: [Errno 24] Too many open files
>>
>> For now you can work around the issue by raising the open files limit via ulimit:
>> https://linuxhandbook.com/ulimit-command/
>>
>> Is that on the server or on the client that I have to raise that limit?

> on the box duplicity is running on.
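
A minimal sketch of how that could look on the client, assuming bash (the numbers are illustrative; defaults vary by distro):

$ ulimit -n             # show the current soft limit for open files
$ ulimit -n 4096        # raise it for this shell and its children
$ duplicity ...         # then start the backup from that same shell

Note the raised limit only applies to the shell it was set in and its children, so for a cron job the ulimit call has to go into the script that cron runs.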

>>> Would you mind posting a 'duplicity collection-status ...' of your backup? I would guess you have some really very long chain in there. Maybe it is time to do a new full?
>>
>> I'm backing up this directory every 20 minutes, but almost all of the time it's only one file changing. So it looks like the following:
>>
>> $ duplicity collection-status sftp://[hidden email]/Backup
>> /usr/lib/python2.7/dist-packages/Crypto/Cipher/blockalgo.py:141: FutureWarning: CTR mode needs counter parameter, not IV
>> self._cipher = factory.new(key, *args, **kwargs)
>> Last full backup date: Wed May 13 22:20:03 2020
>> Collection Status
>> -----------------
>> Connecting with backend: BackendWrapper
>> Archive dir: /home/me/.cache/duplicity/xxxxxxxxxxxxxxxxxxxxxx
>>
>> Found 0 secondary backup chains.
>>
>> Found primary backup chain with matching signature chain:
>> -------------------------
>> Chain start time: Wed May 13 22:20:03 2020
>> Chain end time: Mon Jun 15 17:40:06 2020
>> Number of contained backup sets: 1007
>> Total number of contained volumes: 1052
>> Type of backup set:          Time:                        Num volumes:
>> Full                         Wed May 13 22:20:03 2020               46
>> Incremental                  Thu May 14 03:40:04 2020                1
>>
>> <That last line, with different date/times and Num volumes = 1, repeats 1006 times>

> How do you manage to do 1000+ incrementals between 14 May and today (21 July)?

> 1000+ incrementals is a long chain. Consider doing a new full, and generally do them monthly or so.
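
For reference, a new full is just the 'full' action with the same source and target; a minimal untested sketch, reusing the essential options from the command line above (retry/exclude options omitted for brevity):

$ PASSPHRASE="xxx" duplicity full --log-file /home/me/duplicity.log --name Remote --volsize 50 /home/dev/mydir sftp://[hidden email]/Backup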

Alright, I'll do that, but this might not be workable in the long term. Except for one file, most things in that directory change only quite irregularly. The directory is large enough and my bandwidth small enough that doing a full every month would be onerous; but I do need to track changes carefully. So I run an incremental every 20 minutes.

/D


Re: OSError: [Errno 24] Too many open files

duplicity-talk mailing list
Hi Diagon,

For each incremental file open, duplicity will use up at least 3 file descriptors. Yes, if you increase your 'ulimit -n' number you can get more incrementals, at a cost. Each incremental is a potential point of failure, and with the number of incrementals you have it's only a matter of time until one of them is corrupted by normal errors (network, disk, memory). Consumer-grade electronics normally have minimal to no error checking, especially in memory, so if a bit flips in main memory, hard drive memory, or along a router chain, you may never know. Granted, the chances of an error are exceedingly low, one in trillions of operations, but still there. So it's a balance of risk vs benefit. Here's hoping it's always a benefit.
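
For a rough sense of scale, assuming the 3 descriptors per incremental above: the 1007 backup sets in that chain would need roughly 1007 x 3 ≈ 3000 open files at once, well past the common default soft limit of 1024, so the Errno 24 is exactly what one would expect.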

...Ken


On Tue, Jul 21, 2020 at 1:40 PM Diagon via Duplicity-talk <[hidden email]> wrote:
> SNIP


Re: OSError: [Errno 24] Too many open files

duplicity-talk mailing list
What you can do, unless you need all 1000 versions of the file, is to base each incremental off the full. You may have to write a script to move some file names around. As it stands, if any of those 1000 incremental files becomes corrupted, then all subsequent backups of that file will be gone.

You could also do versions where you rebase the incrementals at some point in the past (i.e. last week). Again, if you really need all those changes, then maybe duplicity is not a good logging system.
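
Related housekeeping, though not the renaming script Scott describes: once a fresh full exists, the old 1000-link chain can be dropped with duplicity's standard cleanup action, e.g. (untested sketch):

$ duplicity remove-all-but-n-full 1 --force sftp://[hidden email]/Backup

Running it without --force first is a safe preview, since without --force it only lists what would be deleted.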

-Scott


> On Jul 21, 2020, at 2:40 PM, Diagon via Duplicity-talk <[hidden email]> wrote:
>
> Date: Tue, 21 Jul 2020 11:19:21 +0200
>
>> How do you manage to do 1000+ incrementals between 14 May and today (21 July)?
>
>> 1000+ incrementals is a long chain. Consider doing a new full, and generally do them monthly or so.
>
> Alright, I'll do that, but this might not be workable in the long term. Except for one file, most things in that directory change only quite irregularly. The directory is large enough and my bandwidth small enough that doing a full every month would be onerous; but I do need to track changes carefully. So I run an incremental every 20 minutes.



Re: OSError: [Errno 24] Too many open files

duplicity-talk mailing list
On Tuesday, 21.07.2020 at 17:06 -0400, Scott Hannahs via Duplicity-talk wrote:
> What you can do, unless you need all 1000 versions of the file, is to base each incremental off the full

I'd use git and back up only the .git subdirectory.

Let cron do a git commit --all every 20 minutes, or even every single minute.
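
A minimal crontab sketch of that idea (untested; the path and commit message are assumptions, and git add -A is used so newly created files get committed too):

*/20 * * * * cd /home/dev/mydir && git add -A && git commit -q -m autosnapshot || true

The trailing '|| true' keeps cron from mailing an error every run where nothing changed, since git commit exits non-zero when there is nothing to commit.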




Re: OSError: [Errno 24] Too many open files

duplicity-talk mailing list
hey Ken,

On 21.07.2020 21:20, Kenneth Loafman via Duplicity-talk wrote:
> For each incremental file open, duplicity will use up at least 3 file
> descriptors. Yes, if you increase your 'ulimit -n' number you can get more
> incrementals, at a cost.

Wasn't that fixed a while back? Or to rephrase: why are the pipes kept open after gpg has decrypted the rsync data to be applied to the previous state?

..ede


Re: OSError: [Errno 24] Too many open files

duplicity-talk mailing list
ede,

It's fixed for most uses. As soon as we get to the end of a gpg file, we issue a waitpid(), which harvests the file info. If you have incrementals that keep the same set of files open across all the incrementals, you won't get to the end until all are processed. This sounds like his use case, a true edge case.
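
If anyone wants to watch that happen: on Linux, counting the entries under /proc/<pid>/fd while a long chain is being processed should show the descriptor count climbing, e.g. (assuming a single duplicity process is running):

$ ls /proc/$(pgrep -f duplicity | head -n1)/fd | wc -l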

...Ken


On Wed, Jul 22, 2020 at 5:02 AM edgar.soldin--- via Duplicity-talk <[hidden email]> wrote:
> SNIP

_______________________________________________
Duplicity-talk mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/duplicity-talk