New Keyservers and Dumps


New Keyservers and Dumps

Eric Germann
I’ve reworked the keyserver fleet we’d previously deployed and made a blog post [1] about it.  If you’d peered with me before, those peerings have most likely been cleaned out as I diversified the fleet across different cities and rebuilt the servers.  They are TLS enabled, but just with a standard cert, not an SKS-signed cert.  PGP on a Mac seems to work fine.  I’d be curious about reports from other clients on hkps issues or successes.
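For anyone who wants to report back: a quick client-side check of hkps might look like the following (the hostname here is a placeholder, not one of the actual servers; take a real one from the blog post). A certificate or TLS problem will surface as a dirmngr error.

```shell
# Search for a key over hkps; TLS/certificate failures show up as dirmngr errors
gpg --keyserver hkps://keyserver.example.net --search-keys someone@example.org
```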

I’m also providing nightly dumps of the PGP database blogged about here [2].

Any questions or peering requests can be sent to [hidden email] (PGP ID 0x55D89385152D11CD3B930C39495C22B395C821E4)

Thanks

EKG

[1] https://7layers.semperen.com/content/pgp-keyservers-available-production
[2] https://7layers.semperen.com/content/pgp-keyserver-dumps-now-available

_______________________________________________
Sks-devel mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/sks-devel


Re: New Keyservers and Dumps

Kristian Fiskerstrand-6
On 08/20/2018 03:26 PM, Eric Germann wrote:
> I’ve reworked the keyserver fleet we’d previously deployed and made a blog post [1] about it.

Are the servers clustered in any way? In my experience each site needs
at least 3 nodes to ensure proper operation (mainly so that if A and B
are gossiping, C can still respond to requests; depending on the amount
of traffic and the speed of the nodes, more is better).

So a clustered setup is more important than a large number of individual
servers, as there is no retry functionality in dirmngr.

I'm still looking for more clustered setups to include in the hkps pool,
in particular since noticing an interesting quirk when only one server
is included: it disables pool behavior in dirmngr and results in a TLS
error / generic error due to the CA PEM not being loaded...
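For context, the hkps pool uses its own CA, which clients have to be pointed at; a typical dirmngr configuration line looks like the following (the file path is an assumption and depends on where the CA certificate was saved):

```
# ~/.gnupg/dirmngr.conf
hkp-cacert /path/to/sks-keyservers.netCA.pem
```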

--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"We all die. The goal isn't to live forever, the goal is to create
something that will."
(Chuck Palahniuk)



Re: New Keyservers and Dumps

Eric Germann
Since I’ve been rolling these myself, I didn’t know a 3-node cluster was best.

As for the 3, whether putting them behind an LB or doing round-robin, how would the LB or the client know there was a failure on one node and move on in the cluster?  Most setups I’ve seen with multiple boxes use two IPs behind a CNAME doing round-robin DNS.

FWIW, no one has complained, so I’m not too sure it’s an issue, at least for now.

I do notice I frequently end up with a significant number of them in the hkp pool.  They do run hkps on Let’s Encrypt certs and seem to sync fine, at least with GPG Suite.

Do you have a best-practices deployment doc?  It’s pretty much been trial by fire.  For example, killing the daemon gives you about a 50% chance of blowing up the db.  For the longest time I rebuilt from scratch, not knowing an “sks cleandb” would fix it 99% of the time.
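For what it’s worth, the recovery described here can be sketched as follows (the service names and basedir are assumptions; adjust to your install):

```shell
# Stop the daemons cleanly instead of killing them, to avoid corrupting the BDB
systemctl stop sks sks-recon

# If the database did get wedged, cleandb repairs it most of the time
cd /var/lib/sks   # assumed SKS basedir
sks cleandb

systemctl start sks sks-recon
```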

Docs seem a bit thin.  I was trying to raise the pool count since a lot of servers seem to have gone by the wayside, adding some geo-diversity by running one in Africa.  Not sure if there are any others down there.

It’s an interesting experiment.  If it’s an issue let me know and I will shut some or all of it down.

EKG


> On Aug 23, 2018, at 9:49 AM, Kristian Fiskerstrand <[hidden email]> wrote:
>
> On 08/20/2018 03:26 PM, Eric Germann wrote:
>> I’ve reworked the keyserver fleet we’d previously deployed and made a blog post [1] about it.
>
> Are the servers clustered in any way? In my experience each site needs
> at least 3 nodes to ensure proper operation (mainly if A and B are
> gossipping C can still respond to requests, depending on the amount of
> traffic / speed of the node to return more is better)
>
> So clustered setup is more important than large number of individual
> servers, as there is no retry functionality in dirmngr.
>
> I'm still looking for more clustered setups to include into hkps pool,
> in particular since noticing an interesting feature if only one server
> is included, which disables pool behavior in dirmngr and results in TLS
> error / generic error due to CA pem not being loaded...


Clustering (Was: New Keyservers and Dumps)

Kiss Gabor (Bitman)
In reply to this post by Kristian Fiskerstrand-6
On Thu, 23 Aug 2018, Kristian Fiskerstrand wrote:

> Are the servers clustered in any way? In my experience each site needs
> at least 3 nodes to ensure proper operation (mainly if A and B are
> gossipping C can still respond to requests, depending on the amount of
> traffic / speed of the node to return more is better)
>
> So clustered setup is more important than large number of individual
> servers, as there is no retry functionality in dirmngr.

A question:
Does an SKS cluster need separate storage for each node,
or can the nodes share the database?

Cheers

Gabor
--
A mug of beer, please. Shaken, not stirred.


Re: Clustering (Was: New Keyservers and Dumps)

Michael Jones
I've set up my cluster with separate filesystems, as I believe locks are
created on the BDB, so the SKS instances would lock each other out if they
shared storage; otherwise I would have used NFS or Gluster.

Kind Regards,
Mike


On 24/08/18 10:36, Gabor Kiss wrote:

> On Thu, 23 Aug 2018, Kristian Fiskerstrand wrote:
>
>> Are the servers clustered in any way? In my experience each site needs
>> at least 3 nodes to ensure proper operation (mainly if A and B are
>> gossipping C can still respond to requests, depending on the amount of
>> traffic / speed of the node to return more is better)
>>
>> So clustered setup is more important than large number of individual
>> servers, as there is no retry functionality in dirmngr.
> A question:
> Does an SKS cluster need multiple storage space,
> or nodes can share the database?
>
> Cheers
>
> Gabor



Re: Clustering (Was: New Keyservers and Dumps)

Kristian Fiskerstrand-6
In reply to this post by Kiss Gabor (Bitman)
On 08/24/2018 11:36 AM, Gabor Kiss wrote:
> A question:
> Does an SKS cluster need multiple storage space,
> or nodes can share the database?

The DB/storage needs to be separate, but it doesn't require multiple VMs,
although I tend to just spin up a new one for each node.

--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"My father used to say: ‘Don’t raise your voice, improve your argument.’"
(Desmond Tutu)



Re: New Keyservers and Dumps

Kristian Fiskerstrand-6
In reply to this post by Eric Germann
On 08/23/2018 11:49 PM, Eric Germann wrote:
> Since I’ve been rolling these myself, I didn’t know a 3 node cluster was best.
>
> As for the 3, if either putting them behind a LB or doing round-robin, how would the LB or the client know there was a failure on one and move on in the cluster.  Most I’ve seen with multiple (??) boxes use two IP’s behind a CNAME doing RR DNS.

It hops to another server after a timeout or a 5xx response from
upstream, e.g. (nginx):

upstream sks_servers
{
        server 192.168.0.55:11372 weight=5;
        server 192.168.0.61:11371 weight=10;
        server 192.168.0.36:11371 weight=10;
}

Adding a cache on the LB further improves responses, as discussed previously.
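A minimal caching sketch for the nginx LB above (the cache path, zone name, and validity times are assumptions, though a 10-minute cache is suggested elsewhere in this thread):

```nginx
proxy_cache_path /var/cache/nginx/sks levels=1:2 keys_zone=sks_cache:10m
                 max_size=1g inactive=10m;

server {
    listen 11371;
    location / {
        proxy_pass http://sks_servers;
        proxy_cache sks_cache;
        proxy_cache_valid 200 10m;
        # Fail over to the next upstream on errors, timeouts, or 5xx
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
    }
}
```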
>
> FWIW, no one has complained, so not too sure it’s an issue, at least for now.

I get all the complaints, as people say the pool isn't working.

>
> I do notice I frequently end up with a significant number of them in the hkp pool.  They do run hkps on LetsEncrypt certs and seem to sync fine, at least to GPGSuite.

Most traffic goes to the hkps pool these days anyway, since that is the
default in GnuPG.

>
> Do you have a best-practices deployment doc, because it’s pretty much been trial by fire.  For example, killing the daemon gives you about a 50% chance of blowing up the db.  For the longest time I rebuilt, not knowing an “sks cleandb” would fix it 99% of the time.

There are very few scenarios where you would kill the daemon, though. The
archive of the mailing list has many discussions, and
https://bitbucket.org/skskeyserver/sks-keyserver/wiki/Peering gives
good pointers.

>
> Docs seem a bit thin.  I was trying to up pool count since a lot seem to have gone by the wayside, adding some geo-diversity and running one in Africa.  Not sure if there are any others down there.
>
> It’s an interesting experiment.  If it’s an issue let me know and I will shut some/it down.
>

It's not an issue, but in practice it doesn't necessarily add much value
either; more clustered setups are more important to the ecosystem than
even more individual servers.

> EKG
>
>
>> On Aug 23, 2018, at 9:49 AM, Kristian Fiskerstrand <[hidden email]> wrote:
>>
>> On 08/20/2018 03:26 PM, Eric Germann wrote:
>>> I’ve reworked the keyserver fleet we’d previously deployed and made a blog post [1] about it.
>>
>> Are the servers clustered in any way? In my experience each site needs
>> at least 3 nodes to ensure proper operation (mainly if A and B are
>> gossipping C can still respond to requests, depending on the amount of
>> traffic / speed of the node to return more is better)
>>
>> So clustered setup is more important than large number of individual
>> servers, as there is no retry functionality in dirmngr.
>>
>> I'm still looking for more clustered setups to include into hkps pool,
>> in particular since noticing an interesting feature if only one server
>> is included, which disables pool behavior in dirmngr and results in TLS
>> error / generic error due to CA pem not being loaded...

--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"The laws of Australia prevail in Australia, I can assure you of that.
The laws of mathematics are very commendable, but the only laws that
applies in Australia is the law of Australia."
(Malcolm Turnbull, Prime Minister of Australia).



Re: Clustering (Was: New Keyservers and Dumps)

Kiss Gabor (Bitman)
In reply to this post by Kristian Fiskerstrand-6
> > Does an SKS cluster need multiple storage space,
> > or nodes can share the database?
>
> the DB/storage needs to be separate, but it doesn't require multiple VMs

Unfortunately, disk space is the bottleneck for me.
However, I will consult my colleagues.

Thanks.

Gabor


Re: Clustering (Was: New Keyservers and Dumps)

Alain Wolf-2
In reply to this post by Kristian Fiskerstrand-6
Hi

On 24.08.2018 at 14:36, Kristian Fiskerstrand wrote:
> On 08/24/2018 11:36 AM, Gabor Kiss wrote:
>> A question:
>> Does an SKS cluster need multiple storage space,
>> or nodes can share the database?
>
> the DB/storage needs to be separate, but it doesn't require multiple VMs
> although I tend to just spin up a new one for each node.
>

So to clarify: I run an Ubuntu Server 18.04 machine, and assuming I have
100+ GB of free disk space:

1) I make two additional copies of /var/lib/sks (22GB as of today).

2) I give them each a nodename in sksconf, but leave the hostname as
   it is.

3) I peer all of them with each other in their membership files.

4) I somehow convince systemd to run three instances of sks and
   sks-recon, each with its own working-dir.

5) I tell my Nginx to proxy all three of them.

6) I ask around for peers to my two new instances.
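Step 4 above might be sketched with a systemd template unit (the unit name, user, and paths are assumptions, not the actual package layout; a matching sks-recon@.service running `sks recon` would be needed as well):

```ini
# /etc/systemd/system/sks@.service (hypothetical template)
[Unit]
Description=SKS keyserver instance %i
After=network.target

[Service]
User=debian-sks
# Each instance gets its own basedir, e.g. /var/lib/sks-node1
WorkingDirectory=/var/lib/sks-%i
ExecStart=/usr/sbin/sks db
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The three instances would then be started with something like `systemctl start sks@node1 sks@node2 sks@node3`.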


A) Is that it?

B) Would this be useful?


Note 1:
I only have a single external IPv4 address, but a delegated IPv6 prefix. So
IPv4 recon will be limited to one of the three instances.

Note 2:
My server is not in the HKPS pool, and probably will not be in the
foreseeable future.


P.S.

Also, if this is so important, I suggest a description in the SKS Wiki,
similar to what we have for Peering and DumpingKeys.

Also, I find it a bit confusing that the SKS website talks about
load balancing while this thread talks about clustering.


Regards
Alain


--
pgpkeys.urown.net 11370 # <[hidden email]> 0x27A69FC9A1744242



Re: Clustering (Was: New Keyservers and Dumps)

Kristian Fiskerstrand-6


[Sent from my iPad, as it is not a secured device there are no cryptographic keys on this device, meaning this message is sent without an OpenPGP signature. In general you should *not* rely on any information sent over such an unsecure channel, if you find any information controversial or un-expected send a response and request a signed confirmation]

> On 26 Aug 2018, at 18:44, Alain Wolf <[hidden email]> wrote:
>
> Hi
>
> Am 24.08.2018 um 14:36 wrote Kristian Fiskerstrand:
>> On 08/24/2018 11:36 AM, Gabor Kiss wrote:
>>> A question:
>>> Does an SKS cluster need multiple storage space,
>>> or nodes can share the database?
>>
>> the DB/storage needs to be separate, but it doesn't require multiple VMs
>> although I tend to just spin up a new one for each node.
>>
>
> So to clarify, I run a Ubuntu-server 18.04 and assuming I have 100+ GB
> of free disk-space:
>
> 1) I make two additional copies of /var/lib/sks (22GB as of today).
>
> 2) I give them each a nodename in sksconf, but leave the hostname as
>   it is.
>

Right.. obviously the ports also need to be distinct.

> 3) I peer all of them with each other in their membership files.
>
> 4) I somehow convince systemd to run three instances of sks and
>   sks-recon, each with its own working-dir.
>
> 5) I tell my Nginx to proxy all three of them.
>
> 6) I ask around for peers to my two new instances.
>
>
> A) Is that it?

Yup.. that is pretty much it. I also recommend a 10-minute cache on the load balancer.

>
> B) Would this be useful?
>
Very much so.. that should be much more reliable.
>
> Note 1:
> I only one single external IPv4-Address, but a delegated IPv6 prefix. So
> IPv4 recon will be limited to one of the three instance.

That is what I use myself: one primary doing external gossiping, and each slave gossiping only with the master. One reason for this is that you don’t want slaves gossiping with others, as that reduces the time they are available to respond, and you always want at least one node responding.
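The topology described above can be sketched in the SKS membership files (the hostnames and ports here are hypothetical):

```
# membership on the master (node1): external peers plus the local slaves
peer.example.org 11370
node2.internal 11372
node3.internal 11374

# membership on each slave (node2, node3): only the local master
node1.internal 11370
```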

Re: Clustering

Fabian A. Santiago
On 2018-08-27 03:18 AM, Kristian Fiskerstrand wrote:

>>> the DB/storage needs to be separate, but it doesn't require multiple
>>> VMs
>>> although I tend to just spin up a new one for each node.
>
> [...]
>
> That is what I use myself.. one primary doing external gossipping..
> each slave only gossip with master.. one reason for this is you don’t
> want slaves gossiping with others as that reduces time it is available
> for respons and you always want at least one node responding.

Really... you view an SKS cluster as nothing more than multiple
instances running on one server? Interesting. Would there be any
advantage to using multiple servers/VMs, or would that then be
overkill?

--
Fabian S.

OpenPGP:

0x643082042DC83E6D94B86C405E3DAA18A1C22D8F (new key)

***

0x3C3FA072ACCB7AC5DB0F723455502B0EEB9070FC (to be retired, still valid)


Re: Clustering

Kristian Fiskerstrand-6
On 08/27/2018 02:43 PM, Fabian A. Santiago wrote:
> really....you view an sks cluster to be nothing more than multiple
> instances running on one server? interesting....would there be any
> advantage to using multiple servers / vm's? or would that then be overkill?

There would be the usual advantages during other outages, e.g. during a
system upgrade, but for the purposes we're discussing it just needs
to be multiple instances.

--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
Potius sero quam numquam
Better late than never
