SKS apocalypse mitigation

SKS apocalypse mitigation

Andrew Gallagher
Hi, all.

I fear I am reheating an old argument here, but news this week caught my
attention:

https://www.theguardian.com/technology/2018/mar/20/child-abuse-imagery-bitcoin-blockchain-illegal-content

tl;dr: Somebody has uploaded child porn to Bitcoin. That opens the
possibility that *anyone* using Bitcoin could be prosecuted for
possession. Whether this will actually happen or not is unclear, but
similar abuse of SKS is an apocalyptic possibility that has been
discussed before on this list.

I've read Minsky's paper. The reconciliation process is simply a way of
comparing two sets without having to transmit the full contents of each
set. The process is optimised to be highly efficient when the difference
between the sets is small, and gets less efficient as the sets diverge.

Updating the sets on each side is outside the scope of the recon
algorithm, and in SKS it proceeds by a sequence of client pull requests
to the remote server. This is important, because it opens a way to
implement object blacklists in a minimally-disruptive manner.

An SKS server can unilaterally decide not to request any object it likes
from its peers. In combination with a local database cleaner that
deletes existing objects, and a submission filter that prevents them
from being reuploaded, it is entirely technically possible to blacklist
objects from a given system.

The problems start when differences in the blacklists between peers
cause their sets to diverge artificially. The normal reconciliation
process will never resolve these differences and a small amount of extra
work will be expended during each reconciliation. This is not fatal in
itself, as SKS imposes a difference limit beyond which peers will simply
stop reconciling, so the increase in load should be contained.

The trick is to ensure that all the servers in the pool agree (to a
reasonable level) on the blacklist. This could be as simple as a file
hosted at a well known URL that each pool server downloads on a
schedule. The problem then becomes a procedural one - who hosts this,
who decides what goes in it, and what are the criteria?
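
Mechanically, the download-and-apply side could be as small as the
following (a rough Python sketch; the URL, file format and refresh
interval are placeholders, not a proposal):

    import threading, urllib.request

    BLACKLIST_URL = "https://example.org/sks-blacklist.txt"  # placeholder
    REFRESH_SECONDS = 3600

    blacklist = set()

    def refresh_blacklist():
        # Fetch one object hash per line and replace the in-memory set
        # atomically, then reschedule ourselves.
        global blacklist
        with urllib.request.urlopen(BLACKLIST_URL) as resp:
            lines = resp.read().decode("ascii", "replace").splitlines()
        blacklist = {l.strip().lower() for l in lines if l.strip()}
        t = threading.Timer(REFRESH_SECONDS, refresh_blacklist)
        t.daemon = True
        t.start()

    def should_request(h):
        # Consulted during catchup: never pull a blacklisted object.
        return h.lower() not in blacklist

    def accept_submission(h):
        # Consulted on submission: refuse re-uploads of blacklisted objects.
        return h.lower() not in blacklist

    refresh_blacklist()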

It has been argued that the current technical inability of SKS operators
to blacklist objects could be used as a legal defence. I'm not convinced
this is tenable even now, and legal trends indicate that it is going to
become less and less tenable as time goes on.

Another effective method that does not require an ongoing management
process would be to blacklist all image IDs - this would also have many
other benefits (I say this as someone who once foolishly added an
enormous image to his key). This would cause a cliff edge in the number
of objects and, unless carefully choreographed, could result in a mass
failure of recon.

One way to prevent this would be to add the blacklist of images in the
code itself during a version bump, but only enable the filter at some
timestamp well in the future - then a few days before the deadline,
increase the version criterion for the pool. That way, all pool members
will move in lockstep and recon interruptions should be temporary and
limited to clock skew.
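
The gate itself is trivial - something like this (sketch; the
timestamp is illustrative only):

    import time

    # Flag-day timestamp compiled into the release (2018-07-01 00:00 UTC here).
    FILTER_ACTIVE_FROM = 1530403200

    def image_filter_active():
        # Before the flag day every server carries the code but the filter
        # is inert; afterwards all patched servers start filtering within
        # clock skew of one another.
        return time.time() >= FILTER_ACTIVE_FROM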

These two methods are complementary and can be implemented either
together or separately. I think we need to start planning now, before
events take over.

--
Andrew Gallagher


Re: SKS apocalypse mitigation

Yaron Minsky
FWIW, while I'm effectively no longer involved in SKS development, I
do agree that this is a problem with the underlying design, and
Andrew's suggestions all sound sensible to me.

Re: SKS apocalypse mitigation

Andrew Gallagher
On 23/03/18 11:10, Andrew Gallagher wrote:
> Another effective method that does not require an ongoing management
> process would be to blacklist all image IDs

It occurs to me that this would be more wasteful of bandwidth than
blocking objects by their hash, as the server would have to request the
object contents before deciding whether to keep it or not. This is
assuming that recon is calculated on pure hashes with no type hints (I'm
99% sure this is the case, correct me if I'm wrong).

We could minimise this by maintaining a local cache of the hashes of
already-seen image objects. This would be consulted during recon and
submission in the same way as an externally-sourced blacklist.
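
Something like this, say (sketch only - the storage and the point at
which we learn that an object is an image are hand-waved):

    import sqlite3

    db = sqlite3.connect("seen_images.db")
    db.execute("CREATE TABLE IF NOT EXISTS seen_image (hash TEXT PRIMARY KEY)")

    def record_image_object(h):
        # Called when a fetched or submitted object turns out to be an
        # image: remember its hash so we never request it again.
        db.execute("INSERT OR IGNORE INTO seen_image (hash) VALUES (?)",
                   (h.lower(),))
        db.commit()

    def already_seen_image(h):
        # Consulted during recon and submission, like an external blacklist.
        row = db.execute("SELECT 1 FROM seen_image WHERE hash = ?",
                         (h.lower(),)).fetchone()
        return row is not None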

--
Andrew Gallagher


Re: SKS apocalypse mitigation

Daniel Kahn Gillmor-7
On Fri 2018-03-23 11:10:49 +0000, Andrew Gallagher wrote:
> Updating the sets on each side is outside the scope of the recon
> algorithm, and in SKS it proceeds by a sequence of client pull requests
> to the remote server. This is important, because it opens a way to
> implement object blacklists in a minimally-disruptive manner.

as both an sks server operator, and as a user of the pool, i do not want
sks server operators to be in the position of managing a blacklist of
specific data.

> The trick is to ensure that all the servers in the pool agree (to a
> reasonable level) on the blacklist. This could be as simple as a file
> hosted at a well known URL that each pool server downloads on a
> schedule. The problem then becomes a procedural one - who hosts this,
> who decides what goes in it, and what are the criteria?

This is a really sticky question, and i don't believe we have a global
consensus on how this should be done.  I don't think this approach is
feasible.

> Another effective method that does not require an ongoing management
> process would be to blacklist all image IDs - this would also have many
> other benefits (I say this as someone who once foolishly added an
> enormous image to his key). This would cause a cliff edge in the number
> of objects and, unless carefully choreographed, could result in a mass
> failure of recon.
>
> One way to prevent this would be to add the blacklist of images in the
> code itself during a version bump, but only enable the filter at some
> timestamp well in the future - then a few days before the deadline,
> increase the version criterion for the pool. That way, all pool members
> will move in lockstep and recon interruptions should be temporary and
> limited to clock skew.

I have no problems with blacklisting User Attribute packets from sks,
and i like Andrew's suggestion of an implementation roll-out, followed
by a "switch on" date for the filter.  I support this proposal.

I've had no luck getting new filters added to sks in the past [0], so
i'd appreciate if someone who *does* have the skills/time/commit access
could propose a patch for this.  I'd be happy to test it.

      --dkg

[0] see for example https://bitbucket.org/skskeyserver/sks-keyserver/pull-request/20/trim-local-certifications-from-any-handled

Re: SKS apocalypse mitigation

Alin-Adrian Anton
In reply to this post by Andrew Gallagher
Hello,

Horrible topic, but base64-encoded images or similar data could also be smuggled into regular bank transfers.

One could use the image property to store blacklist data, regular expressions, etc. That key would be a regular one, but with a noisy picture. Maybe a web of DIStrust is also a good idea, to vote out bad objects. That means using your private key to flag an object as containing malicious or illegal content, in a protocol which allows you to do that without redownloading the image.

Just some ideas; a very ugly topic anyway, hard to think about, but easy to imagine.

Alin Anton

-- 
Sl.univ.dr.ing. Alin-Adrian Anton
Politehnica University of Timisoara 
Department of Computer and Information Technology
2nd Vasile Parvan Ave., 300223 Timisoara, Timis, Romania


Re: SKS apocalypse mitigation

Kristian Fiskerstrand
[I previously responded to a specific message not related to this thread
but none the less... ]

On 03/23/2018 03:02 PM, Daniel Kahn Gillmor wrote:
> On Fri 2018-03-23 11:10:49 +0000, Andrew Gallagher wrote:
>> Updating the sets on each side is outside the scope of the recon
>> algorithm, and in SKS it proceeds by a sequence of client pull requests
>> to the remote server. This is important, because it opens a way to
>> implement object blacklists in a minimally-disruptive manner.
>
> as both an sks server operator, and as a user of the pool, i do not want
> sks server operators to be in the position of managing a blacklist of
> specific data.

I would definitely agree with this

>
>> The trick is to ensure that all the servers in the pool agree (to a
>> reasonable level) on the blacklist. This could be as simple as a file
>> hosted at a well known URL that each pool server downloads on a
>> schedule. The problem then becomes a procedural one - who hosts this,
>> who decides what goes in it, and what are the criteria?
>
> This is a really sticky question, and i don't believe we have a global
> consensus on how this should be done.  I don't think this approach is
> feasible.
>
>> Another effective method that does not require an ongoing management
>> process would be to blacklist all image IDs - this would also have many
>> other benefits (I say this as someone who once foolishly added an
>> enormous image to his key). This would cause a cliff edge in the number
>> of objects and, unless carefully choreographed, could result in a mass
>> failure of recon.
>>
>> One way to prevent this would be to add the blacklist of images in the
>> code itself during a version bump, but only enable the filter at some
>> timestamp well in the future - then a few days before the deadline,
>> increase the version criterion for the pool. That way, all pool members
>> will move in lockstep and recon interruptions should be temporary and
>> limited to clock skew.
>
> I have no problems with blacklisting User Attribute packets from sks,
> and i like Andrew's suggestion of an implementation roll-out, followed
> by a "switch on" date for the filter.  I support this proposal.
>
I agree with this as well. UATs generally have very limited value, so if
we introduce a filter to skip all UATs I'm all fine with making that a
requirement across servers in the sks-keyservers.net pools. That isn't
something that restricts servers overall, but anyhow...

> I've had no luck getting new filters added to sks in the past [0], so
> i'd appreciate if someone who *does* have the skills/time/commit access
> could propose a patch for this.  I'd be happy to test it.


And here comes at least one of the issues: we're talking about a filter
that responds to a specific alteration, so we need to specify a
specific filter for a specific version and move on from there, which can
be relatively easy given sufficient time.

>
> [0] see for example https://bitbucket.org/skskeyserver/sks-keyserver/pull-request/20/trim-local-certifications-from-any-handled

--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"There is no urge so great as for one man to edit another man's work."
(Mark Twain)



Re: SKS apocalypse mitigation

Phil Pennock
On 2018-03-24 at 19:01 +0100, Kristian Fiskerstrand wrote:
> I agree with this as well, UAT generally have very limited value, so if
> we introduce a filter to skip all UATs I'm all fine with making that a
> requirement across severs in sks-keyservers.net pools. That isn't
> something that restricts servers overall, but anyhow...

We can do this without incompatibility-triggering filters and without
flag days.

We have KDB and PTree.  Add a third DB, Filtered.

The Filtered DB does not store the values, only the keys of things "we
don't want".  The value might record a reason and a date, for debugging.

Treat items in Filtered as part of "what we have" for reconciliation to
find the set difference.  That way you never request them.  Return HTTP
"410 Gone" for attempts to retrieve things which are marked Filtered.
That way clients don't try to authenticate and you just say "that might
have once existed, but no longer does".  Include a custom HTTP header
saying "SKS-Filtered: something".

Then it's a policy change to not accept UATs and to mark them as things
to be filtered out instead, and a clean-up tool to walk the existing DBs
and decide what should be in Filtered.  There will be down-time of some
extent, since SKS doesn't like sharing the DBs.

An SKS version which understands "SKS-Filtered:" headers will add an
entry to its own Filtered DB but _not_ delete stuff already in other
DBs.  It should record "I've seen that some peers are unwilling to
provide this to me, I can mark it as unavailable and include it in the
set of things I won't request in future".
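
And the catchup side, roughly (again a sketch; filtered_db stands in
for the third DB):

    import urllib.request, urllib.error

    def fetch_object(peer, h, filtered_db):
        url = "http://%s:11371/pks/lookup?op=get&search=0x%s" % (peer, h)
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if e.code == 410 and e.headers.get("SKS-Filtered"):
                # The peer won't provide this: mark it unavailable and stop
                # requesting it, but do NOT delete anything we already hold.
                filtered_db[h] = "peer-filtered"
            return None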

Refusing to delete is a protection against someone finding a loophole
where information about other attributes is returned in response for a
request for one attribute, where a bad peer could delete data on your
server.

You won't be asking through reconciliation for something you already
had, thus the deletion prohibition won't be an issue.  You can probably
default to not allowing upload of anything which is in Filtered.  This
provides a DoS opportunity for someone malicious to try to prevent new
signatures flowing.  *shrug*

Each server can update at its own pace, but the pool definitions can be
changed, to encourage a certain pace.  Servers which continue to not
understand 410/SKS-Filtered: will keep asking for keys, becoming more
and more of a burden on others, so there will be incentive to drop
peering before too long.  But returning 410 should be a fast lookup and
the burden not too heavy, so we can afford to give it a 4-6 month window
of interop.

If you want your keys available, always, then take steps to host the
service which makes them available.  WKD, Finger, just an HTTP server,
something.  (Notably, most of these leave a trail of accountability,
unlike the PGP keyservers).  The SKS flood-fill public pool is living on
borrowed time and is not a strategy for continued availability.  We
keyserver operators are running something as a public good, for public
convenience, not operating critical infrastructure.  Disappearance of
public keyservers would be a major inconvenience, but not a disaster.

-Phil


Re: SKS apocalypse mitigation

Andrew Gallagher

> On 25 Mar 2018, at 03:37, Phil Pennock <[hidden email]> wrote:
>
> Disappearance of
> public keyservers would be a major inconvenience, but not a disaster.

Considering that keyservers are currently the only resilient way to distribute key revocations, I’m not sure I would be so sanguine. If I’m hosting my key exclusively on WKD or some other web based service, it would be easy to prevent key revocations from being distributed. Granted, revocation is imperfect at the best of times. But SKS is the best tool we have at the moment, and the ecosystem would be severely damaged without it.

A


Re: SKS apocalypse mitigation

brent s.
On 03/25/2018 07:39 AM, Andrew Gallagher wrote:

>
>> On 25 Mar 2018, at 03:37, Phil Pennock <[hidden email]> wrote:
>>
>> Disappearance of
>> public keyservers would be a major inconvenience, but not a disaster.
>
> Considering that keyservers are currently the only resilient way to distribute key revocations, I’m not sure I would be so sanguine. If I’m hosting my key exclusively on WKD or some other web based service, it would be easy to prevent key revocations from being distributed. Granted, revocation is imperfect at the best of times. But SKS is the best tool we have at the moment, and the ecosystem would be severely damaged without it.
>
> A
>

I strongly and vehemently agree with both sides.


On a more serious note (albeit somewhat off-topic), and admittedly much
less deplorable a consideration - has the topic of copyrighted material
being distributed in keys (notably in the image data) come up at any point?

I suggest the same mechanism used in this approach should also be
applicable to those instances. Under the DMCA in the US, keyserver
operators would be liable for this data (as we would be "distributing"
it) and responsible for its removal for compliance. I presume many other
countries have similar copyright laws/stipulations as well.




(Ironically, many if not all agents for intellectual property
reclamation have PGP keys themselves on our servers, as one of the
stipulations for a DMCA notice's validity per § 512(c)(3)(A) (found here[0]) is
"A[n] ... electronic signature of a person authorized to act on behalf
of the owner of an exclusive right that is allegedly infringed.")


[0] https://www.law.cornell.edu/uscode/text/17/512

--
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info



Re: SKS apocalypse mitigation

Michael Jones

What if the approach were either to have a web of trust whitelisting users able to upload images or, more stringently, to strip all image data?

Is image data essential to operating?

I hardly ever look at the images, and these images could be shared via other means.

The keyservers would continue to operate with keys and revocations, but no image data?

From memory, the image can be removed from any key locally, so there is no reason it could not be removed on submission.

Doesn't solve all the issues, but would prevent malicious use of our servers in a direct manner.
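
As a sketch of what stripping at submission time involves at the
packet level (simplified OpenPGP framing in Python; partial and
indeterminate body lengths are not handled):

    def strip_uats(keydata: bytes) -> bytes:
        # Drop user-attribute packets (tag 17) and the certification
        # signatures (tag 2) that immediately follow them.
        out, i, skipping = bytearray(), 0, False
        while i < len(keydata):
            start, c = i, keydata[i]
            i += 1
            if not c & 0x80:
                raise ValueError("invalid packet header")
            if c & 0x40:                      # new-format header
                tag = c & 0x3F
                l = keydata[i]; i += 1
                if l < 192:
                    blen = l
                elif l < 224:
                    blen = ((l - 192) << 8) + keydata[i] + 192; i += 1
                elif l == 255:
                    blen = int.from_bytes(keydata[i:i+4], "big"); i += 4
                else:
                    raise ValueError("partial lengths unsupported here")
            else:                             # old-format header
                tag = (c >> 2) & 0x0F
                ltype = c & 0x03
                if ltype == 3:
                    raise ValueError("indeterminate length unsupported")
                n = (1, 2, 4)[ltype]
                blen = int.from_bytes(keydata[i:i+n], "big"); i += n
            i += blen                         # jump past the packet body
            if tag == 17:                     # user attribute: drop it
                skipping = True
            elif skipping and tag == 2:       # its trailing sigs: drop too
                pass
            else:
                skipping = False
                out += keydata[start:i]
        return bytes(out)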



Re: SKS apocalypse mitigation

H Visage


> On 26 Mar 2018, at 01:39, Michael Jones <[hidden email]> wrote:
>
> What if the approach were either to have a web of trust whitelisting
> users able to upload images or, more stringently, to strip all image
> data?
>
> Is image data essential to operating?

I'd make the case that in the not-too-distant future we'll actually
*need* to remove them to keep the database size manageable, and
looking at the current size, I'd argue we'd want to remove the images
already.

> I hardly ever look at the images, and these images could be shared
> via other means.

Exactly: send an email, or look at a URL with the signed picture…
---
Hendrik Visage





Re: SKS apocalypse mitigation

Andrew Gallagher
Recent discussion has brought me back to thinking about Phil's
suggestion again.

On 25/03/18 03:37, Phil Pennock wrote:
> Treat items in Filtered as part of "what we have" for reconciliation to
> find the set difference.  That way you never request them.  Return HTTP
> "410 Gone" for attempts to retrieve things which are marked Filtered.
> That way clients don't try to authenticate and you just say "that might
> have once existed, but no longer does".  Include a custom HTTP header
> saying "SKS-Filtered: something".

I don't think we need the custom header - 410 might be sufficient.

> Then it's a policy change to not accept UATs and to mark them as things
> to be filtered out instead, and a clean-up tool to walk the existing DBs
> and decide what should be in Filtered.  There will be down-time of some
> extent, since SKS doesn't like sharing the DBs.

Policy will have to be applied in multiple places. If the local
administrator changes a policy, then we have to walk the database as
above. If we receive a packet (either during catchup or via a
submission) that matches an existing policy, then we add the hash to the
blacklist (with an explanation) and throw away the packet. We also have
to be able to add and delete blacklist entries independently of general
policy.

It would be best if a running SKS was able to dynamically update its
blacklist and policy without having to shut down for maintenance. This
could be as simple as a config file that is reloaded on a schedule.
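
For instance (sketch; the path and the one-hash-per-line format are
arbitrary):

    import os

    BL_PATH = "/etc/sks/blacklist"  # arbitrary path for the sketch
    _state = {"mtime": 0.0, "hashes": set()}

    def current_blacklist():
        # Cheap stat() on every call; reparse only when the file has
        # changed, so the admin can edit it without restarting the server.
        mtime = os.stat(BL_PATH).st_mtime
        if mtime != _state["mtime"]:
            with open(BL_PATH) as f:
                _state["hashes"] = {l.strip().lower() for l in f if l.strip()}
            _state["mtime"] = mtime
        return _state["hashes"]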

> An SKS version which understands "SKS-Filtered:" headers will add an
> entry to its own Filtered DB but _not_ delete stuff already in other
> DBs.  It should record "I've seen that some peers are unwilling to
> provide this to me, I can mark it as unavailable and include it in the
> set of things I won't request in future".

We need to distinguish between "things that we have blacklisted"
(authoritative) and "things that our peers have blacklisted" (cache).

The things that we have blacklisted locally (and presumably deleted) are
treated as present for recon, and "410 Gone" for requests.

The things that our peers have blacklisted (and previously returned 410
Gone for) are treated as present for recon *with that specific peer
only*, but otherwise not treated specially. If we don't have it and have
not locally blacklisted it, we should still request it from other peers
that are willing to serve it. If it violates our own policy then we
blacklist it locally. But we can't take our peer's word for that.

So the reconciliation process against "some-peer.net" operates against
the list of unique hashes from the set:

(SELECT hash FROM local_db) UNION (SELECT hash FROM local_bl) UNION
(SELECT hash FROM peer_bl_cache WHERE peer="some-peer.net")

(If we are in sync with "some-peer.net" then they will have generated
the same set, but with the local_bl and peer_bl_cache roles reversed)

But we only return 410 for incoming requests IFF they match:

(SELECT hash FROM local_bl)

If we receive 410 during catchup, then we add a new entry to the
peer_bl_cache: {hash: xxxxx, peer: "some-peer.net"}. All this should do
is ensure that recon against that particular peer stays in sync - it
should not affect the operation of recon with any other peer, nor of
incoming requests.

Since we are keeping a cache of peer blacklists, we have to allow for
cache invalidation. A remote peer might accidentally add a hash to its
blacklist, only to remove it later. We need to walk the peer_bl_cache at
a low rate in the background and HEAD each item just to make sure that
it still returns 410 - otherwise we clear that peer_bl_cache entry and
let it get picked up (if necessary) in the next recon.
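
The revalidation walk might look like this (sketch; peer_bl_cache is
modelled as a set of (peer, hash) pairs, and the hash-lookup URL is
assumed, in the spirit of HKP's hash queries):

    import urllib.request, urllib.error

    def revalidate(peer, h, peer_bl_cache):
        # HEAD the object on the peer; if it no longer returns 410, the
        # peer has un-blacklisted it, so drop our cache entry and let
        # recon pick it up again.
        url = "http://%s:11371/pks/lookup?op=hget&search=%s" % (peer, h)
        req = urllib.request.Request(url, method="HEAD")
        try:
            urllib.request.urlopen(req)
            still_gone = False
        except urllib.error.HTTPError as e:
            still_gone = (e.code == 410)
        except OSError:
            return  # peer unreachable: leave the cache entry alone
        if not still_gone:
            peer_bl_cache.discard((peer, h))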

I believe the above system should allow for recon to be maintained
separately between peer pairs whose blacklists differ, and for one
server to recon with multiple peers that all have differing blacklists.

---

The first, easier, issue with the above is bootstrapping.

Populating a new SKS server requires a dump of keys to be loaded. This
dump is assumed to be a close approximation to the full set of keys in
the distributed dataset. But with per-node policy restrictions, there is
no such thing as a "full set".

A new server populated by a dump from server A may not even be able to
recon with server A initially, because A's local_bl could be larger than
the maximum difference that the recon algorithm can handle. If A
included a copy of its local_bl with the dump, then the new server can
recon with A immediately. But only with A, because every server's
local_bl will be different.

This problem will extend to any two peers attempting to recon for the
first time. Without a local cache of each other's blacklists, the
difference between the datasets could easily be large enough to
overwhelm the algorithm.

There must therefore be a means of preseeding the peer_bl_cache before
first recon with a new peer. This could be done by fetching a recent
blacklist dump from a standard location.

---

The second, harder, issue with the above is eventual consistency.

We assume that every peer will eventually see every packet at some
point. But it is entirely possible that all of my peers will put in
place policies against (say) photo-ids, and therefore I may never see a
photo-id that was not directly submitted to me - even if I have no such
policy myself. I am effectively firewalled behind my peers' policies.

Which then leads to pool consistency issues. If some peers are trapped
behind a policy firewall, not only will they have missing entries, they
may not ever *know* that they have missing entries. And this can break
in both directions simultaneously, as these peers may also contain extra
entries that the rest of the network will never see.

Without policies, indirect connectivity is sufficient for eventual
consistency. This leads to high latencies but is robust up to the point
of complete severance. But we can see that any policy that impedes the
flow of information across the network will potentially break eventual
consistency.

The only general solution is to alter the peering topology. We need to
get rid of membership restrictions for the pool. Any pool member should
be able to recon with any other pool member, ensuring that all members
see all hashes at least once. This would also have performance benefits
even if we don't implement policy blacklists.

--
Andrew Gallagher



Re: SKS apocalypse mitigation

Kiss Gabor (Bitman)
> The second, harder, issue with the above is eventual consistency.
>
> We assume that every peer will eventually see every packet at some
> point. But it is entirely possible that all of my peers will put in
> place policies against (say) photo-ids, and therefore I may never see a
> photo-id that was not directly submitted to me - even if I have no such
> policy myself. I am effectively firewalled behind my peers' policies.
>
> Which then leads to pool consistency issues. If some peers are trapped
> behind a policy firewall, not only will they have missing entries, they
> may not ever *know* that they have missing entries. And this can break
> in both directions simultaneously, as these peers may also contain extra
> entries that the rest of the network will never see.

Just a historical note.
Folks, have you noticed the similarity between the distribution of
keys and newsfeeds? ("News" was a very popular form of communication
before forums, web 2.0 and high-speed internet access.[1])
News admins had to search for "good" partners if they wanted to get a
rich subset of newsgroups.
At the top of the evolution of news servers you can find INN, with a
lot of sophisticated solutions.
A fraction of the experience from decades of news may be useful here too.

[1] https://en.wikipedia.org/wiki/Usenet

Gabor


Re: SKS apocalypse mitigation

Andrew Gallagher
On 03/05/18 20:18, Gabor Kiss wrote:

>
> Just a historical note.
> Folks, have you noticed the similarity between the distribution of
> keys and newsfeeds? ("News" was a very popular form of communication
> before forums, web 2.0 and high-speed internet access.[1])
> News admins had to search for "good" partners if they wanted to get a
> rich subset of newsgroups.
> At the top of the evolution of news servers you can find INN, with a
> lot of sophisticated solutions.
> A fraction of the experience from decades of news may be useful here too.

Yes, but this was driven by the limitations of pre-Internet networking,
where you couldn't assume that you could connect directly to an
arbitrary news server. The same limitations also resulted in bang-path
email routing.

But email has long since migrated to a direct-connection delivery
paradigm, and for good reason. Sure, the idea that absolutely anybody
can set up a mail server and start opening connections to yours is a
little scary if you're not used to it. But that's how any internet
service works.

AFAICT, the limitation that SKS servers should only recon with known
peers was introduced as a measure against abuse. But it's a pretty
flimsy anti-abuse system considering that anyone can submit or search
for anything over the HKP interface without restriction.

I think all SKS servers should attempt to recon with as many other
servers as they can find. The tools exist to walk the network from a
known starting point or points and enumerate all responsive hosts. Why
not have each SKS server walk the network and update the in-memory copy
of its membership on an ongoing basis? If a previously unknown server
does try to recon, what's the harm? So long as it recons successfully it
should go into the list with all the rest.
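
Roughly (Python sketch; fetch_gossip_peers is a stand-in for however
we'd enumerate a server's gossip partners, e.g. by scraping its stats
page):

    from collections import deque

    def walk_network(seeds, fetch_gossip_peers):
        # Breadth-first walk from the seed hosts, collecting every
        # responsive server; the result periodically replaces the
        # in-memory membership.
        seen, queue = set(seeds), deque(seeds)
        while queue:
            host = queue.popleft()
            try:
                peers = fetch_gossip_peers(host)
            except OSError:
                continue  # unresponsive: skip it, retry on the next walk
            for p in peers:
                if p not in seen:
                    seen.add(p)
                    queue.append(p)
        return seen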

That way the membership file as it exists now is just a starting point,
like the DNS root hints. No more begging on the list for peers. Just
pre-seed your membership file with a selection of the most stable SKS
sites (e.g. the ones coloured blue on the pool status page) and within
an hour you're peering with the entire pool, and them with you.

If any SKS server is found to be abusing trust, then block away. But
let's permit by default and block specific abuse rather than the other
way around. There may be a need for rate-limiting recon at some point,
but I don't think the pool is anywhere near that big yet.

--
Andrew Gallagher



Re: SKS apocalypse mitigation

Kiss Gabor (Bitman)
> I think all SKS servers should attempt to recon with as many other
> servers as they can find. The tools exist to walk the network from a
> known starting point or points and enumerate all responsive hosts. Why
> not have each SKS server walk the network and update the in-memory copy
> of its membership on an ongoing basis? If a previously unknown server
> does try to recon, what's the harm? So long as it recons successfully it
> should go into the list with all the rest.
>
> That way the membership file as it exists now is just a starting point,
> like the DNS root hints. No more begging on the list for peers. Just
> pre-seed your membership file with a selection of the most stable SKS
> sites (e.g. the ones coloured blue on the pool status page) and within
> an hour you're peering with the entire pool, and them with you.

Okay, brain storming in progress. :-)

Keep the similarity to the DNS.
Don't collect millions of unwanted keys in advance.
Wait until a user request comes in, then ask the discovered peers
for the wanted key, merge the results, and send them back to the user.
Also store the key in the local database and provide it to other
key servers if they ask you.

Requests may be "iterative" or "recursive" (the words are stolen from DNS).
Users send a recursive request: "I don't care how many peers
you ask, but give me the key with all its signatures."
A cross-server request is iterative: "Send me what you have, no more."
This is to avoid an endless storm of circulating requests.
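
Sketch of the recursive side (query_iterative and merge_keys are
imaginary helpers):

    def lookup_recursive(key_id, peers, query_iterative, merge_keys):
        # A user's recursive request: ask every discovered peer
        # iteratively ("send me what you have, no more"), merge the
        # results, store them locally, and return them.
        results = []
        for peer in peers:
            blob = query_iterative(peer, key_id)  # never forwarded onward
            if blob is not None:
                results.append(blob)
        return merge_keys(results)  # union of packets and signatures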

How to maintain a pool of servers like the above? How to measure their
quality?
It is more difficult than simply comparing the number of locally
stored keys. There would be a dedicated key, PMK. A monitoring station
issues new signatures 3-4 times a day to a random subset of pool members,
then recursively asks all pool members and aspirants whether they
can retrieve all the new sigs on PMK.

To be continued ... :)

Gabor


Re: SKS apocalypse mitigation

Phil Pennock
On 2018-05-04 at 17:13 +0100, Andrew Gallagher wrote:
> AFAICT, the limitation that SKS servers should only recon with known
> peers was introduced as a measure against abuse. But it's a pretty
> flimsy anti-abuse system considering that anyone can submit or search
> for anything over the HKP interface without restriction.
>
> I think all SKS servers should attempt to recon with as many other
> servers as they can find.

The SKS reconciliation algorithm scales with the size of the difference
between the two key sets.  If you peer with someone with no keys
loaded, it will render your server nearly inoperable.

We've seen this failure mode before.  Repeatedly.  It's part of why I
wrote the initial Peering wiki document.  It's why I walked people
through showing how many keys they have loaded, and is why peering is so
much easier these days: most people who post to sks-devel follow the
guidance and take the hints, and get things sorted out before they post.

This is why we only peer with people we whitelist, and why most people
look for as much demonstration of Clue as they can get before peering,
and it's a large part of why we do see de-peering when actions
demonstrate a lack of trustworthiness.

-Phil


Re: SKS apocalypse mitigation

Andrew Gallagher

> On 5 May 2018, at 08:48, Phil Pennock <[hidden email]> wrote:
>
> If you peer with someone with no keys
> loaded, it will render your server nearly inoperable.

I was aware that recon would fail in this case but not that the failure mode would be so catastrophic. Is there no test for key difference before recon is attempted?

A


Re: SKS apocalypse mitigation

Andrew Gallagher

> On 5 May 2018, at 07:00, Gabor Kiss <[hidden email]> wrote:
>
> Okay, brain storming in progress. :-)

:-)

> Requests may be "iterative" or "recursive" (words are stolen from DNS).
> Users send recursive request: "I don't care how many peers
> you ask, but tell me the key with all signatures."

The DNS has a hierarchical structure that allows the authoritative source for data to be found within a small number of requests, depending on the number of components in the FQDN. There is no such structure in SKS, and no way of knowing that all the information has been found, so the *best* case scenario is that every server has to be polled for every request.

> How to maintain a pool of servers like above? How to measure their
> quality?

Sorry, my use of “pool” was inaccurate. I meant to refer to all connected and responsive servers. “Graph” is maybe the better term.

A


Re: SKS apocalypse mitigation

Phil Pennock
On 2018-05-05 at 08:53 +0100, Andrew Gallagher wrote:
> > On 5 May 2018, at 08:48, Phil Pennock <[hidden email]> wrote:
> > If you peer with someone with no keys
> > loaded, it will render your server nearly inoperable.
>
> I was aware that recon would fail in this case but not that the failure mode would be so catastrophic. Is there no test for key difference before recon is attempted?

It's the calculation of the key difference which is the problem.  That's
what recon is.

Recon figures out the difference in the keys present.  It's highly
efficient for reasonable deltas in key counts.  Yaron Minsky wrote
papers on the topic, leading to his academic degree; they're linked
from:
  https://bitbucket.org/skskeyserver/sks-keyserver/wiki/Home

After recon figures out what the local server needs, it then requests
those keys using HKP.

While you could modify the protocol to do something like announce a
key-count first, that's still only protection against accidental
misconfiguration: worthwhile and a nice-to-have if there's ever an
incompatible protocol upgrade anyway, to have a safety auto-cutoff to
back up the manual checks people do, but not protection against malice.
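
Such a cut-off might be as simple as (sketch; the threshold is
invented):

    def safe_to_recon(my_count, peer_count, threshold=50000):
        # Refuse to start recon when the advertised key counts differ so
        # much that computing the set difference would swamp both sides.
        # Guards against an empty or barely-loaded peer, not malice.
        return abs(my_count - peer_count) <= threshold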

Fundamentally, reconciliation between datasets requires computation.
You can add safety cut-offs, and rate-limits per IP and CPU limits per
request and various other things, but none of those help if you're
trying to protect the keyservers from a half of the apocalypse
scenarios.

-Phil


Re: SKS apocalypse mitigation

Andrew Gallagher

> On 5 May 2018, at 09:03, Phil Pennock <[hidden email]> wrote:
>
> While you could modify the protocol to do something like announce a
> key-count first, that's still only protection against accidental
> misconfiguration

That’s exactly what I’m talking about. Since the majority of the problems that you have experienced seem to be caused by people not setting it up correctly, it would appear to be sensible, and it’s such a simple thing that I’m surprised it hasn’t been implemented.

Yes, of course a malicious actor can take down an sks server, but you don’t need recon to do it...

A
