Re: [Sks-devel] SKS intermittently stalls with 100% CPU & rate-limiting


From: Moritz Wirth
Subject: Re: [Sks-devel] SKS intermittently stalls with 100% CPU & rate-limiting
Date: Sun, 17 Jun 2018 12:33:41 +0200

I have an idea about this; however, I am not sure that this is still the
same problem.

The spider that checks the availability of the keyservers requests
/pks/lookup?op=get&search=0x16e0cf8d6b0b9508 - which returns the
problematic key (just look it up...).

I am not sure that this is the actual problem, but imagine that the
request for that key causes massive load: the request is not answered and
your keyserver is kicked out of the pool.
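
If you want to check that theory against your own server, a rough sketch
like the one below should show whether that single lookup takes unusually
long or times out (Python; it assumes the lookup is reachable on the
standard HKP port 11371 - adjust host, port and timeout to your setup):

    import time
    import urllib.request

    # The lookup the pool spider is reported to make (host/port are only
    # an example - point this at your own server).
    URL = ("http://ams.sks.heypete.com:11371/pks/lookup"
           "?op=get&search=0x16e0cf8d6b0b9508")

    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=60) as resp:
            body = resp.read()
        elapsed = time.monotonic() - start
        print(f"HTTP {resp.status}: {len(body)} bytes in {elapsed:.1f}s")
    except Exception as exc:
        elapsed = time.monotonic() - start
        print(f"request failed after {elapsed:.1f}s: {exc}")

If that request regularly takes on the order of a minute (or fails
outright), it would match the spider timing out and the server being
dropped from the pool.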


On 17.06.18 at 05:53, Pete Stephenson wrote:
> Thanks.
>
> I then have three more questions:
>
> 1. If this issue is affecting my server to the point of it being booted
> from the pool (since it's stalling near-continuously and can't respond
> to queries), why are other servers not being similarly affected? There
> are lots of servers still in the pool.
> 2. Is there some countermeasure one can use to protect their server? I
> have LimitRequestBody set to 8000000 (8MB) to prevent blatant abuse, but
> clearly something is still annoying the server.
> 3. Any suggestions on how to deal with the unreasonably high-speed
> queries from corporate mail systems? Ideally, they'd run their own
> server locally to handle their huge amount of queries, but I have no
> real way of communicating that with them. I'd love to slow down their
> queries (tarpitting, maybe?) to minimize excess resource consumption
> while still answering their queries as opposed to just cutting them off
> once they hit a rate limit.
>
> Cheers!
> -Pete
>
> On 6/16/2018 5:47 PM, Moritz Wirth wrote:
>> Hi,
>>
>> seems like that is the "problem":
>>
>> https://bitbucket.org/skskeyserver/sks-keyserver/issues/60/denial-of-service-via-large-uid-packets
>> https://bitbucket.org/skskeyserver/sks-keyserver/issues/57/anyone-can-make-any-pgp-key-unimportable
>>
>> Best regards,
>>
>> Moritz
>>
>> On 17.06.18 at 02:18, Pete Stephenson wrote:
>>> Hi all,
>>>
>>> My server, ams.sks.heypete.com, has been suffering from periods where
>>> the amount of CPU used by the sks process goes to 100% for a few minutes
>>> at a time. During this time, my Apache reverse proxy produces errors of
>>> the following type (client IP address obfuscated for their privacy):
>>>
>>> [Sun Jun 17 00:00:31.414596 2018] [proxy:error] [pid 4648:tid
>>> 139657505371904] [client CLIENT_IP:40327] AH00898: Error reading from
>>> remote server returned by /pks/lookup
>>>
>>> This happens across a range of client IP addresses, so it doesn't appear
>>> to be a single malicious user. Rather, it seems that something is
>>> causing the sks process to stall and connections to it time out.
>>>
>>> After a minute or two, CPU usage drops back to its normal range of a few
>>> percent up to 15%, with queries being promptly answered until the CPU
>>> usage spikes again and things stall out.
>>>
>>> The server is in close sync with its peers, with no particular issues on
>>> the recon side.
>>>
>>> Any ideas what might be causing this? I'm running 1.1.6 on Debian, and
>>> things have generally been working well for several years. For good
>>> measure, I recently deleted the key database and recreated it from a
>>> fresh dump, but that had no effect.
>>>
>>> Potentially related: several clients, evidently corporate mail servers
>>> that query the SKS pool for every email they send or receive, are making
>>> dozens of queries per second to my server. Is it reasonable to impose
>>> rate limits on such clients (e.g. no more than X queries in Y seconds)?
>>> If so, what would reasonable values be for X and Y?
>>>
>>> Thank you.
>>>
>>> Cheers!
>>> -Pete
>>>
>>
>>
>> _______________________________________________
>> Sks-devel mailing list
>> address@hidden
>> https://lists.nongnu.org/mailman/listinfo/sks-devel
>>
>
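
Regarding the rate-limiting question (question 3 above): as far as I know,
SKS itself has no per-client limit, so any limit has to sit in front of it
(in Apache, a firewall, or a small shim). As a rough illustration of the
"no more than X queries in Y seconds" idea, here is a minimal
sliding-window counter per client IP in Python - the names and the
10-requests-per-10-seconds values are placeholders, not recommendations:

    import time
    from collections import defaultdict, deque

    # Placeholder policy: at most MAX_REQUESTS per client IP per WINDOW seconds.
    MAX_REQUESTS = 10
    WINDOW = 10.0

    _history = defaultdict(deque)  # client IP -> timestamps of recent requests

    def allow(client_ip, now=None):
        """Return True if this request is within the limit, False otherwise."""
        now = time.monotonic() if now is None else now
        q = _history[client_ip]
        # Forget requests that have fallen out of the window.
        while q and now - q[0] > WINDOW:
            q.popleft()
        if len(q) >= MAX_REQUESTS:
            return False
        q.append(now)
        return True

A tarpit variant would sleep for a few seconds instead of rejecting the
request once allow() returns False, which keeps answering the corporate
mail servers while slowing them down rather than cutting them off.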




