[Novalug] ssh 1000's of login attempts

shawn wilson ag4ve.us@gmail.com
Mon Nov 17 06:01:43 EST 2014


There's also fwknop and similar "port knocking" programs.  Also, it's
probably best not to allow password logins.

Past that, I'll repeat the other suggestions: use fail2ban and move ssh
off port 22 (for your sanity when searching logs, if nothing else).
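
(For reference, a rough sketch of the sshd_config side of both of those
suggestions -- the directives are standard OpenSSH, but the path and the
port number are just examples:

        # /etc/ssh/sshd_config (path varies by system)
        # move ssh off 22 -- any unused port will do
        Port 2222
        # key-based auth only
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        PermitRootLogin no

Reload sshd afterwards, and keep an existing session open while you test
so you don't lock yourself out.)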

Also, if you are using passwords, your user base is big enough, and
usernames are obvious (first initial + last name, etc.), it's only a
matter of time: an attacker just finds the people who work there and
runs through the top 100 passwords or whatever.  It comes down to time,
since you're not rate-limiting attempts and people are predictable.
On Nov 16, 2014 1:48 PM, "Derek LaHousse via Novalug" <
novalug@firemountain.net> wrote:

> Lemme say, thank you Rich.  Your posts are often quite enlightening.
> To use someone else's words, your ideas are relevant to my interests
> and I would like to subscribe to your newsletter.
>
> On Sat, Nov 15, 2014 at 7:58 AM, Rich Kulawiec via Novalug
> <novalug@firemountain.net> wrote:
> > Here are a few ideas, presuming that you have a firewall available,
> > either external or on-board:
> >
> > First option
> > ------------
> >
> > Enumerate the set of IP addresses which have originated a valid
> > incoming ssh attempt during the last year.  (Provided you've kept
> > logs, this should be a straightforward exercise.)  Slap a band-aid on
> > this by making the presumption that each IP address is part of a /24
> > (even though, of course, it could be part of a /27 or a /18 or
> > something else).  Thus the address 1.2.3.4 becomes 1.2.3.0/24.  Sort
> > and uniq the list.  Count the number of (pseudo) networks and decide
> > if it's a tractable quantity.  If yes, then configure the firewall to
> > deny access from everywhere *except* that set of /24's.
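> >
> > (As a rough sketch of the enumeration step -- assuming syslog-style
> > OpenSSH logs in /var/log/auth.log; the path and the exact "Accepted"
> > message vary by system:
> >
> >         grep 'sshd.*Accepted' /var/log/auth.log* |
> >           grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' |
> >           awk -F. '{ print $1"."$2"."$3".0/24" }' |
> >           sort -t "." -k 1,1n -k 2,2n -k 3,3n -u
> >
> > That yields the deduplicated list of pseudo-/24's to count.)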
> >
> > Once you've done that, go back and actually figure out the *real*
> > address allocations that those IP addresses reside in.
> > "whois -h whois.arin.net 1.2.3.4" should show you that.
> >
> >         ("grepcidr" is your friend when you're confronted with a long
> >         list of such addresses and really don't want to look them all
> >         up one at a time.  i.e., run the list through
> >
> >         sort -t "." -k 1,1n -k 2,2n -k 3,3n -k 4,4n
> >
> >         which sorts by IPv4 address.  Pick out the biggest chunks by
> >         eyeball, find the allocation, use grepcidr -v to eliminate all
> >         of the addresses in the same block, lather, rinse, repeat.)
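> >
> >         (Illustration: once whois tells you a big chunk really lives
> >         in, say, 192.0.2.0/22 -- the block and the file names here
> >         are made-up placeholders -- then
> >
> >                 grepcidr -v 192.0.2.0/22 remaining.txt > leftover.txt
> >
> >         removes every address in that block from the working list,
> >         leaving only what you haven't accounted for yet.)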
> >
> > Possibly quite tedious but the end result will give you a VERY good
> > idea of exactly where your valid incoming ssh connections come from.
> > And since everything else is blocked at the firewall, this should cut
> > down the brute-force attempts -- since they can now only originate from a
> > (relatively) small number of networks.
> >
> > (NOTE: The blindly-presume-it's-a-/24 approach is only a stopgap.  It
> > will likely cause issues, because assuming that a particular host lives
> > in a /24, when in fact it's dynamically assigned an address from
> > a /22 -- and that address may or may not be persistent -- means that
> > it may, five minutes from now, pick up an address in the 3/4 of the
> > /22 that you're not allowing.  Oops.  Hence the phrase "slap a band-aid
> > on it" -- this really isn't sound network engineering practice, it's
> > just something to make the bleeding stop momentarily.)
> >
> > The downside -- and it's a big one -- of this approach is that if the
> > set of users is large and the set of addresses from which they log in
> > is large, then it may be infeasible.  But that's a judgment call to
> > be made based on what the history of valid ssh logins tells you. [1]
> > If you have 60 users and they come in from 120 networks, this is do-able.
> > If you have 6000 users and they show up from 12000 networks, probably
> > not.
> >
> > And of course tomorrow user U may try to log in from address A that
> > you've never seen before and this will result in no joy.  But see
> > possible mitigation for that several paragraphs below.
> >
> >
> > Second option
> > -------------
> >
> > Bookmark:
> >
> >         http://www.okean.com/asianspamblocks.html
> > and
> >         http://ipdeny.com
> >
> > The former is an extremely well-curated list of network blocks assigned
> > to China and Korea.  Unless you have a *need* to permit network traffic
> > originating in those allocations to reach your host, I recommend using
> > those lists to drop it all on the floor at the firewall.  Not just ssh:
> > ALL of it.  This has proven itself, over the past decade and tens of
> > millions of logged-but-denied attacks later, to be a marvelously
> > effective anti-spam anti-DoS anti-abuse anti-attack strategy.
> > Statistically speaking, it's easily the most effective, lowest-cost,
> > lowest-operational-impact defensive strategy that I've deployed. [2]
> >
> > Of course if you have an operational need to permit such traffic, then
> > don't do this.  Or if you have an operational need to permit some traffic
> > (say, HTTP or SMTP or whatever) then permit that -- but drop everything
> > else.
> >
> > The latter is a frequently-updated list of network blocks assigned
> > per-country.  (Thus in theory its list of Chinese and Korean network
> > blocks should precisely match okean.com's.  However in practice,
> > okean.com displays more attention to detail, which I presume they can
> > do because they're tightly focused on only two countries.)
> > ipdeny.com thus provides lists of "all network allocations in the US"
> > and "all network allocations
> > in France" and so on.  If you only need to allow ssh connections which
> > originate in the US, then block everything else but that.  This isn't
> > a panacea -- obviously -- but it will stuff a sock in all the brute
> > force attempts coming from everywhere else, thus reducing the scope
> > of the problem and perhaps making it tractable for fail2ban and other
> > similar approaches.
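> >
> > (One possible shape for the "allow ssh only from the US" variant,
> > using a pf table -- the table name, file path, and exact ipdeny zone
> > file are placeholders to verify, not gospel:
> >
> >         # us.zone is an ipdeny-style list of US allocations
> >         table <us_nets> persist file "/etc/pf.us.zone"
> >         block in proto tcp to any port 22
> >         pass  in proto tcp from <us_nets> to any port 22
> >
> > Since pf is last-match-wins without "quick", US sources hit the pass
> > rule and everyone else falls through to the block.  After refreshing
> > the zone file, "pfctl -t us_nets -T replace -f /etc/pf.us.zone"
> > reloads the table without touching the rest of the ruleset.)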
> >
> >
> > Third option
> > ------------
> >
> > Use ipdeny.com's blocks to start firewalling out connections to ssh
> > one country at a time.  This is fairly tedious and it risks making
> > the firewall tables large enough to impact performance (or perhaps too
> > large for the firewall implementation).  But if you're being subjected to
> > attacks that mostly originate from one or two or five countries, this
> > might be a viable approach.   The idea isn't to stop them all,
> > it's just to cut the noise down to a dull roar so that other methods
> > become effective.  (You can use grepcidr to ascertain which countries
> > are responsible for most of the brute-force ssh attempts by comparing
> > your reject logs against the list of allocations.)
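> >
> > (A crude version of that comparison, assuming you've pulled the
> > rejected source addresses into one file and have ipdeny-style
> > per-country zone files in a directory -- all the names here are
> > placeholders:
> >
> >         for z in zones/*.zone; do
> >           echo "$(grepcidr -c -f "$z" rejected-ips.txt) $z"
> >         done | sort -rn | head
> >
> > which prints a rough hit count per country file, biggest first.)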
> >
> > I've firewalled about a dozen countries here.  The net (heh) result of
> > that is what I like to call "lossless compression of incoming traffic".
> >
> >
> > Fourth option
> > -------------
> >
> > This one is orthogonal to the others: this is something that *everyone*
> > should do.  Run, do not walk, to:
> >
> >         http://www.spamhaus.org/drop/
> >
> > and grab the latest copy of the DROP (Don't Route Or Peer) and EDROP
> > lists.  These are blocks of network space that have been verified to
> > be in use by (quoting Spamhaus) "professional spam or cyber-crime
> > operations (used for dissemination of malware, trojan downloaders,
> > botnet controllers)".  You do not want any network traffic of any kind
> > from these, ever.  You do not want to send any network traffic of any
> > kind to these, ever.  To borrow a phrase, you will never find a more
> > wretched hive of scum and villainy.  So block them BIDIRECTIONALLY
> > in your firewall, or in your border gateways, or in your perimeter
> > routers, whatever you have: but make them disappear from your view of
> > the Internet.
> >
> > These lists are scrupulously maintained and updated daily.  I've
> > found that it suffices to update my copy once a month or so.
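> >
> > (A sketch of wiring that into pf -- the drop.txt/edrop.txt URLs and
> > their "CIDR ; SBL-id" line format are what Spamhaus has published,
> > but verify both before trusting this:
> >
> >         curl -s https://www.spamhaus.org/drop/drop.txt \
> >                 https://www.spamhaus.org/drop/edrop.txt |
> >           sed -e 's/;.*//' -e '/^[[:space:]]*$/d' > /etc/pf.drop
> >         pfctl -t drop -T replace -f /etc/pf.drop
> >
> > with the table and the bidirectional block declared in pf.conf:
> >
> >         table <drop> persist file "/etc/pf.drop"
> >         block quick from <drop>
> >         block quick to <drop>
> >
> > Run the fetch from cron monthly, per the note above.)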
> >
> > (Do note that if you do this, you will break DNS lookups for that
> > set of domains/hosts whose authoritative servers lie within this
> > network space.  That's kind of the idea.  If you need to do research,
> > then punch a hole in the block for the system you're doing it from.
> > And be careful: these are known-hostile networks.)
> >
> >
> > Other options
> > -------------
> >
> > You could move ssh off port 22.  Or use port-knocking.  Or use
> > passive OS fingerprinting (maybe).  Or rate-limit connections to
> > ssh (on whatever port it's on) in the firewall, if yours supports that.
> >
> > You can also mitigate some of the downside of the very first idea above
> > by running ssh on a second port.  Thus "we accept port 22 ssh connections
> > only from the following enumerated set, but we accept port 12345 ssh
> > connections from anywhere".  This gives your users a fallback in case
> > tomorrow one of them tries to log in from a network you've never seen
> > before.  Slightly ugly but beats "you can't connect from there, period".
> > Downside is that sufficiently diligent, sufficiently clueful,
> > sufficiently motivated attackers will eventually find it.  Bleh.
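> >
> > (OpenSSH will listen on several ports at once; the sshd_config side
> > of that fallback is just
> >
> >         Port 22
> >         Port 12345
> >
> > with the firewall then restricting port 22 to the enumerated set and
> > leaving the second port open to the world -- e.g., in pf, something
> > like "pass in proto tcp from <allowed_nets> to any port 22" alongside
> > "pass in proto tcp to any port 12345", where <allowed_nets> is a
> > placeholder name for a table of the /24's from the first option.
> > The port numbers are just the example from above.)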
> >
> >
> > General comment
> > ---------------
> >
> > "Default permit" is dead.  Which is unfortunate, as there was a time
> (1984)
> > when it was a perfectly reasonable strategy.  But in 2014?  No.  HELL no.
> >
> > Please see item #1 in:
> >
> >         The Six Dumbest Ideas in Computer Security
> >
> >         http://www.ranum.com/security/computer_security/editorials/dumb/index.html
> >
> > which is, out of the tens of thousands of papers, books, articles, etc.
> > that I've read on security, the most succinct, practical, best advice
> > I've seen.  This should be printed out and taped to the forehead of
> > every CSO, CIO and CTO on the planet. [3]
> >
> > Now there are a few -- very few, relatively speaking -- entities which
> > must permit access to certain networks/hosts/ports/protocols on a global
> > basis.  Chances are good that you're not one of them.  If so: stop doing
> > that.  Ruthlessly block everything you possibly can, because there's
> > no point in watching the logs scroll by faster through than you can
> > read them when instead you could be enjoying the quiet afforded by
> > dropping the deluge at the firewall.
> >
> > But if you *are* one of them: I recommend Macallan 18-year for breakfast,
> > as you're going to need it.  There *are* ways to tackle this problem
> > in your case, but -- except for deployment of the DROP list -- they're
> > going to be rather more complicated.
> >
> > Oh.  One last thing.  My choice of firewall is "pf", as found in OpenBSD.
> > It performs extremely well even on old/slow hardware, it's open-source
> > (of course), it has a huge number of features, it supports all kinds of
> > things like connection rate throttling, passive OS fingerprinting,
> > throughput limiting, and so on.  It also has the ability to perform well
> > even when asked to deal with (fairly) large tables.
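> >
> > (For the curious, the connection rate throttling mentioned above looks
> > roughly like this in pf.conf -- the thresholds are arbitrary examples:
> >
> >         table <ssh_abusers> persist
> >         block in quick from <ssh_abusers>
> >         pass in proto tcp to any port 22 keep state \
> >                 (max-src-conn 5, max-src-conn-rate 3/60, \
> >                  overload <ssh_abusers> flush global)
> >
> > i.e., a source that opens more than 5 concurrent connections, or more
> > than 3 in 60 seconds, gets dumped into the <ssh_abusers> table and
> > blocked, with its existing states flushed.)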
> >
> > ---rsk
> >
> > [1] I'm a proponent of the concept that one should always perform log
> > analysis in order to discover what "normal" looks like.  For example:
> > how many valid ssh logins a week does the host get?  A day?  An hour?
> > If the average per-day as measured over six months is 220, and the
> > logs show that on a particular day the observed number was 4000,
> > that's Not Normal.  Or if it was 10: that's Not Normal either.
> > Both situations need to be researched and explained.
> >
> > Knowing what's normal for ssh servers or web servers or mail servers
> > helps craft detection methods that flag Not Normal and bring it to
> > the attention of humans.
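> >
> > (A crude starting point for that baseline, again assuming syslog-style
> > OpenSSH logs -- the path and timestamp format differ across systems:
> >
> >         grep -h 'sshd.*Accepted' /var/log/auth.log* |
> >           awk '{ print $1, $2 }' | sort | uniq -c
> >
> > which counts successful logins per calendar day; graph or eyeball it
> > and the outliers in either direction stand out.)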
> >
> > [2] I find it amazing that some people actually expend their bandwidth,
> > their CPU, their memory and their disk repeatedly accepting SMTP traffic
> > -- including the message-body -- and subjecting it to analysis algorithms
> > such as those in SpamAssassin, in attempts to prove to themselves that
> > yes, it's spam before they reject it and hang up.  It's really spam.
> > Just like the last 3,572,291 messages were really spam.  At some point,
> > it becomes easier to just drop packets on the floor, (optionally)
> > log the connection, and move on.
> >
> > [3] It's instructive, I think, to read the reports on major breaches
> > and dataloss incidents -- e.g., Target -- and then read Ranum's piece.
> > It's almost always possible to identify at least one of those six
> > mistakes and often several of them.  JP Morgan spent a kazillion
> > dollars on IT security, but failed because they made mistake #1.
>


