Recently (though it is also a recurring question) we saw three interesting threads about hacking and security:
- How do I deal with a compromised server? (https://serverfault.com/questions/218005/my-servers-been-hacked-emergency)
- Finding how a hacked server was hacked (https://serverfault.com/questions/218138/finding-how-a-hacked-server-was-hacked)
- File permissions question (https://serverfault.com/questions/218164/file-permissions-question)
The last one isn't directly related, but it highlights how easy it is to make a mess of web server administration.
As there are several things that can be done before something bad happens, I'd like to have your suggestions on good practices to limit the ill effects of an attack and on how to react in the unfortunate case that one does happen.
It's not just a matter of securing the server and the code, but also of auditing, logging and countermeasures.
Do you have a list of good practices, or do you prefer to rely on software or on experts who continuously analyze your web server(s) (or on nothing at all)?
If so, can you share your list and your ideas/opinions?
UPDATE
I received several good and interesting pieces of feedback.
I'd like to end up with a simple list that can be handy for IT security administrators but also for the web factotum masters.
Even though everybody gave good and correct answers, at the moment I prefer Robert's, as it's the simplest, clearest and most concise, and sysadmin1138's, as it's the most complete and precise.
But nobody considered the user's perspective and perception; I think that's the first thing that has to be considered.
What will users think when they visit my hacked site, especially if you hold sensitive data about them? It's not just a matter of where to store the data, but of how to calm angry users.
What about the data, the media, the authorities and the competitors?
There are two big areas to focus on:
- Making it hard to get in.
- Creating policies and procedures to calmly and efficiently handle the event of someone getting in past point 1.
This is a very complex topic, and a lot of it focuses on making sure you have enough information to figure out WTF happened after the fact. The abstract bullet points, for simplicity:
- Keep logs (see also: Security Information and Event Management). A minimal log-scanning sketch follows this list.
  - Any authorization attempts, both successful and failed, preferably with source information intact.
  - Firewall access logs (this may have to include per-server firewalls, if in use).
  - Webserver access logs.
  - Database server authentication logs.
  - Application-specific usage logs.
  - If possible, the SIEM can throw alerts on suspicious patterns.
- Enforce proper access controls (see the permissions-audit sketch after this list).
  - Ensure rights are set correctly everywhere, and avoid 'lazy-rights' ("oh, just give everyone read") where possible.
  - Periodic audits of ACLs to ensure that procedures are actually being followed, and that temporary troubleshooting steps ("give everyone read, see if it works then") have been removed once troubleshooting has finished.
  - All firewall pass-through rules need to be justified, and audited periodically.
  - Webserver access controls need to be audited as well, both webserver and filesystem ACLs.
- Enforce change-management.
  - Any changes to the security environment need to be centrally tracked and reviewed by more than one person.
  - Patches should be included in this process.
  - Having a common OS build (template) will simplify the environment and make changes easier to track and apply.
- Disable guest accounts.
- Ensure no passwords are left at their defaults (see the password-field check after this list).
  - Off-the-shelf applications may set up users with predefined passwords. Change them.
  - A lot of IT appliances ship with user/password pairs that are very well known. Change those, even if you log into that thingy only once a year.
- Practice least-privilege. Give users the access they actually need.
  - For admin users, a two-account setup is wise: one regular account used for email and other office tasks, and a second for elevated-priv work. VMs make this easier to live with.
  - Do NOT encourage regular use of generic administrator/root accounts; it's hard to track who was doing what when.
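To make the "alert on suspicious patterns" and log-keeping points above concrete, here is a minimal sketch (not a SIEM replacement) that counts failed SSH logins per source address. It assumes a Debian/Ubuntu-style /var/log/auth.log with standard OpenSSH "Failed password ... from <ip>" lines; the path and threshold are assumptions you would adapt to your environment.

```python
#!/usr/bin/env python3
"""Minimal sketch: count failed SSH logins per source IP and flag noisy ones.

Assumes a Debian/Ubuntu-style /var/log/auth.log with OpenSSH
"Failed password ... from <ip>" lines; adjust LOG_PATH and THRESHOLD to taste.
"""
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"   # assumption: syslog-style sshd log location
THRESHOLD = 10                   # assumption: alert above this many failures

# Matches e.g. "Failed password for invalid user admin from 203.0.113.5 port 2222 ssh2"
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def main():
    failures = Counter()
    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            match = FAILED_RE.search(line)
            if match:
                _user, source_ip = match.groups()
                failures[source_ip] += 1

    for source_ip, count in failures.most_common():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed SSH logins from {source_ip}")

if __name__ == "__main__":
    main()
```

A real SIEM would correlate this with firewall, webserver and application logs, but even a cron job like this catches the crudest brute-force attempts.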
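For the 'lazy-rights' audit point, a small sketch along these lines can flag world-writable files and directories under a web root. /var/www is an assumed path, and the check only looks at classic mode bits, not POSIX ACLs, so treat it as a starting point rather than a full audit.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag world-writable files and directories under a web root.

/var/www is an assumed document root; only classic mode bits are checked,
not POSIX ACLs, so this is a first pass rather than a complete ACL audit.
"""
import os
import stat

WEB_ROOT = "/var/www"  # assumption: adjust to your document root

def main():
    for dirpath, dirnames, filenames in os.walk(WEB_ROOT):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # vanished or unreadable; skip it
            if stat.S_ISLNK(mode):
                continue  # symlink modes are meaningless on Linux
            if mode & stat.S_IWOTH:
                kind = "dir " if stat.S_ISDIR(mode) else "file"
                print(f"world-writable {kind}: {path} ({stat.filemode(mode)})")

if __name__ == "__main__":
    main()
```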
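And for the guest-account and default-password items, the following sketch flags locally defined accounts whose /etc/shadow password field is empty (Linux-specific, must run as root). It cannot know which vendor defaults a given appliance or application ships with, so it only catches the worst case of no password at all; the guest-style account names it checks are examples, not an authoritative list.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag local accounts whose /etc/shadow password field is empty.

Linux-specific and must run as root. It cannot detect vendor default
passwords, only accounts that require no password at all.
"""

SHADOW_PATH = "/etc/shadow"

def main():
    with open(SHADOW_PATH) as shadow:
        for line in shadow:
            fields = line.rstrip("\n").split(":")
            if len(fields) < 2:
                continue
            user, password_field = fields[0], fields[1]
            if password_field == "":
                print(f"ALERT: account '{user}' has an empty password field")
            elif user in ("guest", "ftp") and password_field not in ("*", "!", "!!"):
                # assumption: these names are just examples of guest-style logins
                print(f"review: guest-style account '{user}' is not locked")

if __name__ == "__main__":
    main()
```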
A security-event policy is a must-have for all organizations. It greatly reduces the
"running around with our heads cut off" phase of response, as people tend to get
irrational when faced with events such as these. Intrusions are big, scary affairs.
Shame at suffering an intrusion can cause otherwise level-headed sysadmins to start
reacting incorrectly.
All levels of the
organization need to be aware of the policies. The larger the incident, the more likely
upper management will get involved in some way, and having set procedures for handling
things will greatly assist in fending off "help" from on high. It also gives a level of
cover for the technicians directly involved in the incident response, in the form of
procedures for middle-management to interface with the rest of the
organization.
Ideally, your Disaster Recovery
policy has already defined how long certain services may be unavailable before the DR
policy kicks in. This will help incident response, as these kinds of events
are disasters. If the event is of a type where the recovery window
will NOT be met (example: a hot-backup DR site gets a realtime feed of changed data, and
the intruders deleted a bunch of data that got replicated to the DR site before they
were noticed. Therefore, cold recovery procedures will need to be used) then upper
management will need to get involved for the risk-assessment
talks.
Some components of any incident response
plan:
- Identify the compromised systems and exposed data.
- Determine early on whether or not legal evidence will need to be retained for eventual prosecution.
  - If evidence is to be retained, do not touch anything about that system unless absolutely required to. Do not log in to it. Do not sift through log files. Do. Not. Touch.
  - If evidence is to be retained, the compromised systems need to be left online but disconnected until such time as a certified computer-forensics expert can dissect the system in a way compatible with evidence-handling rules.
    - Powering off a compromised system can taint the data.
    - If your storage system permits it (discrete SAN device), snapshot the affected LUNs before disconnection and flag them read-only.
  - Evidence-handling rules are complex and oh so easy to screw up. Don't do it unless you've received training on them. Most general sysadmins do NOT have this kind of training.
  - If evidence is being retained, treat the loss of service as a hardware-loss disaster and start recovery procedures with new hardware.
- Pre-set rules for what kinds of disasters require what kinds of notice. Laws and regulations vary by locality.
  - Rules pertaining to 'exposure' and 'proven compromise' do vary.
  - Notification rules will require the Communications department to get involved.
  - If the required notice is big enough, top-level management will have to be involved.
- Using DR data, determine how much "WTF just happened" time can be spent before getting the service back online becomes a higher priority.
  - Service-recovery times may require the work of figuring out what happened to be subordinated. If so, take a drive image of the affected device for dissection after services are restored (this is not an evidentiary copy; it's for the techs to reverse engineer). See the image-hashing sketch after this list.
  - Plan your service-recovery tasks to include a complete rebuild of the affected system, not just cleaning up the mess.
  - In some cases service-recovery times are tight enough that disk images need to be taken immediately after a compromise is identified and legal evidence is not to be retained. Once the service is rebuilt, the work of figuring out what happened can start.
- Sift through log files for information relating to how the attacker got in and what they may have done once in.
- Sift through changed files for information relating to how they got in, and what they did once they got in (see the changed-files sketch after this list).
- Sift through firewall logs for information about where they came from, where they might have sent data to, and how much of it may have been sent.
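On the drive-image point above: if you do take an investigative (non-evidentiary) copy, hashing it at capture time and again before analysis lets you confirm the copy hasn't changed underneath you. A minimal sketch, assuming the image already exists as a file (the path in the usage note is hypothetical):

```python
#!/usr/bin/env python3
"""Minimal sketch: hash a disk image in chunks so the copy can be re-verified later.

This is for an investigative copy, not a court-grade forensic workflow,
which has its own tooling and evidence-handling rules.
"""
import hashlib
import sys

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB reads keep memory use flat on big images

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        while True:
            chunk = image.read(CHUNK_SIZE)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # usage: python3 hash_image.py /images/web01.img  (path is a made-up example)
    image_path = sys.argv[1]
    print(f"{sha256_of(image_path)}  {image_path}")
```

Record the hash alongside the image and re-run the script before each analysis session.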
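For the "sift through changed files" step, one quick first pass is to list files modified since the suspected compromise time. The sketch below walks an assumed web root and compares mtimes against a cutoff you supply; attackers can forge timestamps, so this narrows the search rather than proving anything.

```python
#!/usr/bin/env python3
"""Minimal sketch: list files modified after a suspected compromise time.

The root path and cutoff format are assumptions to adapt; mtimes can be
forged, so treat the output as a starting point for the file/log sift.
"""
import os
import sys
from datetime import datetime

SEARCH_ROOT = "/var/www"  # assumption: start where the compromised app lives

def changed_since(root, cutoff):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.lstat(path).st_mtime
            except OSError:
                continue
            if mtime >= cutoff:
                yield datetime.fromtimestamp(mtime), path

if __name__ == "__main__":
    # usage: python3 changed_files.py "2011-01-02 03:00"
    cutoff = datetime.strptime(sys.argv[1], "%Y-%m-%d %H:%M").timestamp()
    for when, path in sorted(changed_since(SEARCH_ROOT, cutoff)):
        print(f"{when:%Y-%m-%d %H:%M:%S}  {path}")
```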
Having
policies and procedures in place before a compromise, and well
known by the people who will be implementing them in the event of a compromise, is
something that just needs doing. It provides everyone with a response framework at a
time when people won't be thinking straight. Upper management can thunder and boom about
lawsuits and criminal charges, but actually bringing a case together is an
expensive process and knowing that beforehand can help damp the
fury.
I also note that these sorts of events do need to be factored into the overall Disaster Response plan. A compromise will very likely trigger the 'lost hardware' response policy and is also likely to trigger the 'data loss' response. Knowing your service-recovery times helps set expectations for how long the security response team can have for poring over the actual compromised system (if not keeping legal evidence) before it's needed in the service recovery.