Hakin9
PRACTICAL PROTECTION IT SECURITY MAGAZINE

ARE YOU GETTING THE MOST OUT OF YOUR IPS?
NOTES OF THE NETWORK ADMINISTRATOR
IMPROVING YOUR CUSTOM SNORT RULES
Editor in Chief: Karolina Lesińska
karolina.lesinska@software.com.pl
Editorial Advisory Board: Rebecca Wynn, Michael Munt
DTP: Ireneusz Pogroszewski
Art Director: Ireneusz Pogroszewski
ireneusz.pogroszewski@software.com.pl
Proofreaders: Barry McClain, Mark Lohman, Graham Hili
Top Betatesters: Rebecca Wynn, Bob Folden, Carlos Ayala, Steve Hodge, Matthew Dumas, Andy Alven
Special thanks to the Betatesters and Proofreaders who helped us with this issue. Without their assistance there would not be a Hakin9 magazine.
CEO: Ewa Łozowicka
ewa.lozowicka@software.com.pl
Production Director: Andrzej Kuca
andrzej.kuca@hakin9.org
karolina.lesinska@hakin9.org
Subscription: Iwona Brzezik
iwona.brzezik@software.com.pl
Publisher: Software Press Sp. z o.o. SK
Phone: 9173382631
www.hakin9.org/en
Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage.
All trade marks presented in the magazine were used only for informative purposes.
All rights to trade marks presented in the magazine are reserved by the companies which own them.
To create graphs and diagrams we used SmartDraw program by SmartDraw company.
The editors use an automatic DTP system.
Mathematical formulas created by Design Science MathType™.
DISCLAIMER!
The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.
Dear Readers,
As you already know, Snort is the most widely deployed IDS/IPS technology worldwide. Developed by Sourcefire, Snort combines the benefits of signature-, protocol-, and anomaly-based inspection.
In this Snort Special Issue Leon Ward, Joel Esler, Kishin Fatnani, Shivang Bhagat and Rishita Anubhai provide insight into writing Snort rules and into deployment of this IDS/IPS.
With the end of the year inevitably approaching, it's high time to briefly reflect on 2010 and enter 2011 with new solutions and ideas for the foreseeable future. Some of them are provided by K. K. Mookhey in Are you getting the most out of your IPS? And the annual Conference on Nagios and OSS Monitoring is to be looked forward to.
Wishing you a wonderful Christmas,
Hakin9 Team
TOOLS
4 up.time IT Systems Management Review
by Michael Munt
6 Notes of the Network Administrator
by Doug Chick
I recently used SNORT and another program I like, EtherApe, to detect a major intrusion on my network. Within minutes millions of people were on my private fiber network. Once I isolated the problem I immediately contacted my Internet provider. Like many ISPs they denied it and recommended I look at my routing tables. If you are a network manager then you know that in very many cases you must provide proof to your ISP before they are willing to provide you with support. In this case I recorded the event showing that there were hundreds of thousands, perhaps even a million people passing traffic on my network. I sent the logs and a video of my SNORT and EtherApe displays and emailed them to the ISP. I then shut down the two interfaces on my router and waited for a return call. The call came quickly too.
8 Writing Snort Rules
by Kishin Fatnani
Though Snort can also be used for packet logging, sniffing or as an IPS, in this article we will look more into the concept of rules by which Snort detects traffic that is interesting for us, basically the kind of traffic we are looking for, like a network attack, a policy violation, or maybe traffic from a network application or device that you are troubleshooting.
14 Collection and Exploration of Large Data
by Luca Deri
Collecting and exploring monitoring data is becoming increasingly challenging as networks become larger and faster. Solutions based on both SQL databases and specialized binary formats do not scale well as the amount of monitoring information increases. In this article I would like to approach the problem by using a bitmap database that allows the implementation of an efficient solution for both data collection and retrieval. NetFlow and sFlow are the current standards for building traffic monitoring applications. Both are based on the concept of a traffic probe (or agent in the sFlow parlance) that analyses network traffic and produces statistics, known as flows, which are delivered to a central data collector. As the number of flows can be extremely high, both standards use sampling mechanisms in order to reduce the workload on both the probe and the collector.
ADVANCED
48 Improving your custom Snort rules
by Leon Ward
While it is easy to create a custom Snort rule, do you know if you are actually making a good one or not? This article introduces some common mistakes I find in custom Snort rules and the potential implications of those mistakes. The Snort IPS engine has changed substantially over the last ten years. Packet processing speed has improved, IP defragmentation and stream reassembly functions have evolved, and the connection and state tracking engine has matured, but there is one thing that keeps getting left behind: custom rule-sets. With each revision of Snort, new features are added that enhance the detection capability and aid the packet processing performance of the Snort engine.
24 An Unsupervised IDS False Alarm
Reduction System — SMART
by Gina Tjhai and Maria Papadaki
Signature-based (or rule-based) network IDSs are widely used in many organisations to detect known attacks (Dubrawsky, 2009). A common misconception about IDSs is that they are Plug-and-Play devices that can be installed and then allowed to run autonomously. In reality, this is far from the truth.
30 Content Modifiers: Keep it Specific
by Joel Esler
Without going off the deep-end here and discussing every single Snort rule keyword, I just wanted to touch on a few modifiers that people sometimes misunderstand. These modifiers are not keywords in and of themselves; rather, they apply as modifiers to another keyword. That keyword is content. The content keyword is one of the easiest pieces of the Snort rules language, as all it does is look for a particular string. The modifiers that I am talking about are: 1. offset, 2. depth, 3. distance, 4. within, 5. nocase, 6. http_uri, 7. rawbytes.
DEFENSE
34 Deploying Snort as WAF
(Web Application Firewall)
by Shivang Bhagat and Rishita Anubhai
In today's environment, web applications are becoming a popular attack point for attack agents. An attack agent can be a human attacker or an automated worm. It tries to explore vulnerabilities over HTTP(S) and exploit them given an opportunity. The web application landscape is also changing and more complexities are getting added; this provides openings for vulnerabilities and possible exploitations. HTTP traffic is no longer restricted to name-value pairs and traditional HTML only. It has evolved with Web 2.0 and RIA; it allows JSON, AMF, XML and various other structures. It has become a platform for robust and advanced business application hosting and usage. It is imperative to secure business applications against all possible attack vectors and to maintain security of information and access. In this article we will try to understand Snort from an HTTP standpoint and how we can protect applications from some of the popular attack vectors like XSS or SQL injection by using it.
40 Are you getting the most out of your IPS?
by K. K. Mookhey
Picture this: a multi-million dollar global telecom giant has invested millions of dollars into building a state-of-the-art Security Operations Center. They have a huge display screen being monitored 24/7 by a team of specialists who, so we are told, have received extensive training in the specific technologies used, as well as in the overall incident management framework. They've deployed a high-end intrusion prevention system (IPS) which feeds into their Security Incident Management (SIM) system. A review of the procedures and Service Level Agreement (SLA) the SOC team signed with the rest of the business reveals that they are ready to respond 24/7 and have committed to responding to any serious attack within 2 hours. On paper it all looks impressive and too good to be true.
TOOLS
up.time IT Systems
Management Review
When it comes to the performance and availability of your IT infrastructure and applications, deep and easy-to-use monitoring is a must.
IT monitoring and reporting is more complex than ever, as applications and services span many environments (cloud, virtual and physical) and infrastructures (Windows, UNIX, Linux, VMware, etc.). Additionally, IT infrastructure is now global, and monitoring from one tool, instead of many point tools, is essential to drive down costs while increasing performance.
up.time's Single Pane of Glass dashboard provides a deep, easy-to-use, affordable and complete IT systems management and monitoring solution designed for mid-enterprise companies. Every license in up.time's comprehensive, cross-platform management and monitoring suite includes unlimited access to:
+ Server Monitoring
+ Virtual Server Monitoring
+ Cloud Monitoring
+ Co-Location Monitoring
+ Network Monitoring
+ SLA Monitoring & Management
+ Virtualization & Consolidation
+ Capacity Planning
+ Application Monitoring
+ Application Transaction Monitoring
+ Proactive Outage Avoidance
+ IT Process Automation
One of the highly beneficial capabilities of the up.time suite is access to Service Level Management. Most departments require SLAs (Service Level Agreements) for their equipment and applications. up.time makes it very easy to define and document agreed SLAs, then link them through to the appropriate infrastructure service. up.time also provides the ability to automate responses to issues, removing the possibility of human error while greatly decreasing the Mean Time To Repair. In fact, up.time goes a step further and lets
Figure: up.time SLA dashboard, showing SLAs for up.time, Payroll, Email, ERP and CRM.
$395 per Windows Server
$695 per UNIX Server
$695 per ESX Server (no charge per instance or VM)
All-in-One: No additional charges for modules or applications.
URL: http://www.uptimesoftware.com/
administrators proactively automate responses based on thresholds, allowing up.time to solve problems before they happen. It's not just physical, virtual and cloud based servers that up.time monitors; it also provides application and application transaction monitoring across Email, CRM, ERP, Web, and even custom applications (including any 3rd party commercial software or in-house developed applications).
In addition to all of the above, up.time is extremely advanced in its reporting. This area is a major asset of the up.time suite and it is immediately apparent that
the reporting has had a good deal of thought and time
spent on it. The reporting is both deep and very easy to
create. Administrators can generate and save reports
in different formats and quickly send (or set automated
daily, weekly or monthly sends) via email to a single
user or an entire group.
When we say up.time is easy to use, we really mean it. Installation of up.time was a dream: very simple, straightforward and easy to do. The entire process only takes a few mouse clicks. If you decide to go with the VMware appliance option, this is even easier, as it comes as a pre-installed appliance that can be imported into any virtual infrastructure.
Managing the monitored environment is achieved
through a web portal which is simple, clean and easy
to read (unlike other monitoring solutions that appear
to have far, far too many menu's and options). Once
you enter a small amount of information, the ‘My Portal
home page is displayed. This page provides a summary
list of the current alerts that you have configured
together with saved reports and support links. All the
tutorials are web based and, again, very clean and
concise. The end result is that you are up and running
with this product very quickly.
Everything about the product screams simplicity
and yet it's extremely deep in its monitoring and
reporting capabilities. Compared to other tools, up.time
is very refreshing. It's certainly powerful enough
for large enterprises to monitor over 5,000 servers
and applications globally, and yet it's affordable for
the small and mid enterprise companies to monitor
between 25 and 500 servers and applications. The help
documentation is included and available through your
browser locally on the server.
up.time uses one of the friendliest licensing models
in the industry, with its per-physical-server licensing
across the board, even in virtual environments. All you
need to do is count the number of physical servers you
want to monitor, and that’s it. Everything is included,
no modules, hidden extras, application charges or
management packs.
There is so much depth to this product that I can't comment on it all within the scope of this article. If this sounds interesting, up.time makes trying it for yourself incredibly easy. I suggest downloading the trial and taking it for a test drive. I think you'll be as impressed as I was. In fact, this product is so good, I'm starting to recommend that my clients review their monitoring needs and consider trialing up.time.
MICHAEL MUNT
BASICS
Notes of the Network Administrator
by Doug Chick
I have computer networking friends that work with various departments of the government, corporations and private companies that are very aware of the possible threats to their computers and networks.
What you will learn...
+ overview of IDS
+ overview of Snort
What you should know...
+ basic knowledge of TCP/IP
They are and have been taking serious steps to secure their systems, despite little interest or concern from company or agency managers. Because of this lack of concern, many network security managers must take it upon themselves to secure their networks, but with little or no budget, they must rely on Open Source software to do it.
One such software I use is SNORT. SNORT is an open source network intrusion prevention and detection program (NIDS). It is a must-have in any network manager's security toolbox. If you are a Linux fan, then I'm sure you already know about SNORT, as it comes preinstalled with such Linux distributions as BackTrack 4 and Knoppix-STD. Note there is also a Windows install.
SNORT monitors, captures and analyzes incoming packets and scans for a pattern of intrusion. In other words, it looks for specific packet signatures used by hackers and automated hacking programs. Snort can detect attacks, probes, operating system fingerprinting, buffer overflows, port scans, and server message block scans. (In other words: all incoming network traffic.)
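As a quick orientation (the file path and interface name below are illustrative assumptions, not taken from the article), Snort is typically started in NIDS mode by pointing it at a configuration file and a network interface:

```
# Check the configuration first, then run Snort in NIDS mode on eth0
snort -T -c /etc/snort/snort.conf
snort -c /etc/snort/snort.conf -i eth0
```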
I recently used SNORT and another program I like, EtherApe, to detect a major intrusion on my network. Within minutes millions of people were on my private fiber network. Once I isolated the problem I immediately contacted my Internet provider. Like many ISPs they denied it and recommended I look at my routing tables.
If you are a network manager then you know that in very many cases you must provide proof to your ISP before they are willing to provide you with support. In this case I recorded the event showing that there were hundreds of thousands, perhaps even a million people passing traffic on my network. I sent the logs and a video of my SNORT and EtherApe displays and emailed them to the ISP. I then shut down the two interfaces on my router and waited for a return call. The call came quickly too.
The ISP's main core router was hacked, and the routes were re-directed. Two hours later all the ISP's network engineers were called in. I stopped it on my end by shutting down the two interfaces it was coming in from, but it took them two more days to correct it on their end. I have redundant circuits from another provider, so I simply used those. The direct impact to me was minimal. Still, with the flood of hundreds of
Figure 1
thousands of people directed to my private network, with over twenty offices connected, I am still waiting to discover any long-term damage. Privately, one of the techs from my ISP told me later that they thought the intrusion came from China.
With the help of SNORT and EtherApe I was immediately alerted to a flood of unwanted traffic on my network. To me this reaffirmed the necessity of intrusion detection programs, and also made me research how many more types there are.
Types of Intrusion Detection Systems
Intrusion prevention systems (IPS)
This device, also known as an Intrusion Detection and Prevention System, is a network security appliance that monitors for malicious activity. This standalone appliance identifies suspicious activity, isolates and logs it, and attempts to block it.
Host-based intrusion detection systems (HIDS)
Host-based intrusion detection systems are installed at the computer level and monitor the actual server they are installed on for suspicious activity, whereas IPSs operate on the network and analyze packets.
Protocol-based intrusion detection systems (PIDS)
This detection system is generally added in front of a web server and analyzes the HTTP (and HTTPS) protocol stream and/or port numbers.
Application protocol-based intrusion detection systems (APIDS)
APIDS are typically placed between servers and monitor the application state, or more accurately the protocols being passed between them: for example, a web server that might call on a database to populate a webpage field.
I like SNORT because: one, it is free, and two, because of the support and its sheer flexibility. (I know, that is three things.)
SNORT RULES
Like with any intrusion detection device, SNORT has rules, for example:

alert tcp any any -> 192.168.1.0/24 111 (content:"|00 01 86 a5|"; msg:"mountd access";)
The rules are actions that tell Snort what to do when it finds a packet that matches the rule criteria. There are 5 default actions: alert, log, pass, activate, and dynamic. If you are running Snort in inline mode, there are additional options which include drop, reject, and sdrop.
1. alert — generate an alert using the selected alert method, and then log the packet
2. log — log the packet
3. pass — ignore the packet
4. activate — alert and then turn on another dynamic rule
5. dynamic — remain idle until activated by an activate rule, then act as a log rule
6. drop — block and log the packet
7. reject — block the packet, log it, and then send a TCP reset if the protocol is TCP or an ICMP port unreachable message if the protocol is UDP
8. sdrop — block the packet but do not log it
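To sketch how the same match can be paired with different actions, the two rules below use action keywords from the list above; the network, port and SIDs are illustrative, not from the article:

```
# IDS mode: alert on an inbound telnet connection attempt (SYN only)
alert tcp any any -> 192.168.1.0/24 23 (msg:"telnet connection attempt"; flags:S; sid:1000101;)

# Inline mode: silently drop the same traffic without logging it
sdrop tcp any any -> 192.168.1.0/24 23 (sid:1000102;)
```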
If you want to learn more about SNORT, I recommend you visit their site at: www.snort.org
I know there are other NIDS programs out there, and I'm sure they are just as good as SNORT, but as a network administrator/engineer this particular program has already proven itself to me.
EtherApe: Gorilla Virtual Warfare
As I mentioned before, another program I like is EtherApe. EtherApe is a graphical network monitor for UNIX-like operating systems. It doesn't have the same features as SNORT, but what it does do is give you a graphical overview of what is going on in your network. I run EtherApe on a large screen monitor above my desk. When trouble comes, I can see an immediate flash of color that warns me that there is a possible situation on my network. This seemingly simple program has called me to action a couple of times. EtherApe has the ability to filter just the port number you want to monitor, or by default all of them. Like the name implies it works on an Ethernet network, but it also works with FDDI, Token Ring, ISDN, PPP and SLIP.
DOUGLAS CHICK
Douglas Chick is a Director of Information Systems for a large company in the Orlando, FL area, although he introduces himself as a Network Engineer/Administrator. As with many computer people, Doug holds MCSE and CCNA certifications. Doug first became known on the Internet in May of 2000 for a series of articles about Microsoft retiring the NT4 MCSE that were published on over 30 Internet magazines. With his humor and insightful look into the world of computer professionals, he receives a great deal of attention from other computer professionals around the world, and is proud to admit that less than one percent are in response to his many typos and misspellings. For more visit: www.TheNetworkAdministrator.com
BASICS
Writing Snort Rules
Snort, as you would know, is a tool used to detect intrusions
on a network.
What you will learn...
+ how to get started with writing basic Snort rules
+ configuration of Snort is not covered here
+ preprocessor and advanced rules are not covered
What you should know...
+ good knowledge about TCP/IP networks
+ packet analysis
+ using and configuring Snort
Though the tool can also be used for packet logging, sniffing or as an IPS, in this article we will look more into the concept of rules by which Snort detects traffic that is interesting for us, basically the kind of traffic we are looking for, like a network attack, a policy violation, or maybe traffic from a network application or device that you are troubleshooting. For instance, if someone is doing an XMAS port scan of our network using nmap with the -sX option, Snort will give us the following alert message:
[**] [Classification: Attempted Information Leak] [Priority: 2]
15:35:12.469708 192.168.0.11 -> 192.168.0.1:132
TCP TTL:53 TOS:0x0 ID:28021 IpLen:20 DgmLen:40
**U*P**F Seq: 0xB70FB1E5 Ack: 0x0 Win: 0x800 TcpLen: 20 UrgPtr: 0x0
[Xref => http://www.emergingthreats.net/cgi-bin/cvsweb.cgi/sigs/SCAN/SCAN_NMAP]
[Xref => http://doc.emergingthreats.net/2000566]
If the use of P2P or IM applications is against the corporate policy, Snort can detect their use on the network and provide alerts with messages similar to these:

[**] P2P BitTorrent announce request [**]

or

[**] CHAT Yahoo IM successful logon [**]
To identify and alert on the appropriate events of interest, the detection engine needs to have rules for each event. A rule is what tells the engine where and what to look for in the network traffic and what needs to be done if it is detected. Here is a simple rule and the alert generated when that rule is triggered.

Rule:

alert tcp any any -> 192.168.0.1 40404 (msg:"Access to port 40404 on server"; sid:3000001;)

Alert:

[**] [1:3000001:0] Access to port 40404 on server [**]
[Priority: 0]
TCP TTL:64 TOS:0x0 ID:26265 IpLen:20 DgmLen:40
Seq: 0x5157710 Ack: 0xE5E57F9 Win: 0x200 TcpLen: 20
Looking at this you must have got some idea about the components of a rule, but we shall go into further depth and look at the various options that can be used in rules. So let us start with the above rule, which is quite a simple one.
A rule is divided into two parts:
+ Rule Header — the initial portion before the
parentheses
+ Rule Options — between the parentheses
The Rule Header
The header tells the engine which packets to look into based on the protocol, the IP and port addresses, and the direction in which the packet is travelling. The action to be taken upon detecting an event is also mentioned in the header. The following is the rule header format:

action protocol src_IP src_port direction dst_IP dst_port
Let's take a look at the first field in the rule header.

Action
This field shows the action that will be taken when a packet matches the rule. Typically you would see alert or log here, which are self-explanatory, but there are some more actions, like pass to ignore the packet, or activate to invoke another rule which has been idle due to its dynamic action.
Protocol
Snort will match the addresses and other options only if the packet is part of the protocol mentioned here, which could be ip, icmp, tcp or udp.

IP ADDR
For the rule to be triggered, the IP address specified here must match the IP address in the packet. Each packet has an IP header which has two IP addresses, source and destination. Whether the IP address given here is matched with the source or the destination depends on the direction field. This field can have a specific IP, a network address with CIDR notation, multiple IP addresses enclosed within square brackets and separated by commas, or the word any to ignore the IP address.
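The accepted forms of this field can be sketched as follows (addresses and SIDs are illustrative):

```
# A specific IP address
alert tcp any any -> 192.168.0.1 40404 (msg:"single IP"; sid:3000101;)

# A network address in CIDR notation
alert tcp any any -> 192.168.0.0/24 40404 (msg:"CIDR block"; sid:3000102;)

# Multiple addresses in square brackets, separated by commas
alert tcp any any -> [192.168.0.1,192.168.0.5] 40404 (msg:"IP list"; sid:3000103;)

# 'any' ignores the IP address field
alert tcp any any -> any 40404 (msg:"any IP"; sid:3000104;)
```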
We can use variables representing the IPs of servers or networks, which makes rules easier to manage. Variables are declared initially with the keyword var, and subsequently the variable name, preceded by a dollar sign, can be used in the rules.

Variable declaration:

var TEST_SERVER 192.168.0.1

Using the variable:

alert tcp any any -> $TEST_SERVER 40404 (msg:"Access to port 40404 on server"; sid:3000001;)
There are some predefined variable names for IPs and networks which are used in the default ruleset; some of them include $HOME_NET, $EXTERNAL_NET, $HTTP_SERVERS and $DNS_SERVERS.
PORT NO.
If the protocol used is TCP or UDP, there will be a respective header attached to the packet which will have two port addresses, source and destination. Again, for the rule to be triggered, the port numbers must also match. Whether this field will be matched with the source or destination port will depend on the direction field. A port number may be a single number or a range given with the : separating the lower and upper boundaries, e.g. 1:200 for port numbers from 1 to 200.
Just as in IP ADDR, variables can be used to represent port numbers of services. There are also some predefined variable names for ports which are used in the default ruleset; some of them include $HTTP_PORTS, $SHELLCODE_PORTS and $ORACLE_PORTS.
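A port variable can be declared and used in the same way as the IP variables; this is a sketch (the name and value are illustrative, and newer Snort versions use the portvar keyword for port variables):

```
# Variable declaration
portvar TEST_PORT 40404

# Using the variable
alert tcp any any -> 192.168.0.1 $TEST_PORT (msg:"access to test port"; sid:3000105;)
```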
DIRECTION
This field stands for the direction in which the packet must be travelling to match this rule. If the direction given is ->, then the IP and port addresses on the left side of this sign will be matched with the source IP and port, while the right side will be matched with the destination. In the case of the <> sign, the left side address will be checked in source as well as destination and, if found in one, the right side will be matched with the other.
Note
The exclamation sign ! can be used to negate the IP ADDR and PORT NO. fields.
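Putting the direction operators and negation together, a sketch (addresses and SIDs are illustrative):

```
# -> matches traffic from any source to the server
alert tcp any any -> 192.168.0.1 40404 (msg:"to server"; sid:3000106;)

# <> matches traffic in either direction between the two sides
alert tcp any any <> 192.168.0.1 40404 (msg:"either direction"; sid:3000107;)

# ! negates a field: traffic to the server on any port EXCEPT 40404
alert tcp any any -> 192.168.0.1 !40404 (msg:"unexpected port"; sid:3000108;)
```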
Coming back to our previous example,

alert tcp any any -> 192.168.0.1 40404 (msg:"Access to port 40404 on server"; sid:3000001;)
we are telling Snort to alert us if it sees a packet with any IP and any port number in the source (being the left side of the direction field ->) and the IP
address 192.168.0.1 and port number 40404 in
the destination. Now suppose there is a service
running on port 40404 on the server 192.168.0.1
and some client connected to the service and
transmitted and received several messages. If this
communication is captured by Snort, we surely
are going to receive the alert as given above. The
question here is: will Snort alert us while the connection is being established, after it has been successfully established, at the first message, or at all the messages sent and received? To know that, we need to look at the rule and its description below it, which clearly says that an alert is to be generated whenever any packet is seen to be going to the service. By this explanation it is very clear that we are going to receive a lot of alerts, as Snort will be looking at the communication packet by packet, and each packet going towards the server will trigger the alert.
Hence, we are going to get an alert right from when
the client requests connection to the service (SYN
packet), then the acknowledgement from the client
to server, each packet containing data (message) or
acknowledgement from the client to server, the finish
(FIN) and finish acknowledgement packets. For any
packets from the server to the client, there will be no alert; hence the SYN/ACK will not give an alert, nor will any messages received by the client.
So, does that serve the purpose? Well, it depends on what the objective was for writing such a rule. If you are just looking for attempts to connect to the service, or for successful connections, then this would create a lot of noise by giving too many alerts which are not required. However, it may be useful if you are trying to debug or troubleshoot a network application, or to analyze malware activity, for example if you want to see how frequently the application sends a keep-alive message while idle, or if you want to check on the activity of a bot.
To reduce the noise and get a single alert on
connection or when a specific message is being sent,
we need to use the rule options which are enclosed
within the parentheses.
The Rule Options
We have already used a couple of options in our
sample rule but those options did not have any role in
the detection logic. These options are called general or
metadata options which provide information about the
rule or event and do not have affect during detection
The description of the event which is displayed in
the alert comes from the rule option ms. This option
consists of the keyword ==: followed by an argument,
the message itself, separated by a .. Each option is
then followed by a ; to end the option and begin another,
if any. Some options do not require an argument hence
vIHARING snort
they also don't require the is still
required
. however, the -
sid and rev
The other option that we have in our rule is the sid option, which uniquely identifies Snort rules. Each rule in Snort has a unique sid which is provided by the author of the rule. To ensure that it does not conflict with the sids of the rules included in the Snort distribution, the sid must be above 1000000. Other community rules may have their own series, which must be avoided in custom rules. The sid is often accompanied by the rev option, which gives the revision number of the rule and should be incremented each time the rule is modified.
send(IP(dst="192.168.0.1")/TCP(dport=40404))
Making the rule more specific
We do not want alerts for each and every packet of the connection, as just the knowledge about the connection attempt would be good enough. In this case we can add another option which will make the rule trigger only at the connection request from the client side. As we know that a connection request packet has only the SYN bit set, we can instruct Snort to look for this bit. The option for this is flags, and it takes the initial letter of the required flags as the argument, like the one here:

alert tcp any any -> 192.168.0.1 40404 (msg:"Access to port 40404 on server"; flags:S; sid:3000001;)
Here we are saying: alert us only if the TCP header in the packet has the SYN bit exclusively set. This will add to all the previous criteria of protocol, IPs and ports as specified in the rule header. Now we will not get all the noise we were getting earlier, but there is still a problem which can give us false negatives, i.e. it may not alert us even if the connection is being requested on the said service. This is because the first packet in the TCP three-way handshake can have an ECN flag set along with the SYN flag. The two ECN flags were reserved earlier, hence they are represented by the numbers 1 and 2 rather than the initial letter. We now need to alter the flags argument to account for the possibility of an ECN flag being set. This can be done by adding the optional flags preceded by a comma, as given below:
alert tcp any any -> 192.168.0.1 40404 (msg:"Access to port 40404 on server"; flags:S,12; sid:3000001;)
The flags option affects the detection, hence it cannot be called a general or metadata option. The options affecting detection are payload or non-payload options. As the flags option refers to information in the TCP header and not the payload, it is a non-payload option.
Payload Options
Most rules do not just depend on the header fields
to identify an event of interest but they look into the
payload i.e. the data passed in the packets. The
headers are based on standard specifications, hence
Snort can identify each field of a header and lets us
refer to them by keywords (e.g. flags, tt, id), however,
the payload may have variable data depending on the
application used to generate it. To search for a specific
string in the payload, we can use the rule option content
which lets us specify the data to be searched in ascii
text or in hexadecimal.
Now, we also want to be alerted when the admin user
is logging in to the service, so we first analyze the traffic
using a sniffing tool, which could be Snort itself or some
other tool like tcpdump or Wireshark. From our analysis we
need to find the pattern for an admin login. The following is
the output from tcpdump when an admin login was in
process (see Listing 1).
As we see from the packet capture, there is a
string usr admin which is being used as a command to
log in as admin. This string can be used as our pattern
to detect admin logins. We can write the following rule to
search for the pattern:

alert tcp any any -> 192.168.0.2 21 (msg:"user admin login detected"; content:"usr admin"; sid:3000002; rev:1;)
Listing 1. Only TCP payload is displayed here

0201 0075 7372 2061 646d 696e 0001 7565
6224 3739 6a66 6076 6276 6866 6465 2633
3435 0070 6572 6d3a 7777 7778 00

0201 0075 7372 2061 646d 696e 0001 7838
3738 6866 6276 ...
This rule will trigger whenever the admin login
happens; however, there is also a good possibility of it
triggering at other times, when this string is not being
used as a command but as some data. This would be
termed a false positive. While choosing patterns,
we must keep in mind that our pattern should be as
unique as possible to the required event.

Reducing False Positives
We will analyze the packets more closely, looking for
more patterns to reduce the possibility of false positives.
Firstly, we can add the constant binary data preceding
and following our previous string. Binary data can be
represented in hexadecimal and enclosed within the |
characters.
If the binary data is not constant in all the admin
logins, then we need to look at other parameters. Let's
say the string usr admin always starts after three bytes
of some variable data. We can instruct Snort to skip
the first three bytes of the payload and start matching
the pattern string immediately from there. For this we
need to use two content modifiers: offset and depth.
Content modifiers affect how the content option works
and have to follow the required content option. For
example, (content:"usr admin"; nocase;) will look for the
string usr admin but ignore the case; here nocase is a
content modifier.

alert tcp any any -> 192.168.0.2 21 (msg:"user admin login detected"; content:"usr admin"; offset:3; depth:9; sid:3000002; rev:2;)

With this rule, Snort will skip the first 3 bytes of
the payload and start searching for the string usr
admin within the next 9 bytes. This way the rule is
also optimized, as the search will not be carried out
over the entire payload.

Relative Content Matching
To further reduce the possibility of false alarms, we
can match an additional pattern if the first match is
successful. We have noticed that there is a string perm:
which always comes 24 bytes after the usr admin
string (see Listing 2).
The rule logic to match the second pattern will be the
same; however, the keywords used here are different, as
now the match begins from where the previous match
ended. The content modifiers used for this new pattern
are distance, which is similar to offset, the only difference
being that distance is an offset from the end of the previous
match whereas offset starts from the beginning of the
payload; and within, which is similar to depth but works
in conjunction with distance.
alert tcp any any -> 192.168.0.2 21 (msg:"user admin login detected"; content:"usr admin"; offset:3; depth:9; content:"perm:"; distance:24; within:5; sid:3000002; rev:3;)

There are many more ways to enhance the rule;
maybe we can cover them in the next article. If you
have anything to say about this article, or have a
wishlist for the next one, you can write to me and I
shall surely consider it.
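The way offset/depth and distance/within narrow the search window can be sketched in Python. This is a rough illustration, not Snort's implementation; the helper find_in_window and the payload layout are invented to match the example above:

```python
# Rough sketch of Snort's offset/depth and distance/within content
# modifiers (not Snort internals): each pair restricts the search to a
# window of the payload, the second pair relative to the first match.
def find_in_window(payload, pattern, start, length):
    """Search for pattern only inside payload[start:start+length];
    return the index just past the match, or None if not found."""
    idx = payload[start:start + length].find(pattern)
    return None if idx == -1 else start + idx + len(pattern)

# 3 variable bytes, then "usr admin", then 24 bytes, then "perm:"
payload = b"\x02\x01\x00" + b"usr admin" + b"\x00\x01" + b"x" * 22 + b"perm:www\x00"

# content:"usr admin"; offset:3; depth:9 -> search only bytes 3..11
end1 = find_in_window(payload, b"usr admin", 3, 9)

# content:"perm:"; distance:24; within:5 -> window relative to end of match 1
end2 = None
if end1 is not None:
    end2 = find_in_window(payload, b"perm:", end1 + 24, 5)
```

Here end1 lands just past "usr admin" (byte 12) and end2 just past "perm:" (byte 41); change the amount of filler and the second match fails, exactly as the rule would.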
KISHIN FATNANI - CISSP, GCIH GOLD, GCFA,
CCSE R70, CEH, LPT
Kishin is the founder of K-Secure (www.ksecure.net), an IT
security training company. He has over 20 years of experience
in IT, his main focus being security, and conducts many security
trainings, including Packet Analysis, Snort Rule Writing,
Ethical Hacking, Forensic Analysis, Web Application Security,
Check Point and more. He has also been a SANS local mentor
for the GIAC certification GCFA and has contributed to
the development of Check Point courseware and certification.
Contact him at kishin@ksecure.net
Blog: kishinf.blogspot.com
Facebook: www.facebook.com/ksecure
Twitter: www.twitter.com/ksecure
The Challenge of Compliance
Q. What are the sorts of challenges large
corporations are facing with regards to
compliance?
KK: The challenges are many, and this is because the
compliance environment is becoming more complex.
Business operations are becoming more dynamic by the
day, regulators are becoming more watchful when it
comes to data security, and customers are becoming
more demanding. A very simple way to know you've
got a problem is to ask your CISO: How secure are we
as an organization? If you find him hemming and hawing or
giving some vague response, then you really don't have
a handle on your information security.
Q. How are companies dealing with this
challenging environment?
KK: What enterprises should aim to do is put in place
a framework which helps answer critical questions that provide
a clear idea of where the organization stands with
regards to information security. Our firm makes one
such compliance platform, called NX27K (http://
www.niiconsulting.com/products/iso_toolkit.html),
which helps companies comply with multiple
information security compliance frameworks and
regulations, such as ISO 27001, PCI DSS, SOX, etc.
Q. What should be the capabilities of such a
platform?
KK: The key feature of such a platform should be
flexibility, and this is the key principle on which NX27K
is built. You can modify what you see and put into your
asset register, modify your risk assessment formula,
the audit trail, the risk register, and almost every
aspect of the platform. It integrates with SharePoint and
Active Directory. Further, it adopts risk assessment
guidelines such as those from NIST and COBIT's Risk-IT.
Q. Where can customers go for more
information?
KK: Customers can email us at products@niiconsulting.com
or visit us at http://www.niiconsulting.com/
products/iso_toolkit.html for more information on this
product and our other services and products.
Collection and
Exploration of Large Data
Why the use of FastBit is a major step ahead when
compared with state-of-the-art tools based on
relational databases.
What you will learn...
+ Basics of building traffic monitoring applications
+ Data Collection and Exploration
What you should know...
+ A basic knowledge of the architecture and implementation of
traffic monitoring tools
+ A basic knowledge of TCP/IP
Collecting and exploring monitoring data is
becoming increasingly challenging as networks
become larger and faster. Solutions based on
both SQL databases and specialized binary formats do
not scale well as the amount of monitoring information
increases. In this article I would like to approach
the problem by using a bitmap database that allows
the implementation of an efficient solution for both data
collection and retrieval.
NetFlow and sFlow
NetFlow and sFlow are the current standards for
building traffic monitoring applications. Both are based
on the concept of a traffic probe (or agent in the sFlow
parlance) that analyses network traffic and produces
statistics, known as flows, which are delivered to a
central data collector. As the number of flows can be
extremely high, both standards use sampling
mechanisms in order to reduce the workload on
both the probe and the collectors. In sFlow the use of
sampling mechanisms is native to the architecture, so
it can be used on agents to effectively reduce the
number of flows delivered to collectors. This has a
drawback in terms of result accuracy, while still providing
results with quantifiable accuracy. With NetFlow, the
use of sampling (both on packets and flows) leads to
inaccuracy, so flow sampling is very
seldom used in NetFlow, and hence there is no obvious
mechanism for reducing the number of flow records
while preserving accuracy. For these reasons, network
operators usually avoid sampling data and have to
face the problem of collecting and analyzing a
large number of flows; this is often solved using a flow
collector that stores data in a SQL-based relational
database, or on disk in raw format for maximum
collection speed. Both approaches have pros and
cons; in general, SQL-based solutions allow users to
write powerful and expressive queries while sacrificing
flow collection speed and query response time,
whereas raw-based solutions are more efficient but
provide limited query facilities.
The motivation is to overcome the limitations of
existing solutions and create a new generation of
flow collection and storage architecture that exploits
state-of-the-art indexing and querying technologies. In
the following I would like to describe the design and
implementation of nProbe, an open-source probe and
flow collector that allows flows to be stored on disk
using the FastBit database.
Architecture and Implementation
nProbe is an open-source NetFlow probe that also
supports both NetFlow and sFlow collection and flow
conversion between versions (for instance, converting v5
to v9 flows). It fully supports the NetFlow v9 specification,
so it has the ability to specify flow templates (i.e. it
supports flexible NetFlow) that are configured at runtime
when the tool is started (Figure 1).
When used as a probe and collector, nProbe
supports flow collection and storage to either raw
files or relational databases such as MySQL and
SQLite. Support for relational databases has always
been controversial, as users appreciate the ability to
search flows using a SQL interface, but at the same
time flow dump to a database is usually
realistic only for small sites. The reason is that enabling
database support can lead to the loss of flows
due to the database overhead. There are multiple
reasons that contribute to this behavior,
including:

+ Network latency and multi-user database access
for network-based databases.
+ Use of SQL, which requires flow information to be
converted into text that is then interpreted by the
database, instead of using an API for directly writing
into the database.
+ Slow-down caused by table index updates during
data insertion.
+ Poor database performance when searching data
during data inserts.
Databases offer mechanisms for partially avoiding
some of the above issues, including:

+ Inserting data in batch mode instead of doing it in real
time.
+ Avoiding network communications by using file-based
databases.
+ Disabling database transactions.
+ Using an efficient table format optimized for large data
tables.
+ Not defining table indexes, therefore avoiding the
overhead of index updates, though this usually results
in slower data search times.
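The first two mitigations can be illustrated with Python's sqlite3 module, which happens to be a file-based database like the one discussed below. This is an illustrative sketch; the table layout and database name are invented:

```python
# Sketch of batch-mode insertion into a file-based database: one
# executemany call inside a single transaction instead of one
# INSERT-plus-commit per flow record.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in real use
conn.execute("CREATE TABLE netflow (src TEXT, dst TEXT, pkts INT, bytes INT)")

# A batch of fabricated flow records
flows = [("192.168.0.%d" % i, "10.0.0.1", 10, 1500) for i in range(1000)]

# Batch mode: the context manager wraps everything in one transaction
with conn:
    conn.executemany("INSERT INTO netflow VALUES (?, ?, ?, ?)", flows)

count = conn.execute("SELECT COUNT(*) FROM netflow").fetchone()[0]
```

With a per-row commit, the same 1000 inserts would each pay transaction overhead; batching amortizes it, which is exactly the effect described above.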
Other database limitations include the complexity of
handling large databases containing weeks of data,
and purging old data while still accommodating new
flow records. Many developers partition the database,
often creating a table per day that is then dropped
when no longer needed.
The use of file-based databases such as SQLite offers
a few advantages with respect
to networked relational databases:

+ It is possible to periodically create a new database
(e.g. one database per hour) for storing flows
received during that hour; this avoids
creating large databases.
+ According to some tests performed, the flow insert
throughput is better than with networked databases, but
still slower than a raw flow dump.

In order to both overcome the limitations of
relational databases and avoid raw flow dumps with
their limited query facilities, I decided to investigate
the use of column-based databases, and in
particular of FastBit.
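The core idea of a bitmap index, which FastBit builds on, can be sketched in a few lines of Python. This is a toy model, not FastBit's actual compressed-bitmap implementation; the class and column names are invented:

```python
# Toy bitmap index: one bit vector per distinct column value, so an
# equality predicate is a lookup and AND/OR predicates are bitwise ops.
from collections import defaultdict

class BitmapIndex:
    def __init__(self, values):
        self.bitmaps = defaultdict(int)  # value -> int used as a bit vector
        for row, v in enumerate(values):
            self.bitmaps[v] |= 1 << row

    def eq(self, value):
        """Bit vector of rows where the column equals value."""
        return self.bitmaps.get(value, 0)

    @staticmethod
    def rows(bits):
        return [i for i in range(bits.bit_length()) if bits >> i & 1]

# Index the protocol and destination-port columns of five flow records
proto = BitmapIndex(["tcp", "udp", "tcp", "tcp", "udp"])
dport = BitmapIndex([80, 53, 80, 22, 53])

# WHERE proto = 'tcp' AND dport = 80  ->  AND the two bit vectors
hits = proto.eq("tcp") & dport.eq(80)
print(BitmapIndex.rows(hits))  # rows 0 and 2
```

Because such queries touch only the index, not the row data, they stay fast as the table grows, which is the property the measurements below rely on.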
Validation and Performance Evaluation
Ihave used the FastBitlibrary forcreating an efficientflow
collection and storage system. This is to demonstrate
thal nProbe with FastBit is a mature solution that can
be used on in a production environment. In order to
evaluate the FastBit's performance, nProbe has been
deployed in two different environments:
Medium ISP
The average backbone traffic is around 250 Mbit/sec
(about 40K pps). The traffic is mirrored onto a Linux
PC (Linux Fedora Core 8 32 bit, kernel 2.6.23, Dual
Core Pentium D 3.0 GHz, 1 GB of RAM, two SATA II
disks configured with RAID 1) that runs nProbe in probe
mode. nProbe computes the flows and saves them on
disk using FastBit. In order to reduce the number of
flows, the probe is configured to save flows in NetFlow
v9 bi-directional format with a maximum flow duration of
5 minutes. On average the probe generates 36 million
flows/day. Each FastBit partition stores one hour of
traffic. Before deploying nProbe, flows were collected
and stored in a MySQL database.
Large ISP
nProbe is used in collector mode. It receives flows
from 8 peering routers, with a peak flow export of 85 K
flows/sec. The collection server is a fast machine with
8 GB of memory, running Ubuntu Linux 9.10 server
64 bit. Each FastBit partition stores five minutes
of traffic and occupies about 5.8 GB of disk space.
A second server, running Ubuntu Linux 9.10 server
64 bit with 24 GB of memory, is used to query the flow
data. The FastBit partitions are saved to an NFS
mount on a local storage server. Before deploying
Figure 1. nProbe architecture: packet capture and flow export, with dumps to raw files / MySQL / SQLite / FastBit
nProbe, flows were collected using nfdump, and each
month the total amount of flow dumps exceeded 4 TB
of disk space. The goal of these two setups is to both
validate nProbe with FastBit on two different setups
and compare the results with the solutions previously
used.
FastBit vs Relational Databases
Let's compare the performance of FastBit with that
of MySQL (version 5.1.40, 64 bit), a popular relational
database. As the host running nProbe is a critical
machine, in order not to interfere with the collection
process, two days' worth of traffic was dumped in
FastBit format and then transferred to a Core2Duo 3.06
GHz Apple iMac running MacOS 10.6.2. Moving FastBit
partitions across machines running different operating
systems and word lengths (one is 32 bit, the other 64 bit)
did not require any data conversion. This is a desirable
feature, as over time collector hosts can be based on
various operating systems and technologies; hence flow
archives can be used immediately without any data
conversion. In order to evaluate
how FastBit partition size affects the search speed,
hourly partitions have been merged into a single daily
partition. In order to compare both approaches, five
queries have been defined:
+ Q1: SELECT COUNT(*), SUM(PKTS), SUM(BYTES) FROM NETFLOW
+ Q2: SELECT COUNT(*) FROM NETFLOW WHERE L4_SRC_PORT=80 OR L4_DST_PORT=80
+ Q3: SELECT COUNT(*) FROM NETFLOW GROUP BY IPV4_SRC_ADDR
+ Q4: SELECT IPV4_SRC_ADDR, SUM(PKTS), SUM(BYTES) AS s FROM NETFLOW GROUP BY IPV4_SRC_ADDR ORDER BY s DESC LIMIT 1,5
+ Q5: SELECT IPV4_SRC_ADDR, L4_SRC_PORT, IPV4_DST_ADDR, L4_DST_PORT, PROTOCOL, COUNT(*), SUM(PKTS), SUM(BYTES) FROM NETFLOW GROUP BY IPV4_SRC_ADDR, L4_SRC_PORT, IPV4_DST_ADDR, L4_DST_PORT, PROTOCOL
Table 2. FastBit vs MySQL query speed (results are in seconds)
FastBit partitions have been queried using the fbquery
tool with appropriate command line parameters. All
MySQL tests have been performed on the same
machine with no network communications between
client and server. In order to evaluate the influence of
MySQL indexes on queries, the same test has been
repeated with and without indexes.
Data used for testing was captured on
Oct 12th and 13th (~68 million flows) and contained
a subset of NetFlow fields (IP source/destination, port
source/destination, protocol, begin/end time). The
table below compares the disk space used by MySQL
and FastBit. In the case of FastBit, indexes have been
computed on all columns.
Merging FastBit partitions does not usually improve
the search speed; instead, queries on merged data
require more memory, as FastBit has to load a larger
index into memory. In terms of query performance,
FastBit is far superior to MySQL, as shown
in Table 2:

+ Queries that require access only to indexes take
less than a second, regardless of the query type.
+ Queries that require data access are at least an
order of magnitude faster than on MySQL.
+ Index creation on MySQL takes many minutes,
which prevents its use in real life when importing
data in (near-)realtime; the indexes also take
a significant amount of disk space.
+ Indexes on MySQL do not speed up queries,
contrary to FastBit.
+ Disk speed is an important factor for accelerating
queries. In fact, running the same test twice with
data already cached in memory significantly
decreases the query time. Tests using RAID 0
have also shown improved performance.
Open Issues and Future Work
Tests on various FastBit configurations have shown
that the disk is an important component that has a
major impact on the whole system. | am planning to
explore the use of solid-state drives in order to see if
the overall performance can benefit from it performance
increases.
A main limitation of FastBit is the lack of data
compression: it currently compresses only indexes, but
not data. This is a feature I plan to add, as it allows
disk space to be saved, in addition to reducing the
time needed to read the data.
This article is the basis for developing interactive data
visualization tools based on FastBit partitions. Thanks
to recent innovations in web 2.0, there are libraries such
as the Google Visualization API that allow data rendering
to be separated from the data source. Currently we are
extending nProbe by adding an embedded web server
that can perform FastBit queries on the fly and return
query results in JSON format. The idea is to create an
interactive query system that can visualize both tabular
data (e.g. flow information) and graphs (e.g. average
number of flows on port X over the last hour) by
performing FastBit queries. This way the user does not
have to interact with FastBit tools at all, and can focus
on data exploration.
The use of FastBit is a major step ahead when
compared with state-of-the-art tools based on both
relational databases and raw data dumps. When
searching datasets of a few million records,
the query time is limited to a few seconds in the worst
case, whereas queries that just use indexes are
completed within a second. The consequence of this
major speed improvement is that it is now possible
to query data in real time and avoid costly counter
updates every second, as with bitmap indexes it
is possible to produce the same information when
necessary. Finally, this work paves the way for the
creation of new monitoring tools on large data sets
that can interactively analyze traffic data in near-real
time, contrary to what usually happens with most tools
available today.
This work is distributed under the GNU GPL license and
is available at the ntop home page http://www.ntop.org/
nProbe.html. The nBox appliance, embedded with
pre-installed ntop and nProbe software, can be
requested at www.wuerth-phoenix.com/nbox.
LUCA DERI, FOUNDER OF NTOP
Luca Deri was born in 1968. Although he was far too young to
remember, the keywords of that year were freedom, equality,
free thinking, revolution. In the early 70s many free radio stations
were born in Italy because their young creators wanted to
have a way of spreading their thoughts, ideas and emotions, and
telling the world that they were alive 'n kickin'. The Internet today
represents for him what free radio represented in the 70s. He
wrote his PhD thesis on Component-based Architecture for Open,
Independently Extensible Distributed Systems. Luca Deri is
the founder of ntop.
ADVANCED
Improving your custom
Snort rules
While it is easy to create a custom Snort rule, do you know
if you are actually making a good one or not? This article
introduces some common mistakes | find in custom Snort
rules and the potential implications of those mistakes.
What you will learn...
+ How to measure the performance of your custom Snort rule
set, and how to identify the "bad performers"
+ How to use Snort's fast_pattern keyword to your advantage
+ How to add more descriptive information to your rules to
improve the analysis process
What you should know...
+ The Snort rule language
+ Abasicknowledge of TCP
+ How to install Perl modules via CPAN on a *NIX operating
system
The Snort IPS engine has changed substantially
over the last ten years. Packet processing
speed has improved, IP defragmentation
and stream reassembly functions have evolved, the
connection and state tracking engine has matured,
but there is one thing that keeps getting left behind:
custom rule-sets.
With each revision of Snort, new features are added
that enhance the detection capability and aid the packet
processing performance of the Snort engine. Those
enhancements not only open new avenues for detecting
the latest bad stuff out there, they create an opportunity
to improve the performance of older legacy rules you
may have created many years ago. Unless your rules
make good use of the current Snort language and are
kept up-to-date, what once used to be a good rule could
in fact turn bad.

What is a bad Snort rule anyway?
Because this article focuses on finding and fixing bad
rules, before we look at any of them it would be wise for
me to define what I personally call a bad rule.

+ A rule that cannot or will not perform the function
it's made for, i.e. it won't catch the attack/event that
the rule tries to find (false negative)
+ A rule that catches the wrong stuff (false positive)
+ A rule that has little meaning or value to the analyst
(junk alerts)
+ A rule that wastes processing time and effort to
achieve its goal (performance hog)

The first two points in the list (dealing with false
positives and false negatives) will always need to be
addressed on a case-by-case basis; however, when it
comes to the last two points there are some important
concepts you can follow that will substantially improve
custom rules regardless of what attack or event they
are designed to catch.
There are many books and online resources available
that discuss the Snort rule language in depth, and a
full introduction is far out of the scope of this article.
However, to enable me to present some common rule
problems, I need to introduce the basic building blocks
of a Snort rule.
A Snort rule is made up of two parts, a rule header and
a rule body. The rule body follows the rule header and is

Listing 1. An example "bad" Snort rule

alert tcp $HOME_NET any -> $EXTERNAL_NET 80 \
(msg:"0xdeadbeefcafe"; flow:established,to_server; \
content:"0xdeadbeefcafe"; sid:1000001; rev:1;)
surrounded by parentheses. The header is pretty easy
to understand, as it reads close to natural language.
The rule header consists of an action, a protocol
specification, and the traffic that is to be inspected.
The rule body (shown in Listing 1 in blue) is made
up of a selection of rule options. A rule option consists
of a keyword followed by one or more arguments. For
example, in the above rule there is a content keyword
with an argument of 0xdeadbeefcafe.
This rule instructs Snort to look for the text
0xdeadbeefcafe in all packets flowing out of the network
to TCP port 80 that are part of an established TCP
session.
Giving rules more meaning to an analyst
The above rule in Listing 1 is a simple example of a bad
rule. Regardless of how much processing load it may
introduce to an IDS engine, or how many alerts it could
generate, just by looking at the rule source we can see
it's bad. This rule lacks some of the most important
information that could give any value to its existence
and operation: an understanding of what's happening
on the network, and what it means if it generates an
alert.
When describing or teaching the Snort rule language,
I like to group rule options together to describe them
in three categories based on their function within the
rule.

+ Detection Options: Keywords that test for the
presence of specific things in network traffic. This
could be any type of text, binary data, packet
header values, regular expressions, or decoded
application data. These are the keywords that
control if an alert is generated on a packet or not.
Example Snort keywords: content, pcre, ipopts, ttl,
flowbits, flow
+ Metadata Options: These are keywords that are
interpreted by the engine for alert organization.
Example Snort keywords: sid, metadata, rev
+ Analyst Options: Keywords that are used by the
rule writer to convey information to an event analyst
who is investigating the event. Example Snort
keywords: msg, classtype, reference, priority
While writing a rule it is important to understand that
any event it generates may need to be understood by
other people. If a security analyst is presented with an
event like the below, which was generated from our
previous bad rule, what do they do to respond to the
event?

[**] [1:1000001:1] 0xdeadbeefcafe [**] {TCP} 192.168.0.2:12854 -> ...

Well, after the analyst has likely scratched their head
for a moment, and then wondered what on earth
0xdeadbeefcafe is, they will probably end up Googling
for 0xdeadbeefcafe in an effort to understand what this
alert means to them.
Is this a serious event?
Is it somebody else's problem?
Should I start panicking?
It is common to have a different group of people
researching and writing rules vs. those who will
deal with the security events they may raise, and if
this isn't the case for you today, it may well be in the
future. At the time of rule creation only the rule writer
really knows what she or he is looking for, and the
implications for the network if the traffic is found. It is
therefore critical for this information to be passed on to
the event analyst within the rule itself. Unless a rule is
correctly explained, how can a writer expect an analyst
to be able to react accordingly?
Let's expand on my simple 0xdeadbeefcafe example
from earlier by providing some more theoretical
scenario information (Listing 2).
Note the addition of three new rule options: a
classification type, an overriding priority qualification,
and a couple of useful references. With these extra
rule options added, an analyst dealing with the event
now knows that 0xdeadbeefcafe is in fact a low-priority
Trojan, associated with CVE-2010-99999, and related
to a specific company's product.
These seemingly minor additions make massive
returns in respect of the security event analysis and
remediation process. Sometimes the simplest changes
provide the greatest value.
Identifying and optimizing slow rules that are
wasting your CPU cycles
So while fixing the analyst information problem is
pretty simple, identifying suboptimal rules in terms of
Listing 2. The bad rule, now improved with far more
information to help an analyst

alert tcp $HOME_NET any -> $EXTERNAL_NET 80 \
(msg:"0xdeadbeefcafe Trojan check-in"; \
flow:established,to_server; content:"0xdeadbeefcafe"; \
classtype:trojan-activity; priority:3; \
reference:cve,2010-99999; \
reference:url,mycompany.com; \
sid:1000001; rev:2;)
computational overhead is a little more of a technical
process. To make this challenge possible, Snort can
kindly provide us feedback on how the system functions
in relation to the current configuration and the network
traffic being inspected.
There are a couple of useful configuration lines that
can be added to your snort.conf to provide performance
feedback about how the detection engine is performing.
Today I will focus on the output provided by config
profile_rules.
Adding this config profile_rules directive to your
snort.conf will enable performance profiling of your
Snort rule-set. At exit, Snort will output to STDOUT
a list of the top N (specified here as ten) worst
performing rules, categorized by the total time taken
to check packets against them. This data can also be
written to a text file of choice, and many other sort
methods are available. Check the Snort manual for full
details.
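A profiling directive of this shape can be added to snort.conf; the print count and sort method below are one reasonable choice, not the only options:

```
config profile_rules: print 10, sort total_ticks
```

This prints the ten worst rules sorted by total processing time; other sort keys such as avg_ticks or checks are also accepted.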
Note
Snort must be compiled with --enable-perfprofiling to
enable the performance profiling capability.

Before starting to inspect the performance output,
it is vital to understand that all of the data we see is
dependent on two distinct variables:

+ The current configuration running (including the
rule-set)
+ The network traffic that is inspected by the
detection engine
When testing and tweaking anything as complex as
an IPS rule-set for performance, I find it imperative
to isolate and work on only a single variable at a time.
By focusing my tests on a large sample of network
traffic stored in PCAP files that is representative of
where the sensor operates, I can tweak my rules for
performance against this static data-set. When I think I
have optimally tweaked my rules, I can then move on to
test against live traffic.
An example of rule profiling output is shown in Listing
3, and each data column is explained below.
+ Num: This column reflects the rule's position
in regard to how badly the rule performs.
Here the top entry (number 1) reflects the rule that is
responsible for consuming the most processing
time.
+ SID, GID, Rev: The Snort ID, Generator ID, and
Revision number of the rule. This is shown to help
us identify the rule in question in our rule-set.
+ Checks: The number of times rule options were
checked after the fast_pattern match process (yes,
that bit is bold because it is important).
+ Matches: The number of times all rule options
matched, therefore traffic matching the rule has been
found.
+ Alerts: The number of times the rule generated
an alert. Note that this value can be different from
Matches due to other configuration options such as
alert suppression.
+ Microsecs: Total time taken processing this rule
against the network traffic.
+ Avg/Check: Average time taken to check each
packet against this rule.
+ Avg/Match: Average time taken to check each
packet that had all options match (the rule could
have generated an alert).
+ Avg/Nonmatch: Average time taken to check each
packet where an event was not generated (the amount
of time spent checking a clean packet for bad stuff).

The two values a rule writer has some level of control
over are the number of checks, and how long it took to
perform those checks. Ideally we would like to have low
figures in all columns, but decreasing the Checks count
is the first important part of rule performance tuning. To
be able to tweak our rule to affect this value, we need to
first understand exactly what Checks represents.
Introducing the fast_pattern matcher
When Snort decides what rules need to be evaluated
against a network packet, it goes through two stages
before starting its in-depth testing functions.
Listing 3. Sample Snort rule profiling output
1) Protocol, port number and service identification
Snort optimizes the rule-set into protocol, port and service (application protocol) based detection rule-buckets. Note that service based buckets are only used when Snort is compiled with --enable-targetbased and an attribute table is loaded.
For example, if inbound traffic is destined to arrive at TCP:80 (HTTP), there isn't much point in running it through the rules associated with SMTP (TCP:25). The packet is assessed against the rules in the TCP:80 bucket. The same decision is also made in relation to source port and service metadata.
Snort also has an extra rule-bucket for the any any rules. These are the rules that use the value any as both the source and destination port numbers. All packets are also checked against the any any rules after being assessed against their particular port/service based rule bucket.
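The bucket dispatch described above can be sketched in a few lines (a toy illustration with invented rule names, not Snort internals): look up the bucket for the packet's destination port, then append the any any bucket.

```python
# Toy sketch of Snort's rule-bucket dispatch (illustrative names only):
# rules are grouped by destination port, and every packet is checked
# against its port bucket plus the "any any" bucket.

RULE_BUCKETS = {
    80: ["http-rule-1", "http-rule-2"],   # TCP:80 rules
    25: ["smtp-rule-1"],                  # TCP:25 rules
}
ANY_ANY_RULES = ["any-any-rule-1"]

def candidate_rules(dst_port):
    """Rules worth evaluating for a packet with this destination port."""
    return RULE_BUCKETS.get(dst_port, []) + ANY_ANY_RULES

assert candidate_rules(80) == ["http-rule-1", "http-rule-2", "any-any-rule-1"]
assert candidate_rules(443) == ["any-any-rule-1"]
```

A packet on an unlisted port is still tested against the any any rules, which is why writing any any rules carries a performance cost for all traffic.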
2) Packet content (fast_pattern check)
After identifying which rule bucket(s) the packet should be assessed against, a pre-screening content check known as the fast_pattern match is applied for all rules in the bucket(s).
For any Snort rule to raise an event, all rule options in that rule must match.
Applying a fast_pattern check process allows Snort to quickly test packets for the presence of a static content string (a single content: value) required to generate an event. The goal of this test is to quickly identify all packets that have any possibility of alerting after all of the rule options are tested. If a packet doesn't match the fast_pattern check, there is absolutely no point in running more computationally intense checks against it. Because the fast_pattern match has failed, we know that at least one of the rule options will not match, and an alert will never be generated.
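The principle can be illustrated outside Snort with a toy sketch (the rule structure and names here are invented for illustration, not Snort's actual code): screen each packet for one cheap, distinctive content string first, and only run the remaining, more expensive option checks on packets that pass.

```python
# Toy illustration of fast-pattern pre-screening (not Snort internals).
# Each "rule" has a cheap pre-filter string and a list of expensive checks.

def make_rule(fast_pattern, option_checks):
    """A rule only fires if every option check matches, so a packet that
    lacks the fast_pattern string can be discarded immediately."""
    return {"fast_pattern": fast_pattern, "options": option_checks}

def evaluate(rule, payload):
    # Stage 1: cheap substring pre-screen (the fast_pattern check).
    if rule["fast_pattern"] not in payload:
        return False  # no option set can fully match; skip deep checks
    # Stage 2: full, more expensive option evaluation.
    return all(check(payload) for check in rule["options"])

rule = make_rule(
    "CST-ID-001",                       # rare token: a good fast pattern
    [lambda p: "Example.com" in p,      # common token: poor pre-filter
     lambda p: p.startswith("GET ")],
)

assert evaluate(rule, "GET /doc?tag=CST-ID-001 HTTP/1.1 Host: Example.com")
assert not evaluate(rule, "GET /index.html Host: Example.com")
```

The second assertion shows the payoff: a packet without the rare token never reaches the per-option checks at all.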
Number of Checks
This brings us back to the important Checks value. The number of checks is the number of times a rule is assessed after both the protocol/port/service identification and fast_pattern processes are complete.
The more irrelevant packets that we can exclude with these two steps, the lower the number of checks will be, the more optimal and focused the rule will be, and the less time will be wasted performing in-depth assessment of packets that will never generate an event.
Identifying the optimal content check for
fast_pattern
By default Snort selects the string from the longest content keyword (measured in number of characters) for use in the fast_pattern test. The design rationale behind this is simple: the longer a content check, the more unique it is likely to be, and therefore fewer packets will inadvertently match it. Although this is commonly the case, there are times when the rule writer will have a different opinion based on knowledge and experience.
Looking at the rule in Listing 4, Example.com is the longest content check (eleven characters), and by default it will be used for the fast_pattern check. The other content, CST-ID-001, is however less likely to be found in network traffic, especially if your company name just so happens to be Example.com. It is therefore wise to tell Snort to use this better value for the fast_pattern check with the fast_pattern modifier keyword.
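Listing 4 itself is not legible in this copy of the article; a rule of the kind the text describes (all names, ports and the SID are illustrative, not the original listing) might read:

```
# Sketch of a honey-token rule like the one described (not the original
# Listing 4). fast_pattern forces the rarer string into the pre-filter.
alert tcp $HOME_NET any -> $EXTERNAL_NET any \
    (msg:"Potential Data Leak - Honey Token CST-ID-001"; \
    content:"Example.com"; nocase; \
    content:"CST-ID-001"; fast_pattern; \
    classtype:policy-violation; sid:1000001; rev:1;)
```

Without the fast_pattern modifier, Snort would pre-screen on the longer but far more common Example.com string.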
Following the fast_pattern check, each rule option is then tested in the order that it appears in the rule. Finally the source and destination IP addresses are tested to check if they match those defined in the rule header. Only if every check made is successful is the rule action (such as alert) taken.
If any of the tests fail to match, no further checks are made on the packet against that rule; it is therefore advisable to place quick, cheap tests early in the rule.
Listing 5. dumbpig.pl in action
[tool output illegible in source]
References:
http://snort.org
http://vrt-sourcefire.blogspot.com/
http://leonward.wordpress.com/dumbpig/
DumbPig can be found at http://code.google.com/p/dumbpig/; it is written in Perl and uses Richard Harman's very useful Parse::Snort Perl module to parse a rule-set.
Listing 5 shows the tool in action. In this example dumbpig.pl is reading in a rule file and has identified two problems with rule sid:1000008.
The first problem shown in Listing 5 is, in my experience, very common. A rule-writer has created a rule for DNS traffic, which is commonly found on both TCP and UDP port 53. Rather than create two rules (one for TCP and one for UDP), the rule writer has used the IP protocol in the Snort header, but has also specified a port number. Because the IP protocol doesn't have ports (port numbers are a transport layer construct), this value is ignored. The end result is that every packet regardless of port will be checked for this content. This is very sub-optimal to say the least. The
second problem with this rule is that it is missing extra analyst data that would provide more value to any alerts raised.
Summary
The Snort rule language is simple to pick up, and as with any other language it is easy to fall into some bad habits. Hopefully this article has introduced some simple suggestions that will improve your in-house IDS rules in respect of their performance and usefulness.
LEON WARD
Leon is a Senior Security Engineer for Sourcefire based in the UK. He has been using and abusing Snort and other network detection technologies for about ten years and hates referring to himself in the third person. Thanks go to Alex Kirk (Sourcefire VRT) and Dave Venman (Sourcefire SE) for sanity checking this document.
Insecure Websites in DMZ
Still Pose a Risk
Level of Trust
Normally, a website is considered to be part of the untrusted outer perimeter of a company network infrastructure. Hence, system administrators usually put a web server in the DMZ part of a network and assume the information security risk from the website to the network is mitigated. However, several industry security standards have been imposed to protect public infrastructure such as web servers and name servers in addition to the services directly subject to the standard's application scope. Why is it so important to protect your website even if it is not closely connected to your critical data infrastructure?
Social Impact
Humans are the weakest link in the chain of a company's security. The experience gathered during more than 5 years of penetration testing shows that almost no large-scale company can resist a social-engineering attack vector. In companies with more than 30 employees, a penetration tester or a real intruder can pretext, deceive, and easily persuade at least 10% of the available employees to open an attachment or follow a link to a malicious website containing an exploit pack and a viral payload. Basic countermeasures include restricting network access to all but whitelisted websites, which include your own website, or simply educating the employees. So, what happens when an intruder gains access to the website? The following list highlights what can be done with a web server located in the DMZ:
• Inject an exploit pack and payload into the main page or create malicious pages
• Send spam and scam letters to the company employees inviting them to visit a malicious page at the website
• Install a rootkit and sniffer to maintain access and capture all password inputs by system administrators or website maintainers
• Modify links from legitimate sites to malicious ones, for instance, to redirect an Internet banking link to http://ibank.y0urbank.ru instead of http://ibank.yourbank.com

On the 'Net
[1] OWASP Testing Guide - http://www.owasp.org/index.php/Category:OWASP_Testing_Project
[2] OWASP Development Guide - http://www.owasp.org/index.php/Category:OWASP_Guide_Project
[3] Informzaschita SC (QSA, PA-QSA) - http://www.infosec.ru/en
+ Pivot client-side payloads through the web server in
the case of networks with restricted Internet access.
This list includes only those risks related to successful network penetration. In addition, there are business image risks such as defacement, modification of sensitive public information (e.g. exchange rates on a bank's website, payment credentials on a charity's website, phone numbers, etc.), or denial of service by deleting everything and bringing the web server down.
Ways to Protect
There are several methodologies for assessing website security and mitigating risks connected with the website. One of the most popular is the OWASP Testing Guide [1], which includes more than 300 checks and addresses almost all known web vulnerabilities. The PCI Data Security Standard refers to the top 10 most widespread software vulnerabilities, the OWASP Top Ten, and is a basic requirement for any website dealing with credit card payments. For developers, there is also a Development Guide [2], the goal of which is to prevent mistakes affecting security.
For companies willing to protect their websites and networks, Informzaschita offers the following services:

• Complete website assessment according to the OWASP Testing Guide (300+ checks)
• Express assessment according to the OWASP Top Ten and deployment of Web Application Firewalls for small businesses or companies falling under PCI DSS requirements
• Complete PCI PA-DSS assessment for companies developing payment applications
• Automated security web and network scanning

MARAT VYSHEGORODTSEV
Information Security Assessment Specialist at Informzaschita
[email protected], (+7 495) 980-2345
www.infosec.ru/en
An Unsupervised IDS
False Alarm Reduction System - SMART
Signature-based (or rule-based) network IDSs are widely used in many organisations to detect known attacks (Dubrawsky, 2009). A common misconception about IDSs is that they are plug-and-play devices that can be installed and then allowed to run autonomously. In reality, this is far from the truth.
What you will learn...
• The limitations of IDS tuning
• The basic concepts and characteristics of the SMART system
• The benefits of SMART
What you should know...
• Basics of Intrusion Detection Systems
• Basic syntax of Snort rules
The need for alarm reduction
Depending on the quality of their signatures, IDSs can generate a significant volume of false alarms. Even when the alarms are genuine, the sheer number that can be generated by aggressive attacking traffic (e.g. Denial of Service attacks, vulnerability scans, malware propagation) can be a problem. This is especially the case if the IDS has been configured to generate an alert each time a rule matches. Tuning is often necessary to address the problems of superfluous and false alarms. However, if done recklessly, it can increase the risk of missing attacks. On another note, when attacks span multiple stages, it would be useful to have a mechanism for aggregating and grouping together all alarms relating to the same activity. This not only enhances detection, but also the efficiency of analysing and validating alarms.
In order to address the issues above, alarm reduction systems are needed. Alarm reduction is a process that analyses the intrusion alerts generated by an IDS, filters the false alarms and then provides a more concise and high-level view of detected incidents.
Figure 1. Framework of the false alarm classification model
SMART
(SOM K-Means Alarm Reduction Tool)
SMART is an automated alarm reduction tool designed for the Snort IDS. It extends the functionality of the Basic Analysis and Security Engine (BASE) (BASE, 2009), a popular front-end for Snort, by providing a more holistic view of detected alarms. The system comprises two stages; the first stage is responsible for removing superfluous alarms, whilst the second stage distinguishes true from false alarms by observing trends in their occurrence. The idea behind SMART is not to replace human analysts, but to inform alarm validation and IDS tuning by identifying the most relevant candidates (alarms and rules) for attention.
Figure 1 depicts the proposed classification model. Specifically, it shows that data is collected from IDS sensors and stored in a database. The system then retrieves the data from the database and classifies it by extracting attributes from the alerts and feeding them into the unsupervised SOM-based clustering system.
The classification process consists of the following
phases:
1. Feature extraction - The system uses several attributes extracted from the alert database, which are considered effective for correlating alerts generated by a single activity. The extracted data are then normalised, since their values vary depending upon the type of attribute used.
2. Alarm aggregation (first stage correlation) - Given a set of input vectors from the first phase, the system is trained unsupervised in the second phase to map the data so that similar vectors are reflected in their arrangement. The distance between two input vectors is presented on the map not by their absolute dissimilarity (which can be calculated), but by the relative differences of the data properties. The objective is to group alerts from the same attack instance into a cluster.
3. Cluster analysis - The result of the classification is further evaluated to attain a set of attributes from each cluster created in the previous phase (i.e. the first stage correlation). Building accurate and efficient classifiers largely depends upon the accuracy of the attributes, which are used as the input data for the classification. Seven alert attributes (shown in Table 1) were chosen to represent the value of each input vector in the next classification (i.e. second stage correlation). Two of the seven attributes, namely the frequency of alarm signatures and the average time interval between the alerts each day, were computed. These features are considered to be the most relevant in terms of influencing the magnitude of the alert signatures.
4. Alert classification (second stage correlation) - The final classification is carried out based upon the attributes extracted in the third phase. The main objective of this stage is to label the alerts as true and false alarms, thus reducing the number of alerts before they are presented to the administrator.
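SMART's actual implementation uses a SOM followed by K-means and is not reproduced in the article; as a rough, stdlib-only sketch of the aggregation idea (feature scaling followed by centroid clustering, with invented feature names), the grouping step might be approximated like this:

```python
import math
import random

# Toy sketch of alarm aggregation: normalise alert features, then cluster
# with plain k-means (SMART itself uses a SOM followed by K-means).

def normalise(rows):
    """Min-max scale each feature column into [0, 1]."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) or 1.0 for c in cols]
    return [[(v - lo[i]) / span[i] for i, v in enumerate(r)] for r in rows]

def kmeans(points, k, iters=50, seed=1):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            [sum(col) / len(col) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Invented example features per alert: [signature id, src port, alerts/hour]
alerts = [[466, 0, 250], [466, 0, 240], [1852, 80, 750], [1852, 80, 760]]
clusters = kmeans(normalise(alerts), k=2)
assert sorted(len(c) for c in clusters) == [2, 2]
```

The two signatures end up in separate clusters, mirroring the first-stage goal of grouping alerts from the same activity together.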
The underlying architecture of our proposed alarm classifier, which illustrates the four phases of the classification process, is presented in Figure 2.
Evaluating SMART
In order to evaluate the effectiveness of SMART, a set of experiments was conducted. The dataset used for the experiments was collected on a public network (100-150 MB/s) over a period of 40 days, logging all traffic to and from the organisation's web server. It contained 99.9% TCP and 0.1% ICMP traffic. The effectiveness of SMART in reducing false alarms is compared against conventional tuning methods.
False alarm rate by tuning
Tuning is a popular technique for reducing false alarms, and it is based on adapting the IDS configuration to suit the specific environment where the IDS is placed (Chapple, 2003). This often involves modifying preprocessors, removing (or modifying) rules prone to false alarms, or modifying variables.
The first phase of the experiments involved running Snort in its default configuration and validating the generated alarms to identify the most suitable candidates for tuning. The following three rules were the most suitable, as they triggered the most false alarms:
WEB-IIS view source via translate header
This event is categorised as web application activity targeting the Microsoft IIS 5.0 source disclosure vulnerability (Snort, 2010a). Surprisingly, this signature alone accounted for 59% of the total alerts, with approximately 1,970 alerts generated per day by this event. Although the signature was created to detect a Microsoft IIS source disclosure vulnerability exploitation attempt,
Table 1. The interpretation and data collection methods of the alarm attributes for second stage correlation

ALERT FEATURE     DESCRIPTION
No of alerts      Total number of alerts grouped in one cluster
No of signatures  Total number of signature types in a cluster
Protocol          Type of traffic from the event triggering the alerts
Port number       Indicates if the alarm contains a well-known port number or unknown service ports
Alert priority    Criticality of the alerts. There are 3 types of alert priority, namely 1st, 2nd and 3rd; if multiple signatures are found in a cluster, the priority values for each signature could be added together
Time interval     Time interval between events from a particular signature
No of events      The number of events* in which a particular alert signature is triggered within a day

*One event equals a number of alerts from a single signature, triggered by a particular activity.
in this case all generated alarms were related to normal traffic.
When examining the Snort rule, it does not seem proficient enough to detect this type of event. It appears to be very loosely written, searching only for a particular string in the packet payload (in this case, Translate: f). Since Translate: f is a valid header used in WebDAV applications (WebDAV, 2001), the rule tends to trigger a vast volume of alarms.
If the rule is modified to also search for the GET command in the content, it is likely that false alarms would be reduced. The attack is launched by requesting a specific resource using the HTTP GET command, followed by Translate: f as a header of the HTTP request. In this case, tuning can be performed by modifying the signature rule:
WEB-IIS view source via translate header - tuned signature rule
[rule listing illegible in source; it carried bugtraq references]
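The tuned rule listing is not legible in this copy of the article. As a hedged sketch only: the added content:"GET" check is the modification described in the text, while the remaining options follow the stock translate-header signature (SID 1042) from memory and may differ from the original listing:

```
# Sketch, not the article's original listing. The content:"GET" check
# is the tuning described in the text; other options follow the stock
# WEB-IIS translate-header rule (SID 1042) and are not guaranteed exact.
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS \
    (msg:"WEB-IIS view source via translate header"; \
    flow:to_server,established; \
    content:"GET"; depth:3; nocase; \
    content:"Translate|3A| F"; nocase; \
    reference:bugtraq,1578; reference:cve,2000-0778; \
    classtype:web-application-activity; sid:1042; rev:13;)
```

Requiring the GET verb up front means WebDAV requests that merely carry the Translate: f header no longer match.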
Indeed, when Snort was run again with the modified ruleset, this rule change effectively eliminated 95% of the initial false alarms.
WEB-MISC robots.txt access
Although this event is raised when an attempt has been made to directly access the robots.txt file (Snort, 2010b), it can also be raised by legitimate activity from web robots or spiders. A spider is software that gathers information for search engines by crawling around the web, indexing web pages and their links. The robots.txt file is created to exclude some pages from being indexed by web spiders (e.g. submission pages or enquiry pages). As web indexing is regular and structurally repetitive, this activity tends to cause a superfluous number of alerts. In this study,
Figure 2. Architecture of the false alarm classifier
approximately 23% of total alerts (approximately 750
alarms per day) were related to this activity, and they
were all false alarms.
In order to modify this rule to exclude normal web spider activity, the source IP addresses would need to be examined, in order to verify their authorisation to access the robots.txt file. This approach, however, is hardly feasible to deploy. Identifying all authorised hosts by their source IP addresses is impractical, and specifying such a large number of IP addresses would be a problem. Also, the mere fact of allowing specific hosts to access this file could be exploited in order to bypass detection.
As such, event thresholding was used instead (Beale and Caswell, 2004). As robots.txt access requests generate regular and repetitive traffic, a limit type threshold command is the most suitable tuning in this case. Such a threshold configuration would be as follows:
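The configuration listing itself is not legible in this copy. Based on the description that follows (log the first event, then ignore the rest of each 60-second interval), a stand-alone limit threshold for this signature (SID 1852, per the reference list) would look something like this sketch:

```
# Sketch of a stand-alone threshold for the robots.txt rule
# (gen_id 1, sig_id 1852): log only the first matching event
# per source address per 60 seconds.
threshold gen_id 1, sig_id 1852, type limit, track by_src, count 1, seconds 60
```

In later Snort releases the same behaviour is configured with the event_filter keyword.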
The rule logs the first event every 60 seconds, and ignores events for the rest of the time interval. The result showed that approximately 10% of the false alarms had been effectively reduced. This indicates that tuning can only eliminate an insignificant share of the false alarms from this event.
ICMP L3retriever Ping
ICMP L3retriever Ping is an event that occurs when an ICMP echo request is made from a host running the L3Retriever scanner (Snort, 2010c). Quite a few alerts
were generated from this event, contributing 8% of the total alerts. This figure indicates that approximately 250 alerts were generated by this rule every day. Surprisingly, no malevolent activity was detected following the ICMP traffic. In addition, normal ICMP requests generated by Windows 2000 and Windows XP are also known to have payloads similar to the one generated by the L3Retriever scanner (Greenwood, 2007). In view of this, and given that no suspicious output was detected following these ICMP requests, these alerts were labelled as false positives.
The only method that can be deployed to suppress the number of false positives triggered by this event is to apply an event suppression or thresholding command. Instead of using a limit type threshold command as for the previous signature, this rule used a both type command, to log alerts once per time interval and ignore additional alerts generated during that period:
alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP L3retriever Ping"; icode:0; itype:8; content:"ABCDEFGHIJKLMNOPQRSTUVWABCDEFGHI"; depth:32; reference:arachnids,311; classtype:attempted-recon; threshold: type both, track by_src, count 3, seconds 60; sid:466;)
The threshold is written to detect brisk ICMP echo requests by logging alerts once per 60 seconds after seeing 3 occurrences of this event. This experiment also showed that event thresholding can successfully reduce up to 89% of the false alarms generated by this activity.
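The difference between the limit and both threshold types can be sketched in a few lines of stdlib Python (a simplification of Snort's behaviour; real tracking is per-source and per-signature):

```python
# Toy model of Snort event thresholding for one tracked (signature, source).
# timestamps: seconds at which the rule matched.

def threshold(timestamps, ttype, count, seconds):
    logged, window_start, seen = [], None, 0
    for t in timestamps:
        if window_start is None or t - window_start >= seconds:
            window_start, seen = t, 0          # start a fresh interval
        seen += 1
        if ttype == "limit" and seen <= count:
            logged.append(t)                   # log the first `count` events
        elif ttype == "both" and seen == count:
            logged.append(t)                   # log once, after `count` events
    return logged

pings = [0, 1, 2, 3, 30, 61, 62, 63]
# limit: the first event of each 60 s interval is logged.
assert threshold(pings, "limit", 1, 60) == [0, 61]
# both: log once per interval, and only after 3 events have been seen.
assert threshold(pings, "both", 3, 60) == [2, 63]
```

The both type therefore stays silent for hosts that ping only once or twice, which is why it suits noisy-but-regular traffic like these scanner probes.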
Overall, fine tuning has been effective in reducing false alarms. However, several limitations exist:
1. The procedure increases the risk of missing noteworthy incidents - suppressing the number of alerts generated also creates the possibility of ignoring or missing real alerts. For example, a malicious user can hide his/her actions within the excessive number of alerts generated by using a spoofed address of a web spider agent. Looking for an overly specific pattern of a particular attack may effectively reduce the false alarms; however, it also greatly increases the risk of missing variations of that attack. A skilful attacker can easily alter and abuse the vulnerability in various ways in an attempt to evade the IDS.
2. Tuning requires a thorough examination of the environment by qualified IT personnel, and requires frequent updating to keep up with the flow of newly discovered vulnerabilities and threats.
False alarm rate by SMART
The following stage of the experiments involved analysing the Snort alerts with SMART. This experiment presents the results of the SMART classification, which is run every two hours, using only four hours of alerts from the private dataset. This is due to the increased memory requirements of running our unsupervised alarm reduction system over larger data. So, instead of running one correlation for the entire data set, SMART runs a correlation over a particular time period (e.g. every one or two hours). Figure 3 shows the maps of the correlations.
The classification reveals that about 78.8% of the false alarms were identified in the first map (left), whilst 96% of them were detected in the second map (right), as shown in Figure 3. Alarms located in the upper portion are labelled as true alarms, whilst the lower portion is for the false alarms. It is notable that our system has shown promising results in filtering out the hectic and unnecessary alerts triggered by the IDS, for example the alerts from the WEB-IIS view source via translate header and WEB-MISC robots.txt access signatures, which had caused 82% of the false alarms in the entire data set.
In addition to the private data set, the system has also been tested using a publicly available data set, DARPA 1999. The experiment also shows promising results,
Figure 3. SMART classification result using the private data set
References
1. BASE (2009), Basic Analysis and Security Engine (BASE) Project, http://base.secureideas.net/
2. Beale, J. and Caswell, B. (2004), Snort 2.1 Intrusion Detection, 2nd edn, Syngress, United States of America, ISBN 1931836043
3. Chapple, M. (2003), Evaluating and Tuning an Intrusion Detection System, Information Security Magazine, http://searchsecurity.techtarget.com/tip/1,289483,sid14_gci918619,00.html
4. Dubrawsky, I. (2009), CompTIA Security+ Certification Study Guide: Exam SY0-201, 3rd edn, Syngress, United States of America, ISBN 1597494267
5. Greenwood, B. (2007), Tuning an IDS/IPS From The Ground Up, SANS Institute InfoSec Reading Room, http://www.sans.org/reading_room/whitepapers/detection/tuning-idsips-ground_1896
6. Snort (2010a), WEB-IIS view source via translate header, http://www.snort.org/search/sid/1042?r=1
7. Snort (2010b), WEB-MISC robots.txt access, http://www.snort.org/search/sid/1852?r=1
8. Snort (2010c), ICMP L3Retriever Ping, http://www.snort.org/search/sid/466?r=1
9. WebDAV (2001), WebDAV Overview, Sambar Server Documentation, http://www.kadushisoft.com/syshelp/webdav.htm
reducing up to 95% and 99% of false alarms in the first and second classification respectively. The system appears effective in filtering the false alarms triggered by noisy traffic such as ICMP traffic (ICMP Ping and Echo Reply) and web-bug alerts, which formed the highest number of false alarms.
Overall, SMART has been effective in detecting false alarms, such as the redundant and noisy alerts raised by ICMP traffic. It was also shown that the system outperforms the traditional tuning method in filtering the WEB-MISC robots.txt access alerts. In other words, the subjectivity of rules that the common tuning method suffers from can be addressed using the proposed system.
GINA TJHAI
Gina holds a BSc in Computer Science from the University of Wollongong, Australia (2005), and an MSc in Information System Security from the University of Plymouth, UK (2006). She is currently a PhD candidate in the Centre for Security, Communications & Network Research at the University of Plymouth, UK. Her current research interests include network intrusion detection and prevention, pattern classification, neural networks and data mining.
MARIA PAPADAKI
Maria Papadaki is a lecturer in Network Security at the University of Plymouth, UK. Prior to joining academia, she worked as a Security Analyst for Symantec EMEA Managed Security Services (MSS), UK. Her postgraduate academic studies include a PhD in Intrusion Classification and Automated Response (2004), and an MSc in Integrated Services and Intelligent Networks Engineering (2000), University of Plymouth, UK. Her research interests include intrusion prevention, detection and response, network security monitoring, incident prioritisation, security usability, and security education. Dr Papadaki is a GIAC Certified Intrusion Analyst, and is a member of the GIAC Advisory Board, as well as the British Computer Society. Further details can be found at www.plymouth.ac.uk/cscan.
proposed system. The majority of the false alerts from WEB-MISC robots.txt access can be identified by SMART.
Summary
Fine-tuning an IDS to reduce false alarms is not a straightforward task. If not done properly, there is a possibility that the system might miss real attacks. SMART is an automated alarm reduction system which helps filter the false alarms generated by the Snort IDS. It has advantages over the conventional tuning method. Unlike tuning, the system does not require any prior knowledge of the network and protected systems. In addition, it provides a higher level of alert information to the administrator by aggregating alerts from the same attack instance, and validates the accuracy of the Snort IDS.
Acknowledgments
The authors would like to acknowledge the contribution of Prof. Steven Furnell and Dr Nathan Clarke to the work presented in this paper.