Thursday, October 15, 2015

Suricata with afpacket - the memory of it all


Suricata IDS/IPS/NSM is a highly scalable, modular and flexible platform. There are numerous configuration options available, offering a great deal of control.

This blog post aims to give you an overview of the settings that have an impact on memory consumption, and how the suricata.yaml config settings affect the memory usage of Suricata and the system it resides on.

One of the always relevant questions with regards to performance tuning and production deployment is: what is the total memory consumption of Suricata? Or, to be precise: what is the total memory that Suricata will consume/use, and how can that be calculated and configured more precisely?

The details of the answer are very relevant, since they will most certainly affect the deployment setup. Getting the configuration wrong can lead to RAM starvation, which in turn forces the use of swap, most likely making your particular setup suboptimal (to be frank: useless).

In this blog post we will walk through the relevant settings in a suricata.yaml configuration example and come up with an equation for the total memory consumption.

For this particular set up I use:
  • af-packet running mode with 16 threads configuration
  • runmode: workers
  • latest dev edition (git - 2.1dev (rev dcbbda5)) of Suricata at the time of this writing
  • IDS mode is used in this example
  • Debian Jessie/Ubuntu LTS (the OS should not matter)

Let's dive into it...

MTU size does matter

How so?

If you look into the setting for max-pending-packets in suricata.yaml

max-pending-packets: 1024

that will lead to the following output into suricata.log:

.....
 (tmqh-packetpool.c:398) <Info> (PacketPoolInit) -- preallocated 1024 packets. Total memory 3321856
.....
which comes to 3244 bytes per packet, per thread (pool).


The memory each packet takes is sizeof(struct Packet_) + DEFAULT_PACKET_SIZE, so ~1.7K plus ~1.5K, or about 3.2K.
Total memory used would be:

<number_of_threads> * (sizeof(struct Packet_) + DEFAULT_PACKET_SIZE) * <max-pending-packets>

With max-pending-packets raised to 65534, for example, that is 16 * 65534 * 3.2K ≈ 3.2GB (with the 1024 from the example above it is only about 50MB).

NOTE: That much RAM will be reserved right away
NOTE: The number of threads does matter as well :)
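The packet pool math above can be sketched in a few lines of Python. The 1728-byte sizeof(struct Packet_) figure and the 1514-byte DEFAULT_PACKET_SIZE are the values observed for this particular build/setup, so treat them as assumptions:

```python
PACKET_STRUCT_SIZE = 1728   # approx. sizeof(struct Packet_) in this build
DEFAULT_PACKET_SIZE = 1514  # classical Ethernet frame (1500 MTU + header)

def packet_pool_bytes(threads, max_pending_packets,
                      packet_size=DEFAULT_PACKET_SIZE):
    """Memory preallocated for the per-thread packet pools at start-up."""
    return threads * max_pending_packets * (PACKET_STRUCT_SIZE + packet_size)

# max-pending-packets: 1024 with 16 threads -> roughly 50MB
print(packet_pool_bytes(16, 1024))
# max-pending-packets: 65534 with 16 threads -> roughly 3.2GB
print(packet_pool_bytes(16, 65534))
```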

So why is the NIC MTU important?

The MTU setting on the NIC (IDS) interface is used by af-packet as a default packet size (aka DEFAULT_PACKET_SIZE) if no explicit default packet size is specified in the suricata.yaml:
# Preallocated size for packet. Default is 1514 which is the classical
# size for pcap on ethernet. You should adjust this value to the highest
# packet size (MTU + hardware header) on your system.
#default-packet-size: 1514
Note that above "default-packet-size" is commented out/unset. In that case af-packet will derive the default packet size from the MTU set on the NIC, which for this particular NIC works out to 1514 (the 1500 MTU shown by "ifconfig" plus the Ethernet header).

So when you would like to play "big" and enable 9KB jumbo frames as MTU on your NIC without actually needing them... you may end up with an unwanted side effect, to say the least :)


Defrag memory settings and consumption

defrag:
  memcap: 512mb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60

The setting above from the defrag section of the suricata.yaml will result in the following if you check your suricata.log:

(defrag-hash.c:220) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
(defrag-hash.c:245) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 168
(defrag-hash.c:252) <Info> (DefragInitConfig) -- defrag memory usage: 14679896 bytes, maximum: 536870912

Here we have (in bytes):
(defrag hash-size * 56) + (prealloc defrag trackers * 168). In this case that is a total of
(65536 * 56) + (65535 * 168) = 14679896 bytes (≈14MB),
which is the "defrag memory usage: 14679896 bytes" from the output above.

That much memory is immediately allocated/reserved.
The maximum memory usage allowed to be used by defrag will be 512MB.

NOTE: The preallocation settings you configure for defrag must total less than the maximum allowed (defrag.memcap).
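As a quick sanity check, the defrag numbers above can be reproduced like this (the 56-byte bucket and 168-byte tracker sizes are simply the ones reported in the log for this build):

```python
def defrag_startup_bytes(hash_size, trackers,
                         bucket_size=56, tracker_size=168):
    """Memory defrag allocates/reserves immediately at start-up."""
    return hash_size * bucket_size + trackers * tracker_size

# matches the "defrag memory usage: 14679896 bytes" log line above
print(defrag_startup_bytes(65536, 65535))
```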


Host memory settings and consumption

host:
  hash-size: 4096
  prealloc: 10000
  memcap: 16777216

The setting above (host memory settings affect IP reputation usage) from the host section of the suricata.yaml will result in the following if you check your suricata.log:

(host.c:212) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
(host.c:235) <Info> (HostInitConfig) -- preallocated 10000 hosts of size 136
(host.c:237) <Info> (HostInitConfig) -- host memory usage: 1622144 bytes, maximum: 1622144

Pretty simple (in bytes):
(hash-size * 64) + (prealloc hosts * 136) =
(4096 * 64) + (10000 * 136) = 1622144 bytes (≈1.54MB) allocated/reserved right away at start.
The maximum memory allowed is 16MB (16777216 bytes).
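The host table (and, as we will see next, the ippair table) follows the same pattern, so one small helper covers both. The 64-byte bucket and 136-byte entry sizes come from the log output of this build:

```python
def table_startup_bytes(hash_size, prealloc,
                        bucket_size=64, entry_size=136):
    """Start-up memory for the host/ippair style hash tables."""
    return hash_size * bucket_size + prealloc * entry_size

print(table_startup_bytes(4096, 10000))  # host: 1622144, as in the log
print(table_startup_bytes(4096, 1000))   # ippair: 398144
```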


Ippair memory settings and consumption


ippair:
  hash-size: 4096
  prealloc: 1000
  memcap: 16777216

The setting above (ippair memory settings affect xbits usage) from the ippair section of the suricata.yaml will result in the following if you check your suricata.log:

(ippair.c:207) <Info> (IPPairInitConfig) -- allocated 262144 bytes of memory for the ippair hash... 4096 buckets of size 64
(ippair.c:230) <Info> (IPPairInitConfig) -- preallocated 1000 ippairs of size 136
(ippair.c:232) <Info> (IPPairInitConfig) -- ippair memory usage: 398144 bytes, maximum: 16777216

Pretty simple as well (in bytes):
(hash-size * 64) + (prealloc ippairs * 136) =
(4096 * 64) + (1000 * 136) = 398144 bytes (≈0.38MB) allocated/reserved immediately upon start.
The maximum memory allowed is 16MB (16777216 bytes).

Flow memory settings and consumption


flow:
  memcap: 1gb
  hash-size: 1048576
  prealloc: 1048576
  emergency-recovery: 30
  #managers: 1 # default to one flow manager
  #recyclers: 1 # default to one flow recycler thread

The setting above from the flow config section of the suricata.yaml will result in the following in your suricata.log:

[393] 7/6/2015 -- 15:37:55 - (flow.c:441) <Info> (FlowInitConfig) --
allocated 67108864 bytes of memory for the flow hash... 1048576
buckets of size 64
[393] 7/6/2015 -- 15:37:55 - (flow.c:465) <Info> (FlowInitConfig) --
preallocated 1048576 flows of size 280
[393] 7/6/2015 -- 15:37:55 - (flow.c:467) <Info> (FlowInitConfig) --
flow memory usage: 369098752 bytes, maximum: 1073741824

Here we have (in bytes) -
(flow hash * 64) + (prealloc flows * 280) which in this case would be
(1048576 * 64) + (1048576 * 280) = 344MB
The above is what is going to be immediately used/reserved at start up.
The max allowed usage will be 1024MB
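Sketching the same flow math (64-byte buckets and 280-byte flows, per the log for this build; the formula covers just the hash plus the preallocated flows, so the log's slightly higher figure includes some additional internal allocation):

```python
def flow_startup_bytes(hash_size, prealloc,
                       bucket_size=64, flow_size=280):
    """Memory the flow engine reserves at start-up per the formula above."""
    return hash_size * bucket_size + prealloc * flow_size

print(flow_startup_bytes(1048576, 1048576))  # 360710144 bytes, ~344MB
```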

A piece of advice, if I may: don't add zeros here unless you need to. By "need to" I mean you actually see the flow emergency mode counters increasing in your stats.log.

Prealloc-sessions settings and consumption

stream:
  memcap: 32mb
  checksum-validation: no      # reject wrong csums
  prealloc-sessions: 20000
  inline: auto     

The setting above from the prealloc sessions config section of the suricata.yaml will result in the following in your suricata.log:

(stream-tcp.c:377) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 20000 (per thread)

This translates into bytes as follows (TcpSession structure is 192 bytes, PoolBucket is 24 bytes):
(192 + 24) * prealloc_sessions * number of threads = memory use in bytes
In our case we have: (192 + 24) * 20000 * 16 = 65.91MB. This amount will be immediately allocated upon start up.
NOTE: The number of threads does matter as well :)
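The same per-thread session math, sketched (the 192-byte TcpSession and 24-byte PoolBucket sizes are the ones for this build):

```python
TCP_SESSION_SIZE = 192  # sizeof(TcpSession) in this build
POOL_BUCKET_SIZE = 24   # sizeof(PoolBucket)

def prealloc_sessions_bytes(sessions_per_thread, threads):
    """Memory allocated at start-up for preallocated TCP sessions."""
    return (TCP_SESSION_SIZE + POOL_BUCKET_SIZE) * sessions_per_thread * threads

print(prealloc_sessions_bytes(20000, 16))  # 69120000 bytes, ~65.9MB
```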

af-packet ring size memory settings and consumption


    use-mmap: yes
    # Ring size will be computed with respect to max_pending_packets and number
    # of threads. You can set manually the ring size in number of packets by setting
    # the following value. If you are using flow cluster-type and have really network
    # intensive single-flow you could want to set the ring-size independently of the number
    # of threads:
    ring-size: 2048

The ring-size setting actually controls the size of the buffer for each ring (per thread) for af-packet. The setting above from the af-packet config section of the suricata.yaml will result in the following in your suricata.log:

[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1365) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1584 frame_nr=2060
[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1573) <Info> (AFPCreateSocket) -- Using interface 'eth0' via socket 7
[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1157) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth01 using socket 7
[7637] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1365) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1584 frame_nr=2060
[7637] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1573) <Info> (AFPCreateSocket) -- Using interface 'eth0' via socket 8


In general, that would mean:
<number of threads> * <ring-size> * (sizeof(struct Packet_) + DEFAULT_PACKET_SIZE)
or in our case: 16 * 2048 * 3242 ≈ 101MB
This is memory allocated/reserved immediately.
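The ring buffer math above, as a sketch (again using 1728 bytes for sizeof(struct Packet_) and a 1514-byte default packet size as assumptions for this setup):

```python
def afpacket_ring_bytes(threads, ring_size,
                        packet_struct_size=1728, packet_size=1514):
    """Memory reserved for the per-thread af-packet rings."""
    return threads * ring_size * (packet_struct_size + packet_size)

print(afpacket_ring_bytes(16, 2048))  # ~101MB for this setup
```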

Above I say "in general". You might wonder where this comes from:
(source-af-packet.c:1365) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1584 frame_nr=2060

Why 2060 frames when we specified 2048? Why block_size/frame_size, and what is their relation? A full, detailed description can be found here - https://www.kernel.org/doc/Documentation/networking/packet_mmap.txt (thanks regit)


Stream and reassembly memory settings and consumption


stream:
 memcap: 14gb
 reassembly:
   memcap: 20gb


The setting above from the stream and reassembly config section of the
 suricata.yaml will result in the following in your suricata.log:
......
(stream-tcp.c:393) <Info> (StreamTcpInitConfig) -- stream "memcap": 15032385536
......
(stream-tcp.c:475) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 21474836480
......

The above is very straightforward. The stream and reassembly memcaps
total 14GB + 20GB = 34GB.
This is the maximum memory allowed; it will not be allocated immediately.

Further below in the config section we have:

     #raw: yes
     #chunk-prealloc: 250

Q: What does raw mean and what is chunk-prealloc?
A: The 'raw' stream inspection (content keywords w/o http_uri etc) uses
'chunks'. This is again a preallocated memory block that lives in a pool.

Q: So what is the size of "chunks " ?
A: 4kb/4096bytes

So in this case above we have:
250 * 4096 = 0.97MB
This is deducted/taken from the memory allowed by the stream.reassembly.memcap value.

We also have preallocated segments (sizes in bytes):
    #randomize-chunk-range: 10
    #raw: yes
    #chunk-prealloc: 250
    #segments:
    #  - size: 4
    #    prealloc: 256
    #  - size: 16
    #    prealloc: 512
    #  - size: 112
    #    prealloc: 512
    #  - size: 248
    #    prealloc: 512
    #  - size: 512
    #    prealloc: 512
    #  - size: 768
    #    prealloc: 1024
    #  - size: 1448
    #    prealloc: 1024
    #  - size: 65535
    #    prealloc: 128
    #zero-copy-size: 128

More detailed info about the above you can find from my other blog post here - http://pevma.blogspot.se/2014/06/suricata-idsips-tcp-segment-pool-size.html

NOTE: Do not forget that these settings (segment preallocation) are also deducted/taken from the memory allowed by the stream.reassembly.memcap value.
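To get a feel for how much of stream.reassembly.memcap these defaults would claim, here is a quick sketch using the commented-out chunk and segment values above (the per-pool figures here count the segment data sizes only; any per-segment struct overhead is not included):

```python
# (size, prealloc) pairs from the commented-out segments section above
SEGMENT_POOLS = [(4, 256), (16, 512), (112, 512), (248, 512),
                 (512, 512), (768, 1024), (1448, 1024), (65535, 128)]

chunk_bytes = 250 * 4096  # chunk-prealloc * 4KB chunk size
segment_bytes = sum(size * prealloc for size, prealloc in SEGMENT_POOLS)

# both amounts are counted against stream.reassembly.memcap
print(chunk_bytes, segment_bytes)
```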

App layer memory settings and consumption


app-layer:
 protocols:
   dns:
     # memcaps. Globally and per flow/state.
     global-memcap: 2gb
     state-memcap: 512kb
...
....
   http:
     enabled: yes
     memcap: 2gb


Here we have app-layer dns + http, or in this case: 2GB + 2GB = 4GB.

Other settings that affect the memory consumption


...
detect-engine:
  - profile: medium
  - custom-values:
...

Some more information:
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/High_Performance_Configuration

The more rules you load, the heavier the effect of a switch in this setting will be. For example, a switch from profile: medium to profile: high would be most evident if you try it with >10000 rules.

mpm-algo: ac

The pattern matcher algorithm is of importance, of course. ac and ac-bs are the most performant, with ac-bs being less memory intensive but also less performant.


Grand total generic memory consumption equation

So if we sum up all the config options that affect the total memory consumption of Suricata, with the setup referred to here in mind (af-packet with 16 threads), we have (in bytes, or mb/gb depending on how you have your yaml memcap settings):

<number_of_total_detection_threads> * (1728 + <default_packet_size>) * <max-pending-packets>
+
<defrag.memcap>
+
<host.memcap>
+
<ippair.memcap>
+
<flow.memcap>
+
<number_of_threads> * 216 * <prealloc-sessions>
+
[per af-packet interface enabled] <af-packet_number_of_threads> * <ring-size> * (1728 + <default_packet_size>)
+
<stream.memcap> + <stream.reassembly.memcap>
+
<app-layer.protocols.dns.global-memcap>
+
<app-layer.protocols.http.memcap>
=
Total memory that is configured and should be available to be used by Suricata
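Putting it all together, the whole equation can be expressed as a small helper. This is a sketch only: the 1728 and 216 byte struct sizes are the ones observed for this build, and the dictionary key names below are made up to mirror the yaml options:

```python
GB = 1024 ** 3

def suricata_total_bytes(cfg):
    """Grand total per the equation above (all values in bytes)."""
    pkt = 1728 + cfg["default_packet_size"]
    total = cfg["threads"] * pkt * cfg["max_pending_packets"]
    total += cfg["defrag_memcap"] + cfg["host_memcap"] + cfg["ippair_memcap"]
    total += cfg["flow_memcap"]
    total += cfg["threads"] * 216 * cfg["prealloc_sessions"]
    total += cfg["afpacket_threads"] * cfg["ring_size"] * pkt
    total += cfg["stream_memcap"] + cfg["reassembly_memcap"]
    total += cfg["dns_global_memcap"] + cfg["http_memcap"]
    return total

# values from the walk-through above
example = {
    "threads": 16, "default_packet_size": 1514, "max_pending_packets": 1024,
    "defrag_memcap": 512 * 1024**2, "host_memcap": 16777216,
    "ippair_memcap": 16777216, "flow_memcap": 1 * GB,
    "prealloc_sessions": 20000, "afpacket_threads": 16, "ring_size": 2048,
    "stream_memcap": 14 * GB, "reassembly_memcap": 20 * GB,
    "dns_global_memcap": 2 * GB, "http_memcap": 2 * GB,
}

print(round(suricata_total_bytes(example) / GB, 2))  # total in GB
```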


Thank you




Friday, August 28, 2015

Failed to open ethX: pfring_open error


This is a blogpost about getting around the following error when using Suricata with pfring:

(source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.
(tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680
pfring_set_channel_id() failed: -1

However, in my case eth2 existed, was up and running, and the pf_ring module was loaded. What happened is described in a bit more detail below:

I experienced this after a git pull update/upgrade of Suricata (latest at the moment of this writing) and after I recompiled pfring (using the latest pfring from git - https://github.com/ntop/PF_RING.git).

My set up (linux Debian/Ubuntu like systems):

root@suricata:/var/data/log/suricata# ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:e0:ed:19:e3:e0
          inet6 addr: fe80::2e0:edff:fe19:e3e0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2962266192 errors:0 dropped:5527381 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2867936692537 (2.8 TB)  TX bytes:3345 (3.3 KB)
The pfring set up I had was configured like this below:
root@suricata:/var/data/log/suricata# modprobe pf_ring transparent_mode=0 min_num_slots=65534

A regular check reveals nothing abnormal:
root@suricata:/var/data/log/suricata# modinfo pf_ring && cat /proc/net/pf_ring/info
filename:       /lib/modules/3.14.0-031400-generic/kernel/net/pf_ring/pf_ring.ko
alias:          net-pf-27
description:    Packet capture acceleration and analysis
author:         ntop.org
license:        GPL
srcversion:     E344EB01757B55E97A93D0C
depends:     
vermagic:       3.14.0-031400-generic SMP mod_unload modversions
parm:           min_num_slots:Min number of ring slots (uint)
parm:           perfect_rules_hash_size:Perfect rules hash size (uint)
parm:           transparent_mode:(deprecated) (uint)
parm:           enable_debug:Set to 1 to enable PF_RING debug tracing into the syslog (uint)
parm:           enable_tx_capture:Set to 1 to capture outgoing packets (uint)
parm:           enable_frag_coherence:Set to 1 to handle fragments (flow coherence) in clusters (uint)
parm:           enable_ip_defrag:Set to 1 to enable IP defragmentation(only rx traffic is defragmentead) (uint)
parm:           quick_mode:Set to 1 to run at full speed but with upto one socket per interface (uint)
PF_RING Version          : 6.1.1 (dev:250a67fe1082121ac511a19ebc3fe1fc5f494bfe)
Total rings              : 0

Standard (non DNA/ZC) Options
Ring slots               : 65534
Slot version             : 16
Capture TX               : Yes [RX+TX]
IP Defragment            : No
Socket Mode              : Standard
Total plugins            : 0
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 0
Suricata and pfring have been installed as explained here - on the Suricata redmine wiki.
root@suricata:~# ldd /usr/local/bin/suricata
    linux-vdso.so.1 =>  (0x00007fff419fe000)
    libhtp-0.5.17.so.1 => /usr/local/lib/libhtp-0.5.17.so.1 (0x00007f32af5a1000)
    libGeoIP.so.1 => /usr/lib/x86_64-linux-gnu/libGeoIP.so.1 (0x00007f32af372000)
    libluajit-5.1.so.2 => /usr/local/lib/libluajit-5.1.so.2 (0x00007f32af103000)
    libmagic.so.1 => /usr/lib/x86_64-linux-gnu/libmagic.so.1 (0x00007f32aeee7000)
    libcap-ng.so.0 => /usr/local/lib/libcap-ng.so.0 (0x00007f32aece2000)
    libpfring.so => /usr/local/lib/libpfring.so (0x00007f32aeaa3000)
    libpcap.so.1 => /usr/local/pfring/lib/libpcap.so.1 (0x00007f32ae80e000)
    libnet.so.1 => /usr/lib/x86_64-linux-gnu/libnet.so.1 (0x00007f32ae5f5000)
    libjansson.so.4 => /usr/lib/x86_64-linux-gnu/libjansson.so.4 (0x00007f32ae3e8000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f32ae1ca000)
    libyaml-0.so.2 => /usr/lib/x86_64-linux-gnu/libyaml-0.so.2 (0x00007f32adfaa000)
    libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f32add6b000)
    libnss3.so => /usr/lib/x86_64-linux-gnu/libnss3.so (0x00007f32ada31000)
    libnspr4.so => /usr/lib/x86_64-linux-gnu/libnspr4.so (0x00007f32ad7f4000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f32ad42e000)
    libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f32ad215000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f32acf0f000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f32acd0a000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f32acaf4000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f32af7d4000)
    libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f32ac8e9000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f32ac6e0000)
    libnssutil3.so => /usr/lib/x86_64-linux-gnu/libnssutil3.so (0x00007f32ac4b5000)
    libplc4.so => /usr/lib/x86_64-linux-gnu/libplc4.so (0x00007f32ac2b0000)
    libplds4.so => /usr/lib/x86_64-linux-gnu/libplds4.so (0x00007f32ac0ab000)


Furthermore, my Suricata start line was like this:

suricata --pfring-int=eth2 --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow -c /etc/suricata/peter-yaml/suricata-pfring.yaml --pidfile /var/run/suricata.pid -v

Even though everything seemed fine, I could not start Suricata with pfring:

[31591] 5/8/2015 -- 17:10:31 - (tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680
pfring_set_channel_id() failed: -1
[31591] 5/8/2015 -- 17:10:31 - (source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.
[31592] 5/8/2015 -- 17:10:31 - (tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680
pfring_set_channel_id() failed: -1
[31592] 5/8/2015 -- 17:10:31 - (source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.
[31593] 5/8/2015 -- 17:10:32 - (tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680
pfring_set_channel_id() failed: -1
[31593] 5/8/2015 -- 17:10:32 - (source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.
[31594] 5/8/2015 -- 17:10:32 - (tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680
pfring_set_channel_id() failed: -1
[31594] 5/8/2015 -- 17:10:32 - (source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.
....

I was getting that error even though I reloaded the pf_ring module the way I usually do:
rmmod pf_ring
modprobe pf_ring transparent_mode=0 min_num_slots=65534

In short - this is the fix:

LD_LIBRARY_PATH=/usr/local/pfring/lib suricata --pfring-int=eth2  --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow  -c /etc/suricata/peter-yaml/suricata-pfring.yaml --pidfile /var/run/suricata.pid -v

Notice the use of:
LD_LIBRARY_PATH=/usr/local/pfring/lib suricata 

More information about LD_LIBRARY_PATH can be found in the ld.so(8) man page.

To get rid of LD_LIBRARY_PATH you can create a pfring.conf file in /etc/ld.so.conf.d/ containing:
/usr/local/pfring/lib
and run
sudo ldconfig




Friday, May 22, 2015

Suricata - wildcard rule loading


Recently (a few hours ago as of writing this blog) a new feature (thanks to gozzy) was introduced in Suricata IDS/IPS/NSM: wildcard rule loading capability.

As of the moment the feature is available in our git master. If you are wondering how to get that up and running or do not have the latest Suricata from git master - here is a quick tutorial (Debian/Ubuntu):

1)
apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev build-essential \
autoconf automake libtool libpcap-dev libnet1-dev libyaml-0-2 \
libyaml-dev zlib1g zlib1g-dev libmagic-dev libcap-ng-dev \
libjansson-dev pkg-config libnss3-dev libnspr4-dev git-core

2)
git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ &&  git clone https://github.com/ironbee/libhtp.git -b 0.5.x

3)
 ./autogen.sh && \
 ./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ \
 --enable-geoip --enable-unix-socket \
 --with-libnss-libraries=/usr/lib --with-libnss-includes=/usr/include/nss/ \
 --with-libnspr-libraries=/usr/lib --with-libnspr-includes=/usr/include/nspr \
 && make clean && make && make install-full && ldconfig

To confirm -
suricata --build-info

Now that you have the latest Suricata up and running, here is what this blog post is all about: wildcard rule loading for Suricata IDPS. Some possible scenarios of use are loading wildcarded rules from the:

Command line


Please note the "quotes" !
suricata -c /etc/suricata/suricata.yaml  -v -i eth0 -S "/etc/suricata/rules/*.rules"

Pretty self-explanatory. The command above will load all .rules files from /etc/suricata/rules/
suricata -c /etc/suricata/suricata.yaml  -v -i eth0 -S "/etc/suricata/rules/emerging*"
The command above will load all emerging* rules files from /etc/suricata/rules/

Config file


You can also set that up in the suricata.yaml config file. Here is how (please note the "quotes").

In your rules section in the suricata.yaml:

# Set the default rule path here to search for the files.
# if not set, it will look at the current working dir
default-rule-path: /etc/suricata/rules
rule-files:
 #- "*.rules"
 - "emerging*"
 #- botcc.rules
 #- ciarmy.rules
 #- compromised.rules
 #- drop.rules
 #- dshield.rules
 #- emerging-activex.rules
 #- emerging-attack_response.rules
The setup above will load all emerging* files and the rules residing in those. Then you can start Suricata any way you would like, for example:

suricata -c /etc/suricata/suricata.yaml  -v -i eth0
suricata -c /etc/suricata/suricata.yaml  -v --af-packet=eth0

 and in suricata.log you should see all emerging* rule files being loaded:

......
[13558] 22/5/2015 -- 17:19:39 - (reputation.c:620) <Info> (SRepInit) -- IP reputation disabled
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-activex.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-attack_response.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-chat.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-current_events.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-deleted.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:420) <Warning> (ProcessSigFiles) -- [ERRCODE: SC_ERR_NO_RULES(42)] - No rules loaded from /etc/suricata/rules/emerging-deleted.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-dns.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-dos.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-exploit.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-ftp.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-games.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-icmp.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:420) <Warning> (ProcessSigFiles) -- [ERRCODE: SC_ERR_NO_RULES(42)] - No rules loaded from /etc/suricata/rules/emerging-icmp.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-icmp_info.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-imap.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-inappropriate.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-info.rules
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-malware.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-misc.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-mobile_malware.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-netbios.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-p2p.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-policy.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-pop3.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-rpc.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-scada.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-scan.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-shellcode.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-smtp.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-snmp.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-sql.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-telnet.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-tftp.rules
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-trojan.rules
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-user_agents.rules
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-voip.rules
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-web_client.rules
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-web_server.rules
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-web_specific_apps.rules
[13558] 22/5/2015 -- 17:19:43 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-worm.rules
.......

You can also use it like this:

# Set the default rule path here to search for the files.
# if not set, it will look at the current working dir
default-rule-path: /etc/suricata/rules
rule-files:
 #- "*.rules"
 - "*web*"
 #- "emerging*"
 #- botcc.rules
 #- ciarmy.rules
 #- compromised.rules
 #- drop.rules


That's it.


Thursday, May 21, 2015

Suricata - multiple interface configuration with af-packet



Suricata is a very flexible and powerful multithreaded IDS/IPS/NSM.

Here is a simple tutorial (tested on Debian/Ubuntu) on how to configure multiple interfaces for af-packet mode with Suricata (af-packet mode works by default/out of the box on kernels 3.2 and above). Let's say you would like to start simple IDS monitoring with Suricata on eth1, eth2 and eth3 on a particular machine/server.


In your suricata.yaml config (usually located in /etc/suricata/) find the af-packet section and do the following:


af-packet:
  - interface: eth2
    threads: 16
    cluster-id: 98
    cluster-type: cluster_cpu
    defrag: no
    use-mmap: yes
    ring-size: 200000
    checksum-checks: kernel
  - interface: eth1
    threads: 2
    cluster-id: 97
    cluster-type: cluster_flow
    defrag: no
    use-mmap: yes
    ring-size: 30000
  - interface: eth3
    threads: 2
    cluster-id: 96
    cluster-type: cluster_flow
    defrag: no
    use-mmap: yes
    ring-size: 20000
Of course, feel free to adjust the ring sizes (packet buffers) as you see fit for your particular setup.
NOTE: do not forget to use a different cluster-id for each interface.

Now you can start Suricata like so:

suricata -c /etc/suricata/suricata.yaml -v --af-packet 

The above will start Suricata listening on eth2 with 16 threads using cluster-type cluster_cpu, and on eth1 and eth3 with 2 threads each using cluster-type cluster_flow. Have a look in your suricata.log file for more info.

If you would like to just test and see how it goes for eth2 only:
suricata -c /etc/suricata/suricata.yaml -v --af-packet=eth2

...easy and flexible.







Saturday, April 25, 2015

Suricata - check loaded yaml config settings with --dump-config



There is a very useful command available to Suricata IDS/IPS/NSM :
suricata --dump-config

The command above will dump all the config parameters and their respective values that are loaded by Suricata from the config file. You can run the command in any case - it does not matter if Suricata is running or not.

There is a peculiarity, however. Sometimes people think that the command above dumps the config values currently loaded by Suricata... in some cases it will, and in some cases it will not.

So what does it depend on?.... simple:
suricata --dump-config

will dump the config settings that are loaded (or will be loaded) by Suricata by default from
/etc/suricata/suricata.yaml

So if you are running Suricata with a config file called suricata-test.yaml (or suricata.yaml located in a different directory) - you will not see those settings...unless you specify that config file in particular:
suricata --dump-config -c /etc/suricata/suricata-test.yaml
Here is a real-world example.
I ran Suricata for a specific test where I had set the defrag memcap to 512mb:
defrag:
  memcap: 512mb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60

Suricata up and running:
root@LTS-64-1:~/Work # ps aux |grep suricata
root      8109  2.3  7.6 878444 308372 pts/6   Sl+  12:45   1:02 suricata -c /etc/suricata/suricata-test.yaml --af-packet=eth0 -v
root@LTS-64-1:~/Work #

And here is the peculiarity this blog post is trying to emphasize:
root@LTS-64-1:~/Work # suricata --dump-config  |grep defrag.memcap
defrag.memcap = 32mb
root@LTS-64-1:~/Work # suricata --dump-config -c /etc/suricata/suricata-test.yaml |grep defrag.memcap
defrag.memcap = 512mb
root@LTS-64-1:~/Work #



To recap: suricata --dump-config dumps the settings loaded (or to be loaded) from the default location /etc/suricata/suricata.yaml. If you are running Suricata with a yaml config with a different name, or in a different location than the default, then in order to see those settings you need to point it at that particular yaml, like so:

suricata --dump-config -c /etc/local/some_test_dir/suricata/suricata-test.yaml
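If you want to see at a glance which settings differ between two configs, you can diff two saved dumps. A minimal sketch - the helper name and file paths are examples, not anything Suricata provides:

```shell
#!/bin/sh
# Sketch: compare two "key = value" dumps (as produced by
# `suricata --dump-config > file`) and print only the differing lines.
dump_diff() {
    diff "$1" "$2" | grep -E '^[<>]'
}

# typical use (paths are examples, not run here):
#   suricata --dump-config > /tmp/default.dump
#   suricata --dump-config -c /etc/suricata/suricata-test.yaml > /tmp/test.dump
#   dump_diff /tmp/default.dump /tmp/test.dump
```

With the defrag example above, the output would show the 32mb line from the default dump against the 512mb line from the test dump.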


Thanks

related article:
http://pevma.blogspot.se/2014/02/suricata-override-config-parameters-on.html


Monday, April 6, 2015

Suricata IDPS - Application layer anomalies protocol detection




Suricata IDS/IPS/NSM also allows you to do application layer anomaly detection.
I started talking to inliniac about protocol anomaly detection rules one day in the Suricata IRC chat room, which evolved into a discussion that resulted in us updating the rule sets with some examples of how to do that.

Below are a few examples for rules usage:

HTTP

alert tcp any any -> any ![80,8080] (msg:"SURICATA HTTP not tcp port 80, 8080"; flow:to_server; app-layer-protocol:http; sid:2271001; rev:1;)
The rule above finds HTTP traffic that is not using destination port 80 or 8080.


alert tcp any any -> any 80 (msg:"SURICATA Port 80 but not HTTP"; flow:to_server; app-layer-protocol:!http; sid:2271002; rev:1;)
The rule above is the reverse of the previous one - it alerts if TCP traffic with destination port 80 is not HTTP.

Here is another example

TLS

alert tcp any any -> any 443 (msg:"SURICATA Port 443 but not TLS"; flow:to_server; app-layer-protocol:!tls; sid:2271003; rev:1;)

HTTPS

Detecting HTTP traffic over HTTPS port -

alert http any any -> any 443 (msg:"SURICATA HTTP clear text on port 443"; flow:to_server; app-layer-protocol:http; sid:2271019; rev:1;)

You can find the full ruleset (open source and free to use) with examples of application layer anomaly detection for HTTP, HTTPS, TLS, FTP, SMTP, SSH, IMAP, SMB, DCERPC, DNS and MODBUS here:

https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Protocol_Anomalies_Detection
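Once the anomaly rules are loaded, a quick way to see which of them fire most is to count their sids in fast.log. A sketch, assuming the default fast.log line format and the sid range 2271001-2271019 used in the examples above (the helper name is mine):

```shell
#!/bin/sh
# Sketch: count app-layer anomaly alerts (sids 22710xx, as in the
# examples above) per signature in a fast.log, most frequent first.
count_anomalies() {
    grep -oE '\[1:22710[0-9]{2}:[0-9]+\]' "$1" | sort | uniq -c | sort -rn
}

# typical use (default log path is an assumption):
#   count_anomalies /var/log/suricata/fast.log
```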







Thursday, February 19, 2015

Chasing MTUs


This post is about configuring the right MTU (maximum transmission unit) size when running Suricata IDS/IPS.

Sometimes you can end up in a situation as follows :


capture.kernel_packets | AFPacketeth12 | 1143428204
decoder.pkts           | AFPacketeth12 | 1143428143
decoder.invalid        | AFPacketeth12 | 416889536

A whole lot of decoder.invalid. Not good. What could be the reason for that? One thing you should check right away is the MTU of the traffic that is being mirrored.

What does it mean? Well there is the MTU that you set up on the server that you run Suricata on and there is the MTU that is present in the "mirrored" traffic.

What is the difference? Why should it matter?
It matters because if the MTU is not set correctly, you will end up with a lot of decoder.invalids (packets dropped by Suricata) and you will miss out on a lot of traffic inspection.
Example: if the sniffing interface that you run Suricata on has its MTU set to 1500, and the traffic that you mirror contains jumbo frames (MTU 9000) - most likely your decoder.invalids will show a whole lotta love in your stats.log.

How can you adjust the MTU on the interface (NIC)? Example:
First have a look at the current value:
ifconfig eth0
then adjust it:
ifconfig eth0 mtu 1514
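On newer distributions ifconfig may not be installed at all; the iproute2 equivalents work the same way. A sketch - the mtu_of helper and its parsing of typical `ip link show` output are mine, and eth0 is just an example:

```shell
#!/bin/sh
# Sketch: extract the MTU number from `ip link show` output read on stdin.
mtu_of() {
    grep -oE 'mtu [0-9]+' | head -1 | awk '{print $2}'
}

# typical use (changing the MTU needs root):
#   ip link show dev eth0 | mtu_of
#   ip link set dev eth0 mtu 1514
```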

By the way - what could be the max size of the MTU (and what sizes exist in general)?
(short answer - 9216)


That was the easy part :). There are situations where you do not know the MTU of the "mirrored" traffic. There are a few ways to find out - ask the network team, make a phone call or two, or manually test different settings on the NIC until you find a middle ground... However, you can also use the procedure shown below to get the byte size of the MTU:


On your Server/Sensor
1)
Stop Suricata.

2)
Change the MTU to 9216
(the interface that Suri is sniffing on)

example - ifconfig eth0 mtu 9216
(non boot persistent)

3)
Install tcpstat, if you do not have it:
apt-get install tcpstat

4)
Run the following (substitute the interface name with the one Suri is sniffing on):
tcpstat -i eth0 -l -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\tMaxPacketSize=%M\n"  5

5)
Give it a minute or two.
If there are jumbo frames you should see that in the output (something like
"MaxPacketSize=9000"); if not, you should see whatever the max size is.

6)
Adjust your interface MTU accordingly - the one that Suri is sniffing
on. -> Start Suri.

7)
Let it run for a while - say 1 hr. Have a look at the decoder.invalid stats in stats.log.

NOTE: Do NOT just set the MTU to 9216 directly ("just to be on the safe side"). Only set it that high if needed!!

NOTE: tcpstat can also be run without the "-l" option used in step 4 - look at man tcpstat for more info.
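Rather than eyeballing the tcpstat output, you can extract the largest MaxPacketSize seen with a small helper. A sketch, assuming tcpstat output produced with the -o format string shown above (the helper name is mine):

```shell
#!/bin/sh
# Sketch: print the largest MaxPacketSize value seen in tcpstat output
# (read on stdin, using the -o format string from the procedure above).
max_packet_size() {
    grep -oE 'MaxPacketSize=[0-9]+' | cut -d= -f2 | sort -n | tail -1
}

# typical use (stop tcpstat with Ctrl-C after a minute or two; the
# maximum prints once the pipe closes):
#   tcpstat -i eth0 -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\tMaxPacketSize=%M\n" 5 \
#     | max_packet_size
```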



(tested on Ubuntu/Debian)
That's all ....feedback welcome.