
Trisul Hub Configuration File

All configuration parameters for the Trisul Hub are stored in a single XML file called trisulHubConfig.xml.

Default Location

/usr/local/etc/trisul-hub/domain0/hub0/context0/trisulHubConfig.xml

For a context named data1, the path would be …hub0/context_data1/trisulHubConfig.xml

caution

Root privileges are needed to edit this file.

note

Also see trisulProbeConfig.xml for editing Trisul Probe parameters.

Sections

Click on a section to see the config parameters inside that section.

Section | What part of Trisul does it configure
App | The hub process-level params
Logging | Logging policy – file sizes and rotation
StatsEngine | Database cluster tuning
Server | TRP server parameters
Probes | List of probes allowed to connect and mapping to layers
IPDR | IP Flow Detail Record (IPDR) application parameters
DBTasks | Settings for various database maintenance tasks

App

note

Commonly modified parameters are Setuid, TrisulMode, LicenseFile

Parameters | Defaults | Description
User | trisul.trisul | Which user/group Trisul should run as after dropping root privileges.
TempFolder | /tmp | Temporary folder.
DBRoot | /usr/local/var/lib/trisul-hub/domain0/hub0/context0 | The base directory under which Trisul stores all its data.
TrafficDBRoot | /usr/local/var/lib/trisul/domain0/hub0/context0/meters | The directory under which Trisul stores traffic and flow statistics.
ConfigDB | /usr/local/var/lib/trisul/domain0/hub0/context0/config/TRISULCONFIG.SQDB | Location of the configuration database.
BinDirectory | /usr/local/bin | Where Trisul looks for executable binaries.
DataDirectory | /usr/local/share/trisul-hub | Data files.
LicenseFile | /usr/local/etc/trisul-hub/LicenseKey.txt | Location of the license file.
DebugMode | false | Debug mode is used when developing LUA probe scripts. If DebugMode is true, all streaming metrics from all probes are sunk to /dev/null; hence it is used for probe testing.
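
For orientation, here is a minimal sketch of what an App section could look like, assuming each parameter above maps to a like-named XML element under an App node. This is illustrative only, not a verbatim excerpt; check your installed trisulHubConfig.xml for the exact element names.

<App>
  <!-- sketch: element names assumed to match the parameter names above -->
  <User>trisul.trisul</User>
  <DBRoot>/usr/local/var/lib/trisul-hub/domain0/hub0/context0</DBRoot>
  <LicenseFile>/usr/local/etc/trisul-hub/LicenseKey.txt</LicenseFile>
  <DebugMode>false</DebugMode>
</App>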

Logging

The two components in a Hub node are flushers and query servers. This section configures their log files, which use the prefixes fs and qs respectively.

Parameters | Defaults | Description
Logdir | /usr/local/var/log/trisul | Where the log files are stored.
Logfile | ns-???.log | Log file pattern. The default produces ns-001.log, ns-002.log, etc.
LogRotateSize | 5000000 | Each log file is allowed to grow to this size before Trisul moves to the next file.
LogRotateCount | 5 | The number of files in the log ring.
FlusherLogFile | fs-???.log | Log file pattern.
FlusherLogLevel | DEBUG | All messages higher than this level are logged. The available log levels in order of severity (most severe first) are: EMERG, FATAL, ALERT, CRIT, ERROR, WARN (switch to this level after a few weeks of smooth running), NOTICE, INFO, DEBUG (recommended default level).
FlusherLogRotateSize | 5000000 | Maximum size of each log file.
FlusherLogRotateCount | 5 | Number of files in the ring.
TrpLogFile | qs-???.log | Log file pattern.
TrpLoglevel | DEBUG | Log level.
TrpLogRotateSize | 5000000 | Maximum size of each log file.
TrpLogRotateCount | 5 | Number of files in the ring.
IpdrdLogFile | is-??? | IPDR log file pattern. These parameters are for the IPDR query service.
IpdrdLoglevel | DEBUG | IPDR service logging level.
IpdrdLogRotateSize | 5000000 | Maximum size of each file in bytes.
IpdrdLogRotateCount | 5 | Number of log files.
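
A rough sketch of the Logging section built from the defaults above, assuming each parameter is a like-named child element of a Logging node; verify the exact names against your installed file.

<Logging>
  <!-- sketch: flusher logs use the fs- prefix, query server (TRP) logs use the qs- prefix -->
  <Logdir>/usr/local/var/log/trisul</Logdir>
  <FlusherLogFile>fs-???.log</FlusherLogFile>
  <FlusherLogLevel>DEBUG</FlusherLogLevel>
  <FlusherLogRotateSize>5000000</FlusherLogRotateSize>
  <FlusherLogRotateCount>5</FlusherLogRotateCount>
  <TrpLogFile>qs-???.log</TrpLogFile>
  <TrpLoglevel>DEBUG</TrpLoglevel>
</Logging>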

StatsEngine

Controls the database storage and retention policy for Trisul.

Parameters | Defaults | Description
FTSFlushBudget | 5 | Trisul FTS (Full Text Resources) need to complete the flush operation within this many seconds. Since Trisul is a real-time system, there is a total of about 60 seconds for the entire snapshot window to flush.
JournalMode | WAL | Journal mode for the SQLITE3 leaf nodes in which Trisul Resources are stored.
OfflineAnalysisQueueSize | 2000000 | When importing PCAPs or other offline formats, this parameter controls the high-water mark of items on the Hub's queue. This helps control memory usage on the Hub node.
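
As a hedged sketch, the parameters above would sit directly under the StatsEngine node (element names assumed to match the table; placement of the sub-sections documented below is assumed from the section ordering):

<StatsEngine>
  <!-- sketch: defaults from the table above -->
  <FTSFlushBudget>5</FTSFlushBudget>
  <JournalMode>WAL</JournalMode>
  <OfflineAnalysisQueueSize>2000000</OfflineAnalysisQueueSize>
  <!-- SlicePolicy and the other sub-sections documented below would also appear in this area -->
</StatsEngine>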

SlicePolicy

Controls data location and retention policy.

Parameters | Defaults | Description
SliceWindow | DAILY | How much data is contained in a single slice. The available choices are HOURLY (from minute 00 to minute 59 every hour) and DAILY (from 12:00 AM to 11:59 PM every day).

Operational

Parameters | Defaults | Description
SliceCount | 32 | 32 slices are kept in the operational area. Combined with the default SliceWindow of DAILY, this means 32 days' worth of data in the operational area. Slices older than 32 days slide over to the reference area.
UsageRedMark |  | Generate an alert when the disk usage percent exceeds this value, for admin purposes. Leave blank or zero to disable disk usage alerting. Disabled by default.

Reference

Parameters | Defaults | Description
SliceCount | 32 | Controls how many slices are kept in the reference area. If you set this to 0, slices move straight from operational to archive.
UsageRedMark |  | Generate an alert when the disk usage percent exceeds this value, for admin purposes. Leave blank or zero to disable disk usage alerting.

Archive

Long-term storage, mostly for compliance purposes.

Parameters | Defaults | Description
SliceCount | 32 | Controls how many slices are kept in the archive area. If you set this to 0, slices move directly to /dev/null (i.e. are deleted).
UsageRedMark | 95 | Generate an alert when the disk usage percent exceeds this value, for admin purposes. Leave blank or zero to disable disk usage alerting.
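
Putting the three areas together, a hedged sketch of the SlicePolicy node, assuming Operational, Reference, and Archive are child nodes each carrying their own SliceCount and UsageRedMark (nesting assumed from the section ordering above):

<SlicePolicy>
  <!-- sketch: an empty UsageRedMark means disk usage alerting is disabled -->
  <SliceWindow>DAILY</SliceWindow>
  <Operational>
    <SliceCount>32</SliceCount>
    <UsageRedMark></UsageRedMark>
  </Operational>
  <Reference>
    <SliceCount>32</SliceCount>
    <UsageRedMark></UsageRedMark>
  </Reference>
  <Archive>
    <SliceCount>32</SliceCount>
    <UsageRedMark>95</UsageRedMark>
  </Archive>
</SlicePolicy>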

Extra archives

An optional feature for advanced users allows extra archives, for example mounted on slower storage. These are disabled by default. Change the name of the node from ExtraArchives_Disabled to ExtraArchives to activate this feature.

Parameters | Defaults | Description
ID | 1 | This ID is used to access the archive mount point. An ID of 1 maps to mount point xarchive_1.
SliceCount | 32 | Number of days of data in this extra archive.
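
To illustrate activation, a sketch under the assumption that each extra archive is listed as a child entry of the renamed node; the inner element name Archive is a guess, as only the rename from ExtraArchives_Disabled to ExtraArchives is documented above.

<ExtraArchives>
  <!-- sketch: rename ExtraArchives_Disabled to ExtraArchives to enable; inner element name assumed -->
  <Archive>
    <ID>1</ID>            <!-- maps to mount point xarchive_1 -->
    <SliceCount>32</SliceCount>
  </Archive>
</ExtraArchives>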

Flushers

This section controls how many backend flushers are used. The default number of flushers used by Trisul is TWO. This is an advanced tuning parameter. You can increase the number of flushers up to eight for large to very large deployments of Trisul.

Parameters | Defaults | Description
ServerImage |  | Path to trisul_flushd.
PIDFile |  | Where the PID of the running trisul_flushd process is stored.
AutoStart | true | Automatically start the flushd process.
ControlChannel |  | Internal IPC channel.
Flushers |  | For each flusher instance, specify the connection and DB instance number, sequentially from 0..8 (MAX).
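
A very rough sketch of this section; the top-level parameters are assumed to be child elements, and the exact layout of the per-flusher entries is not spelled out in this document, so it is only indicated with a comment.

<Flushers>
  <!-- sketch: element names assumed from the table above -->
  <AutoStart>true</AutoStart>
  <!-- ServerImage, PIDFile and ControlChannel would also be set here -->
  <!-- one entry per flusher instance (connection + DB instance number, 0..8 max) goes here;
       see your installed trisulHubConfig.xml for the exact element names -->
</Flushers>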

Server

Controls the TRP server process used for database querying functionality. The process that provides the query API is called trisul_trpd.

Parameters | Defaults | Description
ZmqConnection | ipc:///usr/local/var/lib/trisul-hub/domain0/hub0/context0/run/trp_0 | The port running the TRP protocol where you can connect and query the Trisul database. By default this is an IPC socket. You can change this parameter to allow a remote TCP connection. Example: to allow queries using TCP port 12004, (1) change this parameter to tcp://10.0.0.23:12004, where 10.0.0.23 is the IP address of the Hub node, then (2) restart the context with trisulctl_hub restart context default@hub0. See the sketch below the table.
PIDFile |  | Where the PID of the running trisul_trpd process is stored.
NumServers | 6 | Number of backend servers to start.
ParallelQueries | false | Whether parallel queries must be turned on for all queries. The default is false; use this only when you have the database stored on different spindles.
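
For example, the TCP change described in the ZmqConnection row would look roughly like this, assuming ZmqConnection and NumServers are child elements of the Server node (10.0.0.23 is the example Hub IP from the table, not a real default):

<Server>
  <!-- sketch: the default is the IPC socket shown in the table -->
  <!-- <ZmqConnection>ipc:///usr/local/var/lib/trisul-hub/domain0/hub0/context0/run/trp_0</ZmqConnection> -->
  <ZmqConnection>tcp://10.0.0.23:12004</ZmqConnection>
  <NumServers>6</NumServers>
</Server>

After editing, restart the context with trisulctl_hub restart context default@hub0 for the change to take effect.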

Probes

Add probes that are allowed to connect to this context.

Each probe is a line with the following details.

Parameters | Defaults | Description
Layer |  | Layer number allocated to the probe.
ProbeID |  | Probe ID, e.g. probe0. The probe must have been authenticated earlier by a CURVE certificate for the domain this hub belongs to. See trisulctl_hub install probe.
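
As a hedged sketch, a single probe entry might look like the following, assuming each probe is represented by its own element carrying the Layer and ProbeID fields (element names are illustrative):

<Probes>
  <!-- sketch: probe0 must already have been installed with a CURVE certificate for this domain -->
  <Probe>
    <Layer>0</Layer>
    <ProbeID>probe0</ProbeID>
  </Probe>
</Probes>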

DBTasks

Control the various database maintenance tasks. These tasks are scheduled internally by Trisul at fixed intervals.

Archiver

Archiver is responsible for sliding old data.

Parameters | Defaults | Description
Enable | TRUE | Archiving is enabled.

SummSlice

Slice data is summarized so that reporting on total entities is fast.

Parameters | Defaults | Description
Enable | TRUE | Fine-grained daily summary calculation of per-group disk storage.

CacheBuild

Database optimizer task to pack frequently used keys to speed up long range time series operations.

Parameters | Defaults | Description
Enable | TRUE | The cache build task is enabled.
TopKeyCount | 25 | The top 25 keys in each metric can be selected for faster retrieval.
InKeyCount | 100 | In addition to the toppers, this many keys can be selected for caching.
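
A minimal sketch of the CacheBuild task using the defaults above, with element names assumed to match the parameter names (in the same style as the ResolveIP block below):

<CacheBuild>
  <Enable>True</Enable>
  <TopKeyCount>25</TopKeyCount>
  <InKeyCount>100</InKeyCount>
</CacheBuild>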

ResolveIP

This section controls the automatic IP address resolver.

How IP Address resolution works

  • Runs automatically at fixed intervals, typically every 15 minutes or so.
  • In Packet Capture mode, IP address to hostname mappings are harvested from DNS packets automatically.
  • In Netflow mode, the most important IP addresses that appear in "topper lists" are resolved using DNS lookup.
<ResolveIP>
  <Enable> True </Enable>
  <Debug> True </Debug>
  <Candidates>
    <Internal>100</Internal>
    <External>25</External>
  </Candidates>
  <AlwaysRefreshInternal>false</AlwaysRefreshInternal>
  <AlwaysRefreshExternal>false</AlwaysRefreshExternal>
</ResolveIP>
Parameters | Defaults | Description
Enable | TRUE | The most important / visible IPs are resolved using DNS lookup.
Debug | TRUE | Prints resolved IPs for debugging purposes in the t_resolveip.log file.
Candidates |  | Number of Top-K items per meter for Internal IPs vs External IPs. Internal IPs are those which fall into your Home Network.
AlwaysRefreshExternal | false | Do a full refresh of External IPs. Normally the resolver does not keep retrying IPs that fail to resolve, or IPs which have already been recently resolved.
AlwaysRefreshInternal | false | Do a full refresh of Internal IPs. Use this option if you have an enterprise with dynamically changing IP → user name mappings.

CleanPersist

The persist storage area collects key-related information, such as IP to hostname mappings. Over a long period of time this can grow to huge proportions. The CleanPersist process prunes this storage area by randomly deleting 2% of keys each run.

Parameters | Defaults | Description
Enable | TRUE | The CleanPersist task is enabled.

CatTrf

A database packer algorithm to speed up database reads and to defragment files.

Parameters | Defaults | Description
Enable | TRUE | The CatTrf task is enabled.

Rebucketizer

When the Rebucketizer is enabled, data is repartitioned into resolutions of optimal size to optimize data distribution across a large number of data points. Upon repartitioning, the average of the repartitioned data is taken for each data point. These evenly sized buckets improve analysis performance and reduce data skew.

<Rebucketizer>
  <Enable> True </Enable>
  <Resolutions>
    <Resolution>
      <ID>1</ID>
      <BucketSize>300</BucketSize>
      <TopperBucketSize>900</TopperBucketSize>
      <ThresholdDays>1</ThresholdDays>
    </Resolution>
    <Resolution>
      <ID>2</ID>
      <BucketSize>1800</BucketSize>
      <TopperBucketSize>900</TopperBucketSize>
      <ThresholdDays>7</ThresholdDays>
    </Resolution>
    <Resolution>
      <ID>3</ID>
      <BucketSize>7200</BucketSize>
      <TopperBucketSize>900</TopperBucketSize>
      <ThresholdDays>28</ThresholdDays>
    </Resolution>
    <Resolution>
      <ID>4</ID>
      <BucketSize>86400</BucketSize>
      <TopperBucketSize>900</TopperBucketSize>
      <ThresholdDays>360</ThresholdDays>
    </Resolution>
  </Resolutions>
</Rebucketizer>
Parameters | Defaults | Description
Enable | TRUE | Rebucketizer is enabled.
ID | 1 | Unique identifier for each configuration or bucket.
BucketSize | 300 | The size of the bucket in seconds.
TopperBucketSize | 900 | The size of the topper bucket in seconds.
ThresholdDays | 1 | The threshold (in days) for moving data between buckets.

So by default, for ID=1, data up to 1 day old is partitioned into 5-minute (300-second) buckets and its topper data into 15-minute (900-second) buckets, and so on for the other resolutions.

IPDR

These parameters are for the IPDR Service. IPDR is the IP Detailed Record logging service. This is a mode of storing a very large number of raw flows for compliance and query purposes.

Set automatically

These parameters are typically set automatically when you put Trisul in the IPDR mode.

Parameters | Defaults | Description
OutputDirectory | CONTEXTROOT/run | Directory where the IPDR record query result is dumped.
ControlDB | CONTEXTROOT/config/IPDRCONTROL.SQDB | The control database location.
ReportFormat | full | The format of the IPDR records. Available values: full (the full record in columnar report format), fullcsv (full report in CSV format), trai (format for TRAI).
AddCustomerInfo | true | Add the information from the IPDR Static IP customer mapping.
AAADumpFilePath | CONTEXTROOT/run/aaadumpfiles | The place where the RADIUS AAA server dumps the currently active sessions.
SubscriberOption |  | Add Subscriber ID or other ISP-specific tag; this is taken from the RADIUS AAA log files.
MaxRecords | 250,000 | When using the Request Full Database Dump, this parameter controls the maximum number of records dumped.
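
A hedged sketch of an IPDR section built from the defaults above; CONTEXTROOT stands for the context directory exactly as in the table, and the element names are assumed to match the parameter names. These values are normally written automatically when Trisul is put in IPDR mode.

<IPDR>
  <!-- sketch only; normally set automatically in IPDR mode -->
  <OutputDirectory>CONTEXTROOT/run</OutputDirectory>
  <ControlDB>CONTEXTROOT/config/IPDRCONTROL.SQDB</ControlDB>
  <ReportFormat>full</ReportFormat>
  <AddCustomerInfo>true</AddCustomerInfo>
  <MaxRecords>250000</MaxRecords>
</IPDR>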

Advanced DB Parameters

Database parameters to optimize. Generally these need to be changed only for very large deployments that are facing significant performance issues.

The following table lists some parameters that might be useful.

Under the node: DBParameters > FlowStream

Parameters | Defaults | Description
MicroSecondTimestamps | TRUE | Whether the flow database needs microsecond timestamps. Use case: compliance for large flow stores. Disabling microsecond timestamps for start and end times can save about 8 bytes per flow.
ZFLOWBLOCK_COMPRESSOR_CODE | lz4 | The compressor type for the flow database. Available values: lz4; lz4-fast16 (advanced compression, use only if necessary; lz4-fast5 and lz4-fast10 are also supported); lz4-ipv4-call-log-with-nat-pro (use for IPv4-only IPDR applications with NAT IP, maximum compression); lz4-ip-call-log-with-nat-pro-max (for both IPv4 and IPv6 with NAT IP, port, and user ID for the full log).
kFLOWS_PER_BLOCK | 4096 | The number of flows per block.
kBLOOM_AGG_SIZE |  | The number of flow blocks per bloom filter.
kBUMPX_AGG_SIZE |  | The number of flow blocks per full bitmap filter index.
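
A sketch of where these would sit, assuming the path DBParameters > FlowStream corresponds to a nested pair of XML nodes and each parameter is a child element (names taken from the table, nesting assumed):

<DBParameters>
  <FlowStream>
    <!-- sketch: defaults from the table above -->
    <MicroSecondTimestamps>TRUE</MicroSecondTimestamps>
    <ZFLOWBLOCK_COMPRESSOR_CODE>lz4</ZFLOWBLOCK_COMPRESSOR_CODE>
    <kFLOWS_PER_BLOCK>4096</kFLOWS_PER_BLOCK>
  </FlowStream>
</DBParameters>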