Author Topic: Parsing Genesys log files  (Read 7721 times)

Offline genesysguru

  • Sr. Member
  • ****
  • Posts: 293
  • Karma: 12
    • Genesys Guru Blog
Parsing Genesys log files
« on: December 16, 2016, 11:01:48 PM »
Hi All,

Just wondering if anybody has tried parsing standard Genesys log files (Confserv, T-Server, SIP Server and URS, let's say) using Grok and custom filters to parse out the unstructured data? Also, since Grok sits on top of regular expressions, any regular expressions you might have are valid in Grok as well. Just trying to save myself some time writing them from scratch .....
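Something like this is what I have in mind (a minimal sketch; the field names are just placeholders) - a raw named-group regex sitting right next to the predefined patterns in a single match:

grok {
   # %{TIME} is a stock Grok pattern; (?<logMsgId>...) is plain regex
   match => { "message" => "%{TIME:ts} %{WORD:level} (?<logMsgId>[0-9]{5}) %{GREEDYDATA:rest}" }
}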

Thanks
Craig

Offline eugene

  • Jr. Member
  • **
  • Posts: 85
  • Karma: 2
Re: Parsing Genesys log files
« Reply #1 on: December 17, 2016, 02:28:34 PM »
This would absolutely rock. If you're able to create something, genesysguru, please share. I've been working with ORS logs in ES + Kibana and have found it to be an invaluable tool for troubleshooting. I can see the potential of streaming Genesys logs through Logstash and deriving some analytics from them.

Offline Kubig

  • Hero Member
  • *****
  • Posts: 2734
  • Karma: 43
Re: Parsing Genesys log files
« Reply #2 on: December 19, 2016, 08:55:10 PM »
I had written a parser using sed, awk and other shell commands. This looks similar, but a little bit easier to use and develop.
Genesys certified professional consultant (GVP, SIP, GIR and Troubleshooting)

Offline genesysguru

  • Sr. Member
  • ****
  • Posts: 293
  • Karma: 12
    • Genesys Guru Blog
Re: Parsing Genesys log files
« Reply #3 on: December 21, 2016, 06:11:16 AM »
Hi Eugene,

Clearly you get where I am coming from  :D In fact this is something I have been looking at for a number of years; I have a shelved project which gets events from Genesys components via the PSDK and fires them into Esper for some complex event processing (CEP). Voxeo / Aspect also went down this log processing route using Splunk, but in the wider context using Splunk for Genesys log processing was not cost effective. However, the momentum of ELK in the last 12 months has changed this significantly and I think it's time for Genesys Management 2.0!

If you look at the current Genesys Management layer, it's not exactly fit for purpose. Yes, you can raise alarms and send SNMP traps, but that just gets you into the Sh*t in Sh*t out (SISO) problem whereby so many alarms are sent that they just get ignored because "that is normal". Worse still, operational incidents occur for which there are no alarms at all - like SIP INVITEs not being received over a SIP trunk even though the trunk is not OOS.

On top of Management 0.1, which has not changed for years, Genesys have added the Log File Management Tool (LFMT) and the Log Masking Tool, which is just a couple of lines of Java wrapped around regex! Neither is aimed at operational excellence - just at making life easier for Genesys Support.

Hence the original reason for the post - using an ELK stack for Genesys Management 2.0. Surely a few Logstash Grok filters to parse the following Config Server log lines into events, with metadata like the log message ID, would be quite valuable (without stealing the "Spotlight"):

16:29:54.229 Std 24200 Object: [CfgFolder], name [Demands], DBID: [268] is created by client, type [SCE], name: [default], user: [default]
16:30:33.262 Std 24202 Object: [CfgFolder], name [Demands], DBID: [268] is deleted by client, type [SCE], name: [default], user: [default]
16:31:20.017 Std 24201 Object: [CfgRouteDN], name [RES Prepayment - Gas], DBID: [283] is changed by client, type [SCE], name: [default], user: [default]

grok {
   # Capture the time, log level and message ID, then overwrite the
   # message field with just the remainder of the line (without
   # overwrite, grok would turn "message" into an array)
   match => { "message" => "%{TIME:timestamp} %{WORD:loglevel} %{NUMBER:logMsgId} %{GREEDYDATA:message}" }
   overwrite => [ "message" ]
}
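Run against the first 24200 line above, that should come out roughly like this on a rubydebug stdout (a sketch; the real event will also carry the Filebeat metadata fields):

   "timestamp" => "16:29:54.229"
    "loglevel" => "Std"
    "logMsgId" => "24200"
     "message" => "Object: [CfgFolder], name [Demands], DBID: [268] is created by client, type [SCE], name: [default], user: [default]"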

Time to get Grok-ing.

Regards
Craig


Offline Fra

  • Hero Member
  • *****
  • Posts: 856
  • Karma: -3
Re: Parsing Genesys log files
« Reply #4 on: December 21, 2016, 05:56:12 PM »
Craig,

I couldn't agree more.

For whatever reason, Genesys has neglected development of the Management Layer, which is now years behind the rest of their ecosystem.
I see huge gaps in the following areas:
  • ability to correlate alarms, i.e. events / conditions raised by different applications for the same host / application
  • ability to set criteria upon which a certain alarm is raised; there's no way at the moment to say "if condition X is met Y times in Z interval, then raise a warning". Currently, either you clear a specific alarm to see the remaining lot coming through afterwards (= too many alarms), or you leave it there for a long time with the risk of masking further occurrences of the same condition (= too few alarms)
  • the logic that GVP messaging & alarming is built on is not coherent with the other solutions
  • proper SIP to ML mapping
  • no aggregation or proper parsing of Outbound PA Sessions
Food for thought :)

Fra

Offline eugene

  • Jr. Member
  • **
  • Posts: 85
  • Karma: 2
Re: Parsing Genesys log files
« Reply #5 on: December 22, 2016, 03:10:40 PM »
Craig, yep, I totally see where you're going with this. Honestly, I just haven't had the time to learn Logstash and the various filters.


I can see that ELK + the other products from the Elastic stack will bode really well for Genesys Management 2.0.

The biggest upside I'm seeing is the sheer speed of using Kibana to drill into a range of interactions - I was able to filter, say, 2 weeks of ORS interactions, then filter by ConnID to find the node, applications etc. to start my analysis.
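For example (assuming the connid field name from the grok patterns in this thread), a one-line Lucene query in the Kibana search bar pulls every event for a single call:

   connid:0001028debf65001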

I'm in an environment where today there are 4 active ORS nodes, with plans to scale out another 4, so the tool has proven to be invaluable.

If you're able to come up with a few nifty filters please share....


Offline genesysguru

  • Sr. Member
  • ****
  • Posts: 293
  • Karma: 12
    • Genesys Guru Blog
Re: Parsing Genesys log files
« Reply #6 on: December 23, 2016, 01:20:14 AM »
Hi Eugene,

Progress so far ...

I'm using filebeat with different prospectors for each file type (note that scan_frequency takes a unit, so "10s" rather than a bare "10"):

filebeat.prospectors:

# One prospector per Genesys component, so each gets its own document_type
- input_type: log
  paths:
    - c:\logs\confserv\*
  scan_frequency: 10s
  close_inactive: 1m
  document_type: genesys_confserv
  tags: ["genesys"]

- input_type: log
  paths:
    - c:\logs\GIM\*
  scan_frequency: 10s
  close_inactive: 1m
  document_type: genesys_gim
  tags: ["genesys"]

..... and so on

Then in Logstash I have a beats input with a multiline codec to stitch continuation lines back into a single event (each prospector's document_type arrives as the [type] field, which I use further down to apply component-specific filters):

      codec => multiline {
         # Any line that does not start with a timestamp is a
         # continuation of the previous event
         pattern => "%{TIME}(.*)"
         negate => true
         what => "previous"
      }
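In context, the whole input section looks roughly like this (a sketch; the port is an assumption and must match the logstash output in filebeat.yml):

input {
   beats {
      port => 5044   # assumed beats port
      codec => multiline {
         pattern => "%{TIME}(.*)"
         negate => true
         what => "previous"
      }
   }
}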

Then in the filter section I grok out interesting stuff like this ...

   # Pull out useful metadata :-)
   
   # Common
   grok {
      break_on_match => false
      # 0001028debf65001
      match => { "message" => "(.*?)(?<connid>[0-9a-f]{16})(.*)" }
      # PU35RRP9HL2A59TA1L2JK5C938000001
      match => { "message" => "(.*?)(?<calluuid>[0-9A-Z]{32})(.*)" }
      
      # Attributes
      match => { "message" => "(.*?)(AttributeCallState\t)%{NUMBER:callstate}(.*)" }
      match => { "message" => "(.*?)(AttributeCallType\t)%{NUMBER:calltype}(.*)" }
      match => { "message" => "(.*?)(AttributeCallID\t)(?<callid>[0-9+]+)(.*)" }
      match => { "message" => "(.*?)(AttributeConnID\t)(?<connid>[0-9a-f]{16})(.*)" }
      match => { "message" => "(.*?)(AttributeCallUUID\t)'(?<calluuid>[0-9A-Z]{32})(.*)" }
      match => { "message" => "(.*?)(AttributeDNIS\t)(?<dnis>'[0-9+]+')(.*)" }
      match => { "message" => "(.*?)(AttributeANI\t)(?<ani>'[0-9+]+')(.*)" }
      match => { "message" => "(.*?)(AttributeThisDN\t)(?<thisdn>'[0-9+]+')(.*)" }
      match => { "message" => "(.*?)(AttributeThisQueue\t)(?<thisqueue>'[0-9+]+')(.*)" }
      match => { "message" => "(.*?)(AttributePartyUUID\t)'(?<calluuid>[0-9A-Z]{32})(.*)" }
      match => { "message" => "(.*?)(AttributeOtherDN\t)(?<otherdn>'[0-9+]+')(.*)" }
      
      # Events
      match => { "message" => "(.*?)(?<eventName>Event[A-Z]\w+)(.*)" }
   }   
      
   # Genesys Configuration Server
   if ([type] == "genesys_confserv") {
      grok {
         break_on_match => false
         # 12:08:38.187 Std 04523 Connection to client '376' closed, reason 'Server with this name is already running'
         match => { "message" => "%{TIME:timestamp} %{WORD:logLevel} %{NUMBER:logMsgId}( .*)" }
         # Object: [CfgFolder], name [Demands], DBID: [268] is deleted by client, type [SCE], name: [default], user: [default]
         match => { "message" => "(.*?)(Object: )(?<objecttype>\[(.*?)\])(,.*)(name )(?<objectname>\[(.*?)\])(,.*)(is )(?<action>(.*?))( .*)(user: )(?<username>\[(.*?)\])" }
      }
   }
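For completeness, the output side is just the stock Elasticsearch output - a minimal sketch, assuming a local single-node ES; the daily index name is hypothetical:

output {
   elasticsearch {
      hosts => ["localhost:9200"]        # assumed; point at your cluster
      index => "genesys-%{+YYYY.MM.dd}"  # hypothetical daily index
   }
}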



Offline genesysguru

  • Sr. Member
  • ****
  • Posts: 293
  • Karma: 12
    • Genesys Guru Blog
Re: Parsing Genesys log files
« Reply #7 on: December 23, 2016, 01:23:33 AM »
PS Fra - ElastAlert on top of Elasticsearch provides exactly the notifications you mentioned  :D The built-in rule types cover your "Y times in Z interval" case (see the rule sketch after this list):

•   “Match where there are X events in Y time” (frequency type)
•   “Match when the rate of events increases or decreases” (spike type)
•   “Match when there are less than X events in Y time” (flatline type)
•   “Match when a certain field matches a blacklist/whitelist” (blacklist and whitelist type)
•   “Match on any event matching a given filter” (any type)
•   “Match when a field has two different values within some time” (change type)
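As a sketch (assuming the logMsgId field and the genesys-* index naming from the posts above), a frequency rule for the 04523 "Connection to client closed" message would look something like this - alert if it fires more than 10 times in 5 minutes:

name: confserv-client-disconnect-spike   # hypothetical rule name
type: frequency
index: genesys-*
num_events: 10
timeframe:
  minutes: 5
filter:
- term:
    logMsgId: "04523"
alert:
- "email"
email:
- "ops@example.com"                      # hypothetical recipient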

Craig