PreEmptive Analytics Workbench User Guide


Before reviewing the information below, please review the Known Issues in case your issue is documented there.

Browser-Side Issues

Data is not appearing in the Portal

In this scenario, first check the RabbitMQ queues to determine whether messages are actually reaching the endpoint, and if so, whether the Computation Service is successfully writing them into the MongoDB database or they are ending up in the error queue.

Data does not appear after adding or updating a Pattern Indexer

By default, patterns extend all queries. If the pattern is not constructed properly, this may cause conflicts that prevent the portal from showing any information.

For example, a pattern referencing a field that is never properly added to a bin will always return empty. Similarly, a pattern that operates on the feature level may prevent session queries from completing, because feature information is not copied over to session bins.

Due to the design of the computation pipeline, any information processed while a faulty pattern is in place will still be properly indexed; only the portal query is affected, and only for as long as the pattern indexer is installed. Shutting down the Workbench and removing the plugin will resolve the situation.

RabbitMQ status

RabbitMQ contains an administrative panel that provides full insight into its current state. It can be accessed at http://localhost:15672/#/queues, with a username of guest and password of guest (note that the RabbitMQ panel must be accessed on the host the Workbench is installed on, even if the Portal is available remotely).

The Workbench uses two queues within RabbitMQ, workbench-endpoint and workbench-error. The endpoint is the queue that parsed messages are pushed onto for processing by the Computation Service, while any messages that error within computation will be queued in workbench-error for error resolution (see next section for retrying these messages).

The administrative panel will show the total number of messages in a queue, as well as the incoming rate and the number of messages being processed. Check whether the endpoint queue is (nearly) empty (it should be), and whether the error queue is growing quickly (it should not be). If either queue is growing quickly, then the issue is probably within the Workbench. If neither queue is growing, then either data is not being delivered to the endpoint, or the data is being processed successfully.
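The interpretation above can be sketched as a small script. The diagnose() helper and its thresholds are illustrative assumptions for this guide, not part of the Workbench; the /api/queues URL and guest/guest credentials are the RabbitMQ management-plugin defaults described above.

```python
# Sketch: classify Workbench queue health from RabbitMQ management-API counts.
import json
import urllib.request


def diagnose(endpoint_depth, error_depth):
    """Map queue depths to a rough diagnosis, per the guidance above."""
    if endpoint_depth > 100:   # endpoint queue backing up: computation stalled
        return "computation-stalled"
    if error_depth > 0:        # messages failing inside computation
        return "errors-accumulating"
    return "healthy-or-no-traffic"


def queue_depths(host="localhost", port=15672, user="guest", password="guest"):
    """Read message counts for the two Workbench queues via the management API."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, f"http://{host}:{port}/", user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(f"http://{host}:{port}/api/queues") as resp:
        queues = {q["name"]: q["messages"] for q in json.load(resp)}
    return queues.get("workbench-endpoint", 0), queues.get("workbench-error", 0)


# Against a live Workbench you would call: diagnose(*queue_depths())
print(diagnose(0, 3))  # errors-accumulating
```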

If the issue is internal to the Workbench, all errors are logged in the Windows Event Log, with the logging source reflecting the component that detected the error (endpoint or computation). If the Computation Service is not pulling messages, restart it from the Windows Services panel and check the Event Viewer to confirm it has started correctly.

Processing the error queue

Messages may fail to process due to bugs in custom indexers or internal Workbench issues such as connectivity problems. These messages are deposited in the workbench-error RabbitMQ queue so they can be reprocessed once the underlying issue has been resolved. You can tell the Workbench to reprocess this queue by using the admin utility.

The progress of the error reprocessing can be monitored in the RabbitMQ administrative panel per the prior section.

If the error queue is not fully processed, check the Windows Event Log and the other sections of this troubleshooting guide for additional details.

A new report does not appear in the Portal

Remember to add the report to the config.json file.

Transformations are failing

Most transformation failures that do not produce specific error messages are due to incorrect Vega transform specifications. Vega is a third-party technology that favors speed over error handling, so issues can be difficult to debug. The Portal captures errors thrown while processing each Vega spec, but cannot narrow them down to a specific transform.

To identify where in a transformation an error is being thrown, use the log transform.

{ "type": "log", "label": "Position1", "meta": true }

Some common error messages encountered when configuring transforms include:

Syntax Error: ...

The JSON for this component is invalid. If you have trouble finding the issue, try copying the relevant text into a JSON linting tool such as JSONFormatter or JSONLint.

TypeError: Cannot read property 'part2' of undefined

This occurs when a transform tries to access data at a path such as part1.part2 where part1 is not present (undefined, null, or not an object). Add a log transform before the failing transform to view the available properties. Remember that some properties will not be present on all entities within a dataset. If you are writing formula, filter, or sort transforms, make sure you handle the case where part1 could be undefined or null. For example:

{ "type": "formula", "field": "abs", "type": "part1 ? abs(part1.part2) : 0" }


{ "type": "filter", "expr": "part1 ? part1.part2 < 10 : false" }

TypeError: Object function func(data) { ... } has no method 'expr'

Incorrectly specifying a property of a transform will throw an error like this. For example, if you specify a sort transform using 'expr' instead of 'by', you will see this error message.

Server-Side Issues

MongoDB service will not start

If the MongoDB service fails to start, check the log (C:\mongodb\log\mongo.log) for errors. Note that the service will not start if the logpath in the configuration file points to a non-existent folder, or if the server cannot access it. It may also fail to start if the configuration file is improperly formatted; ensure there are no TAB characters in the mongod.cfg file.
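A quick way to check a configuration file for stray TABs is a script like the following. The find_tabs() helper is a hypothetical convenience for this guide, not a MongoDB tool:

```python
# Sketch: report 1-based line numbers in a mongod.cfg that contain a TAB,
# which can prevent the MongoDB service from starting.
def find_tabs(config_text):
    """Return the line numbers that contain a TAB character."""
    return [i for i, line in enumerate(config_text.splitlines(), start=1)
            if "\t" in line]


sample = "logpath=C:\\mongodb\\log\\mongo.log\ndbpath=\tC:\\mongodb\\data\n"
print(find_tabs(sample))  # [2]
```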

Custom indexer isn't working

Follow these guidelines if messages that should be processed by your custom indexer do not seem to be activating the plugin. The problem will also be apparent if any Portal widgets are configured to use a Server Query defined by the plugin, since they will error and display the reason.

  • Deactivate the Computation Service through the Windows Service Admin -> PreEmptive Analytics Workbench Computation Service. Deactivate the IIS website.
  • Ensure that the DLL for the plugin is located in the config/Plugins folder and that any third party plugin dependencies are in config/Plugins/Dependencies. Include only the plugin and third party dependencies; there is no need to copy the Workbench dependencies.
  • Activate the Computation Service and IIS website.
  • If the problem remains, check the Windows Event Viewer. It will have a detailed message about the nature of the problem.
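The folder layout described in the steps above can be sanity-checked with a script. The folder names come from this guide; check_layout() itself is a hypothetical helper, not a Workbench tool:

```python
# Sketch: verify a custom indexer DLL and its third-party dependencies sit in
# the expected folders under the Workbench config directory.
from pathlib import Path


def check_layout(config_dir, plugin_dll, dependency_dlls):
    """Return a list of human-readable problems with the plugin layout."""
    problems = []
    plugins = Path(config_dir) / "Plugins"
    if not (plugins / plugin_dll).is_file():
        problems.append(f"missing {plugin_dll} in {plugins}")
    deps = plugins / "Dependencies"
    for dll in dependency_dlls:
        if not (deps / dll).is_file():
            problems.append(f"missing dependency {dll} in {deps}")
    return problems


# Example (paths relative to the Workbench install root):
# check_layout("config", "MyIndexer.dll", ["ThirdParty.dll"])
```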

Custom geolocation isn't working

Geolocation is included with the default installation of the Workbench as a plugin (config/Plugins/MaxMindGeoLocation.dll). Also included, in the config/Geoloc folder, is a free IP-based city lookup database. Premium versions of the MaxMind® Geolocation city database can be used instead of the built-in file, but be sure to rename the file to GeoIPCity.dat or the plugin will not be able to find it.

If implementing custom geolocation, make sure to include its API DLL in the config/Plugins/Dependencies folder, as well as referencing any IP database that may be required.

General Debugging Advice

Debugging Portal Issues

For most issues, start by checking the browser's console (usually accessed by pressing F12). This should give you a rough idea of where in the process any issues occurred.

The metadata query (see below) has information on queries, fields and filters that are available to the Portal.

The 'log' transform is extremely useful when debugging transformations (see Transformations are failing).

Metadata Query

The server provides a "metadata query" that catalogs all the available queries and their details (domain, name, fields, filters, and field metadata). You can access this directly to see which queries are available from the server, at http://localhost:88/Interaction/query_metadata (or substitute the appropriate host name).

This query can be viewed in JSON format as well, at http://localhost:88/Interaction/query_metadata?format=json
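A script can fetch and summarize the JSON form of the metadata query. The response shape assumed below (a list of query entries with "name" and "fields" keys) is an illustration only; inspect the real output of the query_metadata URL to confirm the actual structure:

```python
# Sketch: fetch the Portal's metadata query and map query names to field counts.
import json
import urllib.request


def summarize(metadata):
    """Map each query name to the number of fields it exposes."""
    return {q["name"]: len(q.get("fields", [])) for q in metadata}


def fetch(host="localhost", port=88):
    """Retrieve the metadata query in JSON format from the server."""
    url = f"http://{host}:{port}/Interaction/query_metadata?format=json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


# Against a live server you would call: summarize(fetch())
sample = [{"name": "SessionCounts", "fields": ["start", "count"]}]
print(summarize(sample))  # {'SessionCounts': 2}
```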

A browser extension that pretty-prints JSON can make this output considerably easier to read.

Adding Logging to Custom Indexers

Logging can be added to custom indexers to help with troubleshooting. To enable logging, the indexer must have the logging facility injected into its constructor, as in the SampleIndexer included in the WorkbenchSample project.

private readonly IFunctionalLogger logger;

public SampleIndexer(IFunctionalLogger logger, ...)
{
    this.logger = logger;
}

The logger can then be used with the following function call:

this.logger.Log("Custom*", Message, Exception, LoggingLevel, params Variable_Values);

The arguments are:

  • The first parameter is the name of the log as presented through the NLog facility. It should be given a name that identifies the custom indexer but must start with "Custom" so that NLog's routing works correctly.
  • The Message should be descriptive and may include variables inside {}, for example "This is a sample message at time {Time}."
  • The Exception is the associated exception if the logging occurs in a catch/try block, or otherwise null.
  • LoggingLevel is one of Debug, Info, Warn, Error, or Fatal. NLog uses the LoggingLevel to route messages; refer to the NLog Website for details. By default, messages at the Warn level and above are directed to the Windows Event Viewer.
  • The last parameters are KeyValuePairs for each of the variables defined within the message.

The SampleIndexer demonstrates this with the following call:

this.logger.Log("CustomSampleIndexer", "This is a sample logging message recording {Bytes}",
                new KeyValuePair<string,object>("Bytes", mbs));

Configure Workbench Logging

The Workbench contains a logging facility that uses NLog to define targets. There is a default NLog configuration that records all messages with status warning and above into the Windows Event Viewer. The log can be accessed through Run -> Event Viewer -> Applications and Services Logs -> PA Workbench. This log is invaluable in diagnosing problems that occur in any Workbench component, as well as in the debugging of custom indexers.

Each entry in the log will display the log-level as well as its source. The sources are: Endpoint, Computation, Querying, and Custom. The first three refer to the PA components of the Workbench, while the latter is reserved for user extensions.

In addition to the default configuration, NLog can be configured to contain targets such as files or databases. Full information about configuration is available at the NLog Website.
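As a minimal sketch of what such a configuration might look like: the target name, file path, and minimum level below are assumptions, not Workbench defaults; the "Custom*" logger name follows the naming rule described under Adding Logging to Custom Indexers. See the NLog documentation for the full schema.

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- hypothetical file target for custom-indexer output -->
    <target name="customFile" xsi:type="File"
            fileName="C:\logs\workbench-custom.log" />
  </targets>
  <rules>
    <!-- route messages from "Custom*" loggers to the file target -->
    <logger name="Custom*" minlevel="Debug" writeTo="customFile" />
  </rules>
</nlog>
```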

MongoDB Logging

In rare circumstances, errors can occur due to internal MongoDB issues. In particular, these may occur due to:

  • Lack of disk space
  • Long SchemaIndex keys that are above MongoDB's limit of 1024 bytes
  • Too many unique applications (1000+) present in the Workbench

Most MongoDB installations have logging turned on (the instructions in this guide suggest P:\data\log as a location). If an error is occurring without an associated entry in the Windows Event Viewer, and there is the possibility of one of these scenarios, then please check these logs to confirm.

Some instances of MongoDB errors may be solvable quickly:

  • If an error message indicates that index keys are above MongoDB's limit, then the appropriate FieldKey can be changed to support large values, as specified in the customization overview.
  • If an error message indicates the paging file is too small (error 1455), then the MongoDB Production Notes recommend setting the Windows page file to the largest feasible multiple of 32 GB.

WMI Counters

The Workbench contains a suite of WMI counters for monitoring computation performance. These can be accessed through Run -> Performance Monitor -> Add Counters (the green plus sign) -> PA Workbench. These counters give insight into the Workbench status and processing performance. From here, it is possible to see whether the endpoint, computation and Query Web Service are enabled, as well as the number of messages processed, errored or rejected by the endpoint.

Workbench Version 1.2.0. Copyright © 2016 PreEmptive Solutions, LLC