Tyk Pump


Tyk Pump is a pluggable analytics purger to move Analytics generated by your Tyk nodes to any back-end.


Tyk Analytics Schema

The table below provides details on the fields within each tyk_analytics record.

| Analytics Data Field | Description | Remarks | Example |
|---|---|---|---|
| Method | Request method | | GET, POST |
| Host | Request Host header | Includes host and optional port number of the server to which the request was sent. | tyk.io, or tyk.io:8080 if port is included |
| Path | Request path | Displayed in decoded form. | /foo/bar for /foo%2Fbar or /foo/bar |
| RawPath | Request path | Same value as Path. Does not provide the raw encoded path. | /foo/bar for /foo%2Fbar or /foo/bar |
| ContentLength | Request Content-Length header | The number of bytes in the request body. | 10 for request body 0123456789 |
| UserAgent | Request User-Agent header | | curl/7.86.0 |
| Day | Request day | Based on TimeStamp field. | 16 for 2022-11-16T03:01:54Z |
| Month | Request month | Based on TimeStamp field. | 11 for 2022-11-16T03:01:54Z |
| Year | Request year | Based on TimeStamp field. | 2022 for 2022-11-16T03:01:54Z |
| Hour | Request hour | Based on TimeStamp field. | 3 for 2022-11-16T03:01:54Z |
| ResponseCode | Response code | Only contains the integer element of the response code. Can be generated by either the gateway or upstream server, depending on how the request is handled. | 200 for 200 OK |
| APIKey | Request authentication key | Authentication key, as provided in the request. If no API key is provided then the gateway will substitute a default value. | Unhashed auth_key, hashed 6129dc1e8b64c6b4, or 00000000 if no authentication provided |
| TimeStamp | Request timestamp | Generated by the gateway, based on the time it receives the request from the client. | 2022-11-16T03:01:54.648+00:00 |
| APIVersion | Version of API Definition requested | Based on the version configuration of the requested API definition. If the API is unversioned then the value is Not Versioned. | Could be an alphanumeric value such as 1 or b; Not Versioned if not versioned |
| APIName | Name of API Definition requested | | Foo API |
| APIID | Id of API Definition requested | | 727dad853a8a45f64ab981154d1ffdad |
| OrgID | Organisation Id of API Definition requested | | 5e9d9544a1dcd60001d0ed20 |
| OauthID | Id of OAuth client | Value is empty string if not using OAuth, or OAuth client not present. | my-oauth-client-id |
| RequestTime | Duration of upstream roundtrip | Equal to value of Latency.Total field. | 3 for a 3ms roundtrip |
| RawRequest | Raw HTTP request | Base64 encoded copy of the request sent from the gateway to the upstream server. | R0VUIC9nZXQgSFRUUC8xLjEKSG9zdDogdHlrLmlv |
| RawResponse | Raw HTTP response | Base64 encoded copy of the response sent from the gateway to the client. | SFRUUC8xLjEgMjAwIE9LCkNvbnRlbnQtTGVuZ3RoOiAxOQpEYXRlOiBXZWQsIDE2IE5vdiAyMDIyIDA2OjIxOjE2IEdNVApTZXJ2ZXI6IGd1bmljb3JuLzE5LjkuMAoKewogICJmb28iOiAiYmFyIgp9Cg== |
| IPAddress | Client IP address | Taken from either X-Real-IP or X-Forwarded-For request headers, if set. Otherwise, determined by the gateway based on the request. | 172.18.0.1 |
| Geo | Client geolocation data | Calculated using MaxMind database, based on client IP address. | {"country":{"isocode":"SG"},"city":{"geonameid":0,"names":{}},"location":{"latitude":0,"longitude":0,"timezone":""}} |
| Network | Network statistics | Not currently used. | N/A |
| Latency | Latency statistics | Contains two fields: upstream is the roundtrip duration between the gateway sending the request to the upstream server and receiving a response; total is the upstream value plus additional gateway-side functionality such as processing analytics data. | {"total":3,"upstream":3} |
| Tags | Session context tags | Can contain many tags which refer to many things, such as the gateway, API key, organisation, API definition etc. | ["key-00000000","org-5e9d9544a1dcd60001d0ed20","api-accbdd1b89e84ec97f4f16d4e3197d5c"] |
| Alias | Session alias | Alias of the authenticated identity. Blank if no alias is set or the request is unauthenticated. | my-key-alias |
| TrackPath | Tracked endpoint flag | Value is true if the requested endpoint is configured to be tracked, otherwise false. | true or false |
| ExpireAt | Future expiry date | Can be used to implement automated data expiry, if supported by storage. | 2022-11-23T07:26:25.762+00:00 |
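
For illustration, a single tyk_analytics record assembled from the example values above might look like the following (values are hypothetical):

{
  "Method": "GET",
  "Host": "tyk.io",
  "Path": "/foo/bar",
  "ResponseCode": 200,
  "APIKey": "00000000",
  "TimeStamp": "2022-11-16T03:01:54.648+00:00",
  "APIVersion": "Not Versioned",
  "APIName": "Foo API",
  "APIID": "727dad853a8a45f64ab981154d1ffdad",
  "OrgID": "5e9d9544a1dcd60001d0ed20",
  "RequestTime": 3,
  "Latency": {"total": 3, "upstream": 3},
  "IPAddress": "172.18.0.1",
  "Tags": ["key-00000000", "org-5e9d9544a1dcd60001d0ed20"],
  "TrackPath": false,
  "ExpireAt": "2022-11-23T07:26:25.762+00:00"
}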

Pumps / back ends supported (each is described in its own section below): Mongo (Tyk Dashboard), Mongo Graph, SQL (SQLite/Postgres), SQL Aggregate, SQL Graph, Elasticsearch, Graylog, Resurface.io, StatsD, DogStatsD, Splunk, Logz.io, Kafka, InfluxDB 2.x, Moesif, Hybrid (RPC), Prometheus, Syslog, Stdout, AWS Timestream, AWS SQS, AWS Kinesis, CSV.

Configuration:

This will be your base config. We will then add one or more pumps based on our selected data sinks.

JSON / Conf File

Create a pump.conf file:

{
  "analytics_storage_type": "redis",
  "analytics_storage_config": {
    "type": "redis",
    "host": "localhost",
    "port": 6379,
    "hosts": null,
    "username": "",
    "password": "",
    "database": 0,
    "optimisation_max_idle": 100,
    "optimisation_max_active": 0,
    "enable_cluster": false,
    "redis_use_ssl": false,
    "redis_ssl_insecure_skip_verify": false
  },
  "log_level":"info",
  "log_format":"text",
  "purge_delay": 1,
  "health_check_endpoint_name": "hello",
  "health_check_endpoint_port": 8083,
  "pumps": {
    "csv": {
      "type": "csv",
      "meta": {
        "csv_dir": "./"
      }
    },
	...
  },
  "dont_purge_uptime_data": true,
  "omit_detailed_recording": false,
  "max_record_size": 1000
}
Env Variables
TYK_PMP_OMITCONFIGFILE=true

TYK_PMP_ANALYTICSSTORAGETYPE=redis
TYK_PMP_ANALYTICSSTORAGECONFIG_TYPE=redis
TYK_PMP_ANALYTICSSTORAGECONFIG_ADDRS=tyk-redis:6379
TYK_PMP_ANALYTICSSTORAGECONFIG_USERNAME=
TYK_PMP_ANALYTICSSTORAGECONFIG_PASSWORD=
TYK_PMP_ANALYTICSSTORAGECONFIG_DATABASE=0
TYK_PMP_ANALYTICSSTORAGECONFIG_MAXIDLE=100
TYK_PMP_ANALYTICSSTORAGECONFIG_MAXACTIVE=100
TYK_PMP_ANALYTICSSTORAGECONFIG_ENABLECLUSTER=false
TYK_PMP_PURGEDELAY=2

TYK_PMP_DONTPURGEUPTIMEDATA=true

Base Configuration Fields Explained

analytics_storage_config

This is the Tyk Pump's primary database, from which it scrapes Tyk Gateway analytics. Normally this is Redis.

  "analytics_storage_config": {
    "type": "redis",
    "host": "localhost",
    "port": 6379,
    "hosts": null,
    "username": "",
    "password": "",
    "database": 0,
    "optimisation_max_idle": 100,
    "optimisation_max_active": 0,
    "enable_cluster": false,
    "redis_use_ssl": false,
    "redis_ssl_insecure_skip_verify": false
  },

redis_use_ssl - Set this to true to use SSL when connecting to Redis

redis_ssl_insecure_skip_verify - Set this to true to tell Pump to ignore Redis' certificate validation

Purge configuration

purge_delay - The number of seconds the Pump waits between checking for analytics data and purging it from Redis.

purge_chunk - The maximum number of records to pull from Redis at a time. If it's unset or 0, all the analytics records in Redis are pulled. If it is set, storage_expiration_time is used to reset the analytics record TTL.

storage_expiration_time - The number of seconds for the analytics records TTL. It only works if purge_chunk is enabled. Defaults to 60 seconds.
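
A minimal sketch combining the three purge options (values are illustrative):

{
  "purge_delay": 1,
  "purge_chunk": 1000,
  "storage_expiration_time": 60
}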

Logs

log_level - Set the logger details for tyk-pump. The possible values are: info, debug, error and warn. By default, the log level is info.

log_format - Set the logger format. The possible values are: text and json. By default, the log format is text.

Health Check

You can configure the health check endpoint and port for the Tyk Pump:

  • health_check_endpoint_name - The default is "health"
  • health_check_endpoint_port - The default port is 8083

This returns an HTTP 200 OK response if the Pump is running.

{"status": "ok"}

Pump Configurations

Uptime Data

dont_purge_uptime_data - Setting this to false creates a pump that pushes uptime data to the (Mongo) Uptime Pump, so the Tyk Dashboard can read it. Disable uptime data purging by setting this to true.

Take into account that you can also set the log_level field in uptime_pump_config to debug, info or warning. By default, the SQL logger verbosity is silent.

Mongo Uptime Pump

In uptime_pump_config you can configure a Mongo uptime pump. By default, the uptime pump is of mongo type, so it's not necessary to specify it here.

The minimum required configurations for uptime pumps are:

  • collection_name - Determines the uptime collection name in Mongo. By default, tyk_uptime_analytics.

  • mongo_url - The uptime pump mongo connection url. It is usually something like "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}".

JSON / Conf File
{
  "uptime_pump_config": {
    "collection_name": "tyk_uptime_analytics",
    "mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}",
    "log_level": "info"
  }
}
Env Variables:
TYK_PMP_UPTIMEPUMPCONFIG_COLLECTIONNAME=tyk_uptime_analytics
TYK_PMP_UPTIMEPUMPCONFIG_MONGOURL=mongodb://tyk-mongo:27017/tyk_analytics
TYK_PMP_UPTIMEPUMPCONFIG_MAXINSERTBATCHSIZEBYTES=500000
TYK_PMP_UPTIMEPUMPCONFIG_MAXDOCUMENTSIZEBYTES=200000
TYK_PMP_UPTIMEPUMPCONFIG_LOGLEVEL=info

SQL Uptime Pump

Supported in Tyk Pump v1.5.0+

In uptime_pump_config you can configure a SQL uptime pump. To do that, you need to add the field uptime_type with sql value. You can also use different types of SQL Uptime pumps, like postgres or sqlite using the type field.

JSON / Conf file Example
    "uptime_pump_config": {
        "uptime_type": "sql",
        "type": "postgres",
        "connection_string": "host=sql_host port=sql_port user=sql_usr dbname=dbname password=sql_pw",
        "table_sharding": false,
	"log_level": "info"
    },
Env Variables:
TYK_PMP_UPTIMEPUMPCONFIG_UPTIMETYPE=sql
TYK_PMP_UPTIMEPUMPCONFIG_TYPE=postgres
TYK_PMP_UPTIMEPUMPCONFIG_CONNECTIONSTRING=host=sql_host port=sql_port user=sql_usr dbname=dbname password=sql_pw
TYK_PMP_UPTIMEPUMPCONFIG_TABLESHARDING=false
TYK_PMP_UPTIMEPUMPCONFIG_LOGLEVEL=info

GrayLog

Example of integrating with GrayLog:

JSON / Conf file Example
"graylog": {
      "type": "graylog",
      "meta": {
        "host": "10.60.6.15",
        "port": 12216,
        "tags": [
          "method",
          "path",
          "response_code",
          "api_key",
          "api_version",
          "api_name",
          "api_id",
          "org_id",
          "oauth_id",
          "raw_request",
          "request_time",
          "raw_response",
          "ip_address"
        ]
      }
    },
Env Variables:
TYK_PMP_PUMPS_GRAYLOG_TYPE=graylog
TYK_PMP_PUMPS_GRAYLOG_META_GRAYLOGHOST=10.60.6.15
TYK_PMP_PUMPS_GRAYLOG_META_GRAYLOGPORT=12216
TYK_PMP_PUMPS_GRAYLOG_META_TAGS=method,path,response_code,api_key,api_version,api_name,api_id,org_id,oauth_id,raw_request,request_time,raw_response,ip_address

Resurface.io

Resurface provides data-driven API security, by making each and every API call a durable transaction inside a purpose-built data lake. Use Resurface for attack and failure triage, root cause analysis, threat and risk identification, and simply knowing how your APIs are being used (and misused!). By continuously scanning your own data lake, Resurface provides retroactive analysis. It identifies what's important in your API data, sending warnings and alerts in real time for fast action.

The only two fields necessary in the pump configuration are:

  • capture_url corresponds to the Resurface database capture endpoint URL. You might need to substitute localhost with the corresponding hostname if you're not running Resurface locally.
  • rules corresponds to an active set of rules that control what data is logged and how sensitive data is masked. The example below applies a predefined set of rules (include debug), but logging rules are easily customized to meet the needs of any application.

Note: Resurface requires Detailed Logging to be enabled in order to capture API call details in full.

JSON / Conf File Example
"resurface": {
    "type": "resurfaceio",
    "meta": {
        "capture_url": "http://localhost:7701/message",
        "rules": "include debug"
    }
}
Env Variables
TYK_PMP_PUMPS_RESURFACEIO_TYPE=resurfaceio
TYK_PMP_PUMPS_RESURFACEIO_META_URL=http://localhost:7701/message
TYK_PMP_PUMPS_RESURFACEIO_META_RULES="include debug"

StatsD

Example of integrating with StatsD:

JSON / Conf file Example
{
  "pumps": {
    "statsd": {
      "type": "statsd",
      "meta": {
        "address": "localhost:8125",
        "fields": [
          "request_time"
        ],
        "tags": [
          "path",
          "response_code",
          "api_key",
          "api_version",
          "api_name",
          "api_id",
          "raw_request",
          "ip_address",
          "org_id",
          "oauth_id"
        ]
      }
    }
  }
}

By default, the StatsD pump puts the analytics record method and path in your path field. From Pump 1.6+ you can set separated_method to true in your StatsD pump meta config in order to have the method attribute in a separate field, as sketched below.
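
A sketch of a StatsD meta block with this option enabled (other fields as in the example above):

"statsd": {
  "type": "statsd",
  "meta": {
    "address": "localhost:8125",
    "separated_method": true,
    "fields": ["request_time"],
    "tags": ["path", "response_code"]
  }
}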

Env Variables:
TYK_PMP_PUMPS_STATSD_TYPE=statsd
TYK_PMP_PUMPS_STATSD_META_ADDRESS="localhost:8125"
TYK_PMP_PUMPS_STATSD_META_FIELDS=request_time
TYK_PMP_PUMPS_STATSD_META_TAGS=path,response_code,api_key,api_version,api_name,api_id,org_id,oauth_id,raw_request,ip_address
TYK_PMP_PUMPS_STATSD_META_SEPARATEDMETHOD=false

Mongo & Tyk Dashboard

There are 3 Mongo pumps. You may use one or multiple depending on the data you want.

The Tyk Dashboard uses various Mongo collections to store and visualize API traffic analytics. Please visit this link for steps on configuration. Available Mongo instances are: Standard Mongo, DocumentDB (AWS) and CosmosDB (Azure). All of them use the same configuration (CosmosDB does not support the "expireAt" index, so it is skipped).

JSON / Conf File
{
  ...
  "pumps": {
    "mongo": {
      "type": "mongo",
      "meta": {
        "collection_name": "tyk_analytics",
        "mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}"
      }
    },
    "mongo-pump-aggregate": {
      "type": "mongo-pump-aggregate",
      "meta": {
        "mongo_url": "mongodb://username:password@{hostname:port},{hostname:port}/{db_name}",
        "use_mixed_collection": true,
        "store_analytics_per_minute": false,
        "aggregation_time": 50,
        "enable_aggregate_self_healing": true,
        "track_all_paths": false
      },
      "mongo-pump-selective": {
        "name": "mongo-pump-selective",
        "meta": {
          "mongo_url": "mongodb://tyk-mongo:27017/tyk_analytics",
          "use_mixed_collection": true
        }
      }
    }
  }
}
Env Variables
TYK_PMP_PUMPS_MONGO_TYPE=mongo
TYK_PMP_PUMPS_MONGO_META_COLLECTIONNAME=tyk_analytics
TYK_PMP_PUMPS_MONGO_META_MAXINSERTBATCHSIZEBYTES=80000
TYK_PMP_PUMPS_MONGO_META_MAXDOCUMENTSIZEBYTES=20112

TYK_PMP_PUMPS_MONGOAGG_TYPE=mongo-pump-aggregate
TYK_PMP_PUMPS_MONGOAGG_META_USEMIXEDCOLLECTION=true
TYK_PMP_PUMPS_MONGOAGG_META_STOREANALYTICSPERMINUTE=false
TYK_PMP_PUMPS_MONGOAGG_META_AGGREGATIONTIME=50
TYK_PMP_PUMPS_MONGOAGG_META_ENABLESELFHEALING=true
Self Healing

By default, the maximum size of a document in MongoDB is 16MB. If we try to update a document that has grown to this size, an error is received.

The Mongo Aggregate pump creates a new document in the database for each "aggregation period"; the length of that period is defined by aggregation_time. If, during that period (in minutes), the document grows beyond 16MB, the error will be received and no more records will be recorded until the end of the aggregation period (when a new document will be created).

The Self Healing option in the Mongo Aggregate Pump avoids this data loss by monitoring the size of the current document. When this hits the 16MB limit, Pump will automatically create a new document with the current timestamp and start writing to that instead.

When it does this, the Pump will halve the aggregation_time so that it aggregates records for half the time period originally configured before creating a new document, reducing the risk of repeatedly hitting the maximum document size.

This self healing is repeatable, however, such that if the document size does reach maximum (16MB) even with the new shorter aggregation period, a new document will be created and the aggregation_time will be halved again.

The minimum value for aggregation_time is 1; Self Healing cannot reduce it beyond this value.

For example, if the aggregation_time is configured as 50 (minutes) but the document hits the maximum size (16MB), a new document will be started and the aggregation_time will be set to 25 (minutes).

Note that store_analytics_per_minute takes precedence over aggregation_time so if store_analytics_per_minute is equal to true, the value of aggregation_time will be equal to 1 and self healing will not operate.

Mongo Graph Pump

As of Pump 1.7+, a new Mongo pump is available, called the mongo-graph pump. This pump is specifically for parsing GraphQL and UDG requests, tracking information such as the types requested, the fields requested and specific GraphQL body errors.

Note that only Gateway v4.3+ is compatible with this new pump. Previous versions send the analytics data to the tyk_analytics collection.

A sample config looks like this:

{
  "pumps": {
    "mongo-graph": {
      "type": "mongo-graph",
      "meta": {
        "collection_name": "graph_analytics",
        "mongo_url": "mongodb://localhost:27017/tyk_analytics"
      }
    }
  }
}

SQL Graph Pump

Similar to the Mongo Graph pump, the sql-graph pump is a specialized pump for parsing and recording granular analytics for GraphQL and UDG requests. The difference, as the name says, is that this pump uses SQL-type databases as its storage. Supported SQL databases are sqlite, postgres and mysql.

A sample config looks like this:

{
  "pumps": {
    "sql-graph": {
      "meta": {
        "type": "postgres",
        "table_name": "graph-records",
        "connection_string": "host=localhost user=postgres password=password dbname=postgres",
        "table_sharding": false
      }
    }
  }
}

table_sharding - This determines how the SQL tables are created. If this is set to true, a new table is created for each day of records for the graph data. The name format for each table is <tablename>. Defaults to false.

Elasticsearch Config

"index_name" - The name of the index that all the analytics data will be placed in. Defaults to "tyk_analytics"

"elasticsearch_url" - If sniffing is disabled, the URL that all data will be sent to. Defaults to "http://localhost:9200"

"enable_sniffing" - If sniffing is enabled, the "elasticsearch_url" will be used to make a request to get a list of all the nodes in the cluster, the returned addresses will then be used. Defaults to false

"document_type" - The type of the document that is created in ES. Defaults to "tyk_analytics"

"rolling_index" - Appends the date to the end of the index name, so each days data is split into a different index name. E.g. tyk_analytics-2016.02.28 Defaults to false

"extended_stats" - If set to true will include the following additional fields: Raw Request, Raw Response and User Agent.

"version" - Specifies the ES version. Use "3" for ES 3.X, "5" for ES 5.X, "6" for ES 6.X, "7" for ES 7.X . Defaults to "3".

"disable_bulk" - Disable batch writing. Defaults to false.

bulk_config: Batch writing trigger configuration. Each option is OR'd with the others, so any one condition triggers a flush; see the sketch after this list:

  • workers: Number of workers. Defaults to 1.
  • flush_interval: Specifies the time in seconds to flush the data and send it to ES. Disabled by default.
  • bulk_actions: Specifies the number of requests needed to flush the data and send it to ES. Defaults to 1000 requests. Can be disabled by setting it to -1.
  • bulk_size: Specifies the size (in bytes) needed to flush the data and send it to ES. Defaults to 5MB. Can be disabled by setting it to -1.
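
A sketch of an Elasticsearch pump entry combining the options above (values are illustrative, not recommendations):

"elasticsearch": {
  "type": "elasticsearch",
  "meta": {
    "index_name": "tyk_analytics",
    "elasticsearch_url": "http://localhost:9200",
    "enable_sniffing": false,
    "document_type": "tyk_analytics",
    "rolling_index": false,
    "extended_stats": false,
    "version": "7",
    "disable_bulk": false,
    "bulk_config": {
      "workers": 2,
      "flush_interval": 60,
      "bulk_actions": 1000,
      "bulk_size": 5242880
    }
  }
}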
Env Variables
TYK_PMP_PUMPS_ELASTICSEARCH_TYPE=elasticsearch
TYK_PMP_PUMPS_ELASTICSEARCH_META_INDEXNAME=tyk_analytics
TYK_PMP_PUMPS_ELASTICSEARCH_META_ELASTICSEARCHURL=http://localhost:9200
TYK_PMP_PUMPS_ELASTICSEARCH_META_ENABLESNIFFING=false
TYK_PMP_PUMPS_ELASTICSEARCH_META_DOCUMENTTYPE=tyk_analytics
TYK_PMP_PUMPS_ELASTICSEARCH_META_ROLLINGINDEX=false
TYK_PMP_PUMPS_ELASTICSEARCH_META_EXTENDEDSTATISTICS=false
TYK_PMP_PUMPS_ELASTICSEARCH_META_VERSION=5
TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_WORKERS=2
TYK_PMP_PUMPS_ELASTICSEARCH_META_BULKCONFIG_FLUSHINTERVAL=60

Moesif Config

Moesif is a user-centric API analytics and monitoring service for APIs. More Info on Moesif for Tyk

  • "application_id" - Moesif Application Id. You can find your Moesif Application Id from Moesif Dashboard -> Top Right Menu -> API Keys . Moesif recommends creating separate Application Ids for each environment such as Production, Staging, and Development to keep data isolated.
  • "request_header_masks" - (optional) An option to mask a specific request header field. Type: String Array [] string
  • "request_body_masks" - (optional) An option to mask a specific - request body field. Type: String Array [] string
  • "response_header_masks" - (optional) An option to mask a specific response header field. Type: String Array [] string
  • "response_body_masks" - (optional) An option to mask a specific response body field. Type: String Array [] string
  • "disable_capture_request_body" - (optional) An option to disable logging of request body. Type: Boolean. Default value is false.
  • "disable_capture_response_body" - (optional) An option to disable logging of response body. Type: Boolean. Default value is false.
  • "user_id_header" - (optional) An optional field name to identify User from a request or response header. Type: String.
  • "company_id_header" - (optional) An optional field name to identify Company (Account) from a request or response header. Type: String.
  • "authorization_header_name" - (optional) An optional request header field name to used to identify the User in Moesif. Type: String. Default value is authorization.
  • "authorization_user_id_field" - (optional) An optional field name use to parse the User from authorization header in Moesif. Type: String. Default value is sub.
  • "enable_bulk" - Set this to true to enable bulk_config.
  • "bulk_config"- (optional) Batch writing trigger configuration.
    • "event_queue_size" - (optional) An optional field name which specify the maximum number of events to hold in queue before sending to Moesif. In case of network issues when not able to connect/send event to Moesif, skips adding new events to the queue to prevent memory overflow. Type: int. Default value is 10000.
    • "batch_size" - (optional) An optional field name which specify the maximum batch size when sending to Moesif. Type: int. Default value is 200.
    • "timer_wake_up_seconds" - (optional) An optional field which specifies a time (every n seconds) how often background thread runs to send events to moesif. Type: int. Default value is 2 seconds.
Env Variables
TYK_PMP_PUMPS_MOESIF_TYPE=moesif
TYK_PMP_PUMPS_MOESIF_META_APPLICATIONID=""

Hybrid RPC Config

Hybrid Pump allows you to install Tyk Pump inside Multi-Cloud or MDCB Worker installations. You can configure Tyk Pump to send data to the back end of your choice (e.g. Elasticsearch) and, in parallel, forward analytics to Tyk Cloud. Additionally, you can set the aggregated flag to send only aggregated analytics to MDCB or Tyk Cloud, in order to save network bandwidth between DCs.

NOTE: Make sure your tyk.conf has analytics_config.type set to empty string value.

rpc_key - Put your organization ID in this field.

api_key - This is the API key of a user, used to authenticate and authorise the Gateway's access through MDCB. The user should be a standard Dashboard user with minimal privileges so as to reduce risk if compromised. The suggested security settings are read for Real-time notifications and the remaining options set to deny.

aggregated - Set this field to true to send only aggregated analytics to MDCB or Tyk Cloud.

connection_string - The MDCB instance or load balancer.

use_ssl - Set this field to true if you need secured connection (default value is false).

ssl_insecure_skip_verify - Set this field to true if you use self signed certificate.

group_id - This is the “zone” that this instance inhabits, e.g. the DC it lives in. It must be unique to each slave cluster / DC.

call_timeout - This is the timeout (in milliseconds) for RPC calls. Default is 10.

rpc_pool_size - This is maximum number of connections to MDCB. Default is 5.
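
A sketch of a hybrid pump entry using the fields above (IDs and address are placeholders):

"hybrid": {
  "type": "hybrid",
  "meta": {
    "rpc_key": "<ORG_ID>",
    "api_key": "<API_KEY>",
    "connection_string": "localhost:9090",
    "aggregated": false,
    "use_ssl": false,
    "ssl_insecure_skip_verify": false,
    "group_id": "",
    "call_timeout": 10,
    "rpc_pool_size": 5
  }
}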

Env Variables
TYK_PMP_PUMPS_HYBRID_TYPE=hybrid
TYK_PMP_PUMPS_HYBRID_META_RPCKEY=5b5fd341e6355b5eb194765e
TYK_PMP_PUMPS_HYBRID_META_APIKEY=008d6d1525104ae77240f687bb866974
TYK_PMP_PUMPS_HYBRID_META_CONNECTIONSTRING=localhost:9090
TYK_PMP_PUMPS_HYBRID_META_AGGREGATED=false
TYK_PMP_PUMPS_HYBRID_META_USESSL=false
TYK_PMP_PUMPS_HYBRID_META_SSLINSECURESKIPVERIFY=false
TYK_PMP_PUMPS_HYBRID_META_GROUPID=""
TYK_PMP_PUMPS_HYBRID_META_CALLTIMEOUT=10
TYK_PMP_PUMPS_HYBRID_META_PINGTIMEOUT=60
TYK_PMP_PUMPS_HYBRID_META_RPCPOOLSIZE=5

Prometheus

Prometheus is an open-source monitoring system with a dimensional data model, flexible query language, efficient time series database and modern alerting approach.

Note: when running as a Docker image, use "listen_address": ":9090".

Tyk Pump exposes the following counters:

  • tyk_http_status{code, api}
  • tyk_http_status_per_path{code, api, path, method}
  • tyk_http_status_per_key{code, key}
  • tyk_http_status_per_oauth_client{code, client_id}

And the following Histogram for latencies:

  • tyk_latency{type, api}

Note: base metric families can be removed by configuring the disabled_metrics property.
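
For example, a sketch that disables one of the base metric families listed above:

"prometheus": {
  "type": "prometheus",
  "meta": {
    "listen_address": "localhost:9090",
    "path": "/metrics",
    "disabled_metrics": ["tyk_http_status_per_path"]
  }
}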

Custom Prometheus metrics

From Pump 1.6+ it's possible to add custom prometheus metrics using the custom_metrics configuration. For example:

"prometheus": {
  "type": "prometheus",
	"meta": {
		"listen_address": "localhost:9090",
		"path": "/metrics",
		"custom_metrics":[
      {
        "name":"tyk_custom_http_status_per_api_name",
        "description":"This is a custom counter",
        "metric_type":"counter",
        "labels":["response_code","api_name"]
      }
    ]
	}
},

This will create a metric for HTTP status code and API name.

There are 2 types of metric_type: counter and histogram.

If you are using histogram, it always uses request_time as the observed value. You can also set the configuration option buckets to define the buckets into which observations are counted. buckets is an array of float64 and its default value is [1, 2, 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 30000, 60000].
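
A sketch of a histogram custom metric with explicit buckets (the metric name and bucket boundaries are illustrative):

"custom_metrics": [
  {
    "name": "tyk_custom_latency_per_api_name",
    "description": "Request time histogram per API",
    "metric_type": "histogram",
    "labels": ["api_name"],
    "buckets": [5, 10, 50, 100, 500, 1000]
  }
]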

The labels configuration determines the label name and value extracted from the analytic record. The available values are: ["host","method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "request_time", "ip_address", "alias"]

JSON / Conf File
{
    ...
    "pumps": {
      "prometheus": {
        "type": "prometheus",
        "meta": {
          "listen_address": "localhost:9090",
          "path": "/metrics"
        }
      }
    }
}
Env Variables
TYK_PMP_PUMPS_PROMETHEUS_TYPE=prometheus
TYK_PMP_PUMPS_PROMETHEUS_META_ADDR=localhost:9090
TYK_PMP_PUMPS_PROMETHEUS_META_PATH=/metrics
TYK_PMP_PUMPS_PROMETHEUS_META_CUSTOMMETRICS='[{"name":"tyk_http_requests_total","description":"Total of API requests","metric_type":"counter","labels":["response_code","api_name"]}]'
TYK_PMP_PUMPS_PROMETHEUS_META_DISABLEDMETRICS=[]

DogStatsD

  • address: address of the datadog agent including host & port
  • namespace: prefix for your metrics to datadog
  • async_uds: Enable async UDS over UDP https://github.com/Datadog/datadog-go#unix-domain-sockets-client
  • async_uds_write_timeout_seconds: Integer write timeout in seconds if async_uds: true
  • buffered: Enable buffering of messages
  • buffered_max_messages: Max messages in single datagram if buffered: true. Default 16
  • sample_rate: default 1 which equates to 100% of requests. To sample at 50%, set to 0.5
  • tags: List of tags to be added to the metric. The possible options are listed in the below example

If no tag is specified the fallback behavior is to use the below tags:

  • path
  • method
  • response_code
  • api_version
  • api_name
  • api_id
  • org_id
  • tracked
  • oauth_id

Note that this configuration can generate significant charges due to the unbounded nature of the path tag.

"dogstatsd": {
  "type": "dogstatsd",
  "meta": {
    "address": "localhost:8125",
    "namespace": "pump",
    "async_uds": true,
    "async_uds_write_timeout_seconds": 2,
    "buffered": true,
    "buffered_max_messages": 32,
    "sample_rate": 0.5,
    "tags": [
      "method",
      "response_code",
      "api_version",
      "api_name",
      "api_id",
      "org_id",
      "tracked",
      "path",
      "oauth_id"
    ]
  }
},

On startup, you should see the loaded configs when initializing the dogstatsd pump:

[May 10 15:23:44]  INFO dogstatsd: initializing pump
[May 10 15:23:44]  INFO dogstatsd: namespace: pump.
[May 10 15:23:44]  INFO dogstatsd: sample_rate: 50%
[May 10 15:23:44]  INFO dogstatsd: buffered: true, max_messages: 32
[May 10 15:23:44]  INFO dogstatsd: async_uds: true, write_timeout: 2s
Env Variables
TYK_PMP_PUMPS_DOGSTATSD_TYPE=dogstatsd
TYK_PMP_PUMPS_DOGSTATSD_META_ADDRESS=localhost:8125
TYK_PMP_PUMPS_DOGSTATSD_META_NAMESPACE=pump
TYK_PMP_PUMPS_DOGSTATSD_META_ASYNCUDS=true
TYK_PMP_PUMPS_DOGSTATSD_META_ASYNCUDSWRITETIMEOUT=2
TYK_PMP_PUMPS_DOGSTATSD_META_BUFFERED=true
TYK_PMP_PUMPS_DOGSTATSD_META_BUFFEREDMAXMESSAGES=32
TYK_PMP_PUMPS_DOGSTATSD_META_TAGS=method,response_code,api_version,api_name,api_id,org_id,tracked,path,oauth_id

Splunk

Setting up Splunk with an HTTP Event Collector

  • collector_token: token for your Splunk HTTP Event Collector
  • collector_url: endpoint the Pump will send analytics to. Should look something like:

https://splunk:8088/services/collector/event

  • ssl_insecure_skip_verify: Controls whether the pump client verifies the Splunk server's certificate chain and host name.
  • obfuscate_api_keys: (optional) Controls whether the pump client should hide the API key. In case you still need a substring of the value, check the next option. Type: Boolean. Default value is false.
  • obfuscate_api_keys_length: (optional) Defines the number of characters kept from the end of the API key; obfuscate_api_keys should be set to true. Type: Integer. Default value is 0.
  • fields: (optional) Defines which analytics fields should participate in the Splunk event. Check the available fields in the example below. Type: String Array [] string. Default value is ["method", "path", "response_code", "api_key", "time_stamp", "api_version", "api_name", "api_id", "org_id", "oauth_id", "raw_request", "request_time", "raw_response", "ip_address"]
  • ignore_tag_prefix_list: (optional) Chooses which tags are ignored by the Splunk Pump. Keep in mind that the tag name and value are hyphenated. Type: String Array [] string. Default value is []
  • enable_batch: If this is set to true, the pump sends the analytics records to Splunk in batches. Type: Boolean. Default value is false.
  • max_content_length: Max content length in bytes to be sent in batch requests. It should match the max_content_length configured in Splunk. If the purged analytics records don't reach this size, they are still sent on each purge_loop. Type: Integer. Default value is 838860800 (~800 MB), the same default value as the Splunk config.
  • max_retries: Max number of retries if failed to send requests to Splunk HEC. Default value is 0 (no retries after failure). Connection, network, timeout, temporary, too-many-requests and internal server errors are all considered retryable.
JSON / Conf File
    "splunk": {
      "type": "splunk",
      "meta": {
        "collector_token": "<token>",
        "collector_url": "<url>",
        "ssl_insecure_skip_verify": false,
        "ssl_cert_file": "<cert-path>",
        "ssl_key_file": "<key-path>",
        "ssl_server_name": "<server-name>",
        "obfuscate_api_keys": true,
        "obfuscate_api_keys_length": 10,
        "enable_batch":true,
        "max_retries": 2,
        "fields": [
          "method",
          "host",
          "path",
          "raw_path",
          "content_length",
          "user_agent",
          "response_code",
          "api_key",
          "time_stamp",
          "api_version",
          "api_name",
          "api_id",
          "org_id",
          "oauth_id",
          "raw_request",
          "request_time",
          "raw_response",
          "ip_address",
          "geo",
          "network",
          "latency",
          "tags",
          "alias",
          "track_path"
        ],
        "ignore_tag_prefix_list": [
          "key-",
          "org-",
          "api-",
          "original-path-",
        ]
      }
    },
Env Variables
TYK_PMP_PUMPS_SPLUNK_TYPE=splunk
TYK_PMP_PUMPS_SPLUNK_META_COLLECTORTOKEN="{TOKEN}"
TYK_PMP_PUMPS_SPLUNK_META_COLLECTORURL="{URL}"
TYK_PMP_PUMPS_SPLUNK_META_SSLINSECURESKIPVERIFY=false
TYK_PMP_PUMPS_SPLUNK_META_SSLCERTFILE="{CERT-PATH}"
TYK_PMP_PUMPS_SPLUNK_META_SSLKEYFILE="{KEY-PATH}"
TYK_PMP_PUMPS_SPLUNK_META_SSLSERVERNAME="{SERVER-NAME}"
TYK_PMP_PUMPS_SPLUNK_META_ENABLEBATCH=true
TYK_PMP_PUMPS_SPLUNK_META_BATCHMAXCONTENTLENGTH="{MAX-CONTENT-LENGTH}"

Logzio Config

Logz.io is a cloud observability platform providing Log Management built on ELK, Infrastructure Monitoring based on Grafana, and an ELK-based Cloud SIEM.

The following configuration values are available:

The simplest example configuration just needs the token for sending data to your Logz.io account.

JSON / Conf File
"logzio": {
      "type": "logzio",
      "meta": {
        "token": "<YOUR-LOGZ.IO-TOKEN>"
      }
    }
Env Variables
TYK_PMP_PUMPS_LOGZIO_TYPE=logzio
TYK_PMP_PUMPS_LOGZIO_META_TOKEN="{YOUR-LOGZIO-TOKEN}"

More advanced fields:

  • meta.url - Use if you do not want to use the default Logz.io URL, e.g. when using a proxy. The default is https://listener.logz.io:8071.
  • meta.queue_dir - The directory for the queue.
  • meta.drain_duration - Set the drain duration (flush logs on disk). The default value is 3s.
  • meta.disk_threshold - Set the disk queue threshold; once the threshold is crossed the sender will not enqueue the received logs. The default value is 98 (percentage of disk).
  • meta.check_disk_space - Set the sender to check if it crosses the maximum allowed disk usage. The default value is true.
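
A sketch combining these advanced fields (the token and queue directory are placeholders; value types are assumptions based on the defaults listed above):

"logzio": {
  "type": "logzio",
  "meta": {
    "token": "<YOUR-LOGZ.IO-TOKEN>",
    "url": "https://listener.logz.io:8071",
    "queue_dir": "/tmp/logzio-queue",
    "drain_duration": "3s",
    "disk_threshold": 98,
    "check_disk_space": true
  }
}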

Kafka Config

  • broker: The list of brokers used to discover the partitions available on the kafka cluster. E.g. "localhost:9092"
  • use_ssl: Enables SSL connection.
  • ssl_insecure_skip_verify: Controls whether the pump client verifies the kafka server's certificate chain and host name.
  • client_id: Unique identifier for client connections established with Kafka.
  • topic: The topic that the writer will produce messages to.
  • timeout: The maximum amount of time the pump will wait for a connect or write to complete.
  • compressed: Enables the "github.com/golang/snappy" codec to compress Kafka messages. Defaults to false.
  • meta_data: Can be used to set custom metadata inside the kafka message
  • ssl_cert_file: Can be used to set custom certificate file for authentication with kafka.
  • ssl_key_file: Can be used to set custom key file for authentication with kafka.
JSON / Conf File
{
    ...
    "pumps": {
      "kafka": {
        "type": "kafka",
        "meta": {
          "broker": [
            "localhost:9092"
          ],
          "topic": "tyk-pump",
          "use_ssl": true,
          "ssl_insecure_skip_verify": false,
          "client_id": "tyk-pump",
          "timeout": 60,
          "compressed": true,
          "meta_data": {
            "key": "value"
          }
        }
      }
    }
}
Env Variables
TYK_PMP_PUMPS_KAFKA_TYPE=kafka
TYK_PMP_PUMPS_KAFKA_META_BROKER=localhost:9092
TYK_PMP_PUMPS_KAFKA_META_TOPIC=tyk-pump
TYK_PMP_PUMPS_KAFKA_META_USESSL=true
TYK_PMP_PUMPS_KAFKA_META_SSLINSECURESKIPVERIFY=false
TYK_PMP_PUMPS_KAFKA_META_CLIENTID=tyk-pump
TYK_PMP_PUMPS_KAFKA_META_TIMEOUT=60
TYK_PMP_PUMPS_KAFKA_META_COMPRESSED=true
TYK_PMP_PUMPS_KAFKA_META_METADATA_KEY=value

Influx2 Config

Supported in Tyk Pump v1.5.1+

This pump uses the official Go client library for InfluxDB 2.x.

Configuration options:

  • "organization" - InfluxDB organization name.
  • "bucket" - InfluxDB bucket where the analytic data is going to be stored.
  • "create_missing_bucket" - Set this to true if you want to create the bucket if not exists. Defaults to false.
  • "new_bucket_config" - If "create_missing_bucket"is true, you can configure the new bucket configuration under "new_bucket_config":
    • "description" - Description of the bucket. This is going to be visible in the Influx UI.
    • "retention_rules"- This is a slice of retention rules for this bucket. An example of this would be:
    "retention_rules":[
    {
    "every_seconds":100000,
    "type":"expires"
    }
    ]
    which would mean that the data in the bucket expires every 100000 seconds.
  • "token" - Influx DB Auth token
  • "tags" - Which elements should work as a tag for the time series.
JSON / Conf File
"influx2": {
    "type": "influx2",
    "meta": {
      "organization": "my-org",
      "bucket": "my-bucket",
      "address": "http://localhost:8086",
      "token": "my-super-secret-auth-token",
      "fields": ["request_time"],
      "tags": [
        "api_id",
        "api_key",
        "api_name",
        "api_version",
        "ip_address",
        "method",
        "oauth_id",
        "org_id",
        "path",
        "raw_request",
        "response_code"
      ]
    }
}
Env Variables
TYK_PMP_PUMPS_INFLUX_TYPE=influx2
TYK_PMP_PUMPS_INFLUX_META_ORGANIZATION=myorg
TYK_PMP_PUMPS_INFLUX_META_BUCKET=tyk_analytics
TYK_PMP_PUMPS_INFLUX_META_ADDR=http://localhost:8086
TYK_PMP_PUMPS_INFLUX_META_TOKEN=ZzSnH2qRpEJd3ph3A6lPaCcP8BkJfaxeiadFG5DBMO8YIAn3mMzGunMqOQE2uPkAkewXE5Q6Gsye3vQTWmeTiQ==
TYK_PMP_PUMPS_INFLUX_META_CREATEMISSINGBUCKET=true
TYK_PMP_PUMPS_INFLUX_META_NEWBUCKETCONFIG_DESCRIPTION="Tyk gateway requests"
TYK_PMP_PUMPS_INFLUX_META_NEWBUCKETCONFIG_RETENTIONRULES_EVERYSECONDS=3600
TYK_PMP_PUMPS_INFLUX_META_FIELDS=request_time
TYK_PMP_PUMPS_INFLUX_META_TAGS=path,response_code,api_key,api_version,api_name,api_id,raw_request,ip_address,org_id,oauth_id

Syslog

Supported in Tyk Pump v1.0.0+

"transport" - Possible values are udp, tcp, tls in string form

"network_addr" - Host & Port combination of your syslog daemon ie: "localhost:5140"

"log_level" - The severity level, an integer from 0-7, based off the Standard: Syslog Severity Levels

"tag" - Prefix tag

When working with FluentD, you should provide a FluentD Parser based on the OS you are using so that FluentD can correctly read the logs

"syslog": {
  "name": "syslog",
  "meta": {
    "transport": "udp",
    "network_addr": "localhost:5140",
    "log_level": 6,
    "tag": "syslog-pump"
  }

Stdout

log_field_name - Root name of the JSON object the analytics record is nested in

format - Format of the analytics logs. Default is text if json is not explicitly specified. When JSON logging is used all pump logs to stdout will be JSON.

JSON / Conf File
"stdout": {
   "type": "stdout",
    "meta": {
      "log_field_name": "tyk-analytics-record",
      "format": "json"
    }
  }
Env Variables
TYK_PMP_PUMPS_STDOUT_TYPE=stdout
TYK_PMP_PUMPS_STDOUT_META_LOGFIELDNAME=tyk-analytics-record
TYK_PMP_PUMPS_STDOUT_META_FORMAT=json

SQL Pump

Supported in Tyk Pump v1.5.0+

  • type - The supported and tested types are sqlite and postgres.
  • connection_string - Specifies the connection string to the database. For example, for sqlite it would usually work by specifying the path/name of the database, and for postgres by specifying the host, port, user, password and dbname.
  • log_level - Specifies the SQL log verbosity. The possible values are: info, error and warning. By default, the value is silent, which means that it won't log any SQL query.
  • table_sharding - Specifies if all the analytics records are going to be stored in one table or in multiple tables (one per day). By default, false. If table_sharding is false, all the records are stored in the tyk_analytics table. Instead, if it's true, the records of each day are stored in a tyk_analytics_YYYYMMDD table, where YYYYMMDD changes depending on the date.
  • batch_size - Specifies the number of records written in each batch. Type int. By default, it writes 1000 records max per batch.

JSON / Conf File
    "sql": {
        "name": "sql",
        "meta": {
            "type": "postgres",
            "connection_string": "host=localhost port=5432 user=admin dbname=postgres_test password=test",
            "table_sharding": false
        }
    }
Env Variables
TYK_PMP_PUMPS_SQL_NAME=sql
TYK_PMP_PUMPS_SQL_META_TYPE=postgres
TYK_PMP_PUMPS_SQL_META_CONNECTIONSTRING="host=sql_host port=sql_port user=sql_usr dbname=dbname password=sql_pw"
TYK_PMP_PUMPS_SQL_META_TABLESHARDING=false

SQL Aggregate Pump

Supported in Tyk Pump v1.5.0+

  • type - The supported and tested types are sqlite and postgres.
  • connection_string - Specifies the connection string to the database. For example, for sqlite it would usually work by specifying the path/name of the database, and for postgres by specifying the host, port, user, password and dbname.
  • log_level - Specifies the SQL log verbosity. The possible values are: info, error and warning. By default, the value is silent, which means that it won't log any SQL query.
  • track_all_paths - Specifies if it should store aggregated data for all the endpoints. By default, false, which means that it only stores aggregated data for tracked endpoints.
  • ignore_tag_prefix_list - Specifies prefixes of tags that should be ignored.
  • table_sharding - Specifies if all the analytics records are going to be stored in one table or in multiple tables (one per day). By default, false. If table_sharding is false, all the records are stored in the tyk_aggregated table. Instead, if it's true, the records of each day are stored in a tyk_aggregated_YYYYMMDD table, where YYYYMMDD changes depending on the date.
  • batch_size - Specifies the number of records written in each batch. Type int. By default, it writes 1000 records max per batch.

JSON / Conf File
    "sql_aggregate": {
        "name": "sql_aggregate",
        "meta": {
            "type": "postgres",
            "connection_string": "host=localhost port=5432 user=admin dbname=postgres_test password=test",
            "table_sharding": false
        }
    }
Env Variables
TYK_PMP_PUMPS_SQLAGGREGATE_TYPE=sql_aggregate
TYK_PMP_PUMPS_SQLAGGREGATE_META_TYPE=postgres
TYK_PMP_PUMPS_SQLAGGREGATE_META_CONNECTIONSTRING=host=sql_host port=sql_port user=sql_usr dbname=dbname password=sql_pw
TYK_PMP_PUMPS_SQLAGGREGATE_META_TABLESHARDING=true

Timestream Config

Authentication & Prerequisite

We must authenticate ourselves by providing credentials to AWS. This pump uses the official AWS GO SDK, so instructions on how to authenticate can be found on their documentation here.

Config Fields

write_rate_limit - Set to true in order to save any of the RateLimit measures, which are extracted from the headers of the raw response.

read_geo_from_request - If set to true, we will try to read geo information from the headers of the raw request.

write_zero_values - If set to false, numerical values with value zero won't be recorded.

When you initialize a Timestream Pump, the SDK uses its default credential chain to find AWS credentials. This default credential chain looks for credentials in the following order:

  • Environment variables.
    • Static Credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN)
    • Web Identity Token (AWS_WEB_IDENTITY_TOKEN_FILE)
  • Shared configuration files.
    • SDK defaults to credentials file under .aws folder that is placed in the home folder on your computer.
  • If your application uses an ECS task definition or RunTask API operation, IAM role for tasks.
  • If your application is running on an Amazon EC2 instance, IAM role for Amazon EC2.

If no credentials are provided, Timestream Pump won't be able to connect.

JSON / Conf File
    "timestream": {
      "type": "timestream",
      "meta": {
        "aws_region": "us-east-1",
        "timestream_table_name": "tyk-pump-table",
        "timestream_database_name": "tyk-pump",
        "write_rate_limit": true,
        "read_geo_from_request": true,
        "write_zero_values": false,
        "dimensions": [
          "Method",
          "Host",
          "Path",
          "RawPath",
          "APIKey",
          "APIVersion",
          "APIName",
          "APIID",
          "OrgID",
          "OauthID"
        ],
        "measures": [
          "ContentLength",
          "ResponseCode",
          "RequestTime",
          "NetworkStats.OpenConnections",
          "NetworkStats.ClosedConnection",
          "NetworkStats.BytesIn",
          "NetworkStats.BytesOut",
          "Latency.Total",
          "Latency.Upstream",
          "RateLimit.Limit",
          "Ratelimit.Remaining",
          "Ratelimit.Reset",
          "UserAgent",
          "RawRequest",
          "RawResponse",
          "IPAddress",
          "GeoData.Country.ISOCode",
          "GeoData.City.Names",
          "GeoData.Location.TimeZone",
          "GeoData.City.GeoNameID",
          "GeoData.Location.Latitude",
          "GeoData.Location.Longitude"
        ],
        "field_name_mappings":{
          "Path": "path",
          "APIKey": "api_key",
          "APIVersion": "module",
          "APIName": "email",
          "Method": "method",
          "APIID": "account",
          "measure_name": "request_metrics",
          "time": "time",
          "ResponseCode": "response_code",
          "GeoData.Country.ISOCode": "country_code",
          "Latency.Total": "latency_total",
          "RawResponseSize": "sesponse_size",
          "RequestTime": "request_time",
          "GeoData.City.Names": "city",
          "Latency.Upstream": "latency_upstream",
          "UserAgent": "user_agent",
          "IPAddress": "ip_address",
          "RateLimit.Limit": "quota_max",
          "Ratelimit.Remaining": "quota_remaining",
          "Ratelimit.Reset": "quota_renewal_rate"
        }
      }
    },
Env Variables
TYK_PMP_PUMPS_TIMESTREAM_TYPE=timestream
TYK_PMP_PUMPS_TIMESTREAM_META_AWSREGION=us-east-1
TYK_PMP_PUMPS_TIMESTREAM_META_TIMESTREAMTABLENAME=tyk-pump-table
TYK_PMP_PUMPS_TIMESTREAM_META_TIMESTREAMDATABASENAME=tyk-pump
TYK_PMP_PUMPS_TIMESTREAM_META_WRITERATELIMIT=true
TYK_PMP_PUMPS_TIMESTREAM_META_READGEOFROMREQUEST=true
TYK_PMP_PUMPS_TIMESTREAM_META_WRITEZEROVALUES=true
TYK_PMP_PUMPS_TIMESTREAM_META_DIMENSIONS=Method,Host,Path,APIKey
TYK_PMP_PUMPS_TIMESTREAM_META_MEASURES=ResponseCode,RequestTime,Latency.Total,RateLimit.Limit,RateLimit.Remaining,RateLimit.Reset,UserAgent,IPAddress,GeoData.Country.ISOCode,GeoData.City.Names
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_PATH=path
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_APIKEY=api_key
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_METHOD=method
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_HOST=host
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_RESPONSECODE=response_code
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_REQUESTTIME=request_time
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_LATENCY_TOTAL=latency_total
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_GEODATA_COUNTRY_ISOCODE=country_code
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_GEODATA_CITY_NAMES=city
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_USERAGENT=user_agent
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_IPADDRESS=ip_address
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_RATELIMIT_LIMIT=quota_max
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_RATELIMIT_REMAINING=quota_remaining
TYK_PMP_PUMPS_TIMESTREAM_META_FIELDNAMEMAPPINGS_RATELIMIT_RESET=quota_renewal_rate

CSV Config

Enable this Pump to have Tyk Pump create or modify a CSV file to track API Analytics.

JSON / Conf File
    "csv": {
      "type": "csv",
      "meta": {
        "csv_dir": "./"
      }
    },
Env Variables
TYK_PMP_PUMPS_CSV_TYPE=csv
TYK_PMP_PUMPS_CSV_META_CSVDIR=./

SQS Config

Authentication & Prerequisite

We must authenticate ourselves by providing credentials to AWS. This pump uses the official AWS GO SDK, so instructions on how to authenticate can be found on their documentation here.

Config Fields

aws_queue_name - Specifies the name of the AWS Simple Queue Service (SQS) queue where messages will be sent

aws_message_group_id - Specifies the message group ID used when sending messages to a FIFO queue

aws_sqs_batch_limit - Sets the maximum number of messages to include in a single batch when sending messages to the SQS queue

aws_message_id_deduplication_enabled - Enables or disables the deduplication of messages based on unique message IDs to prevent unintended duplicates in the queue

aws_delay_seconds - Configures the delay (in seconds) before messages sent to the SQS queue become available for processing.

When you initialize an SQS Pump, the SDK uses its default credential chain to find AWS credentials. This default credential chain looks for credentials in the following order:

  • Environment variables.
    • Static Credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN)
    • Web Identity Token (AWS_WEB_IDENTITY_TOKEN_FILE)
    • Pump Environment Variables (TYK_PMP_PUMPS_SQS_AWSKEY, TYK_PMP_PUMPS_SQS_AWSSECRET, TYK_PMP_PUMPS_SQS_AWSTOKEN)
  • Shared configuration files.
    • SDK defaults to credentials file under .aws folder that is placed in the home folder on your computer.
  • If your application uses an ECS task definition or RunTask API operation, IAM role for tasks.
  • If your application is running on an Amazon EC2 instance, IAM role for Amazon EC2.

If no credentials are provided, SQS Pump won't be able to connect.

JSON / Conf File
    "sqs": {
      "type": "sqs",
      "meta": {
        "log_field_name": "tyk-analytics-record",
        "format": "json",
        "aws_queue_name": "access-logs-queue.fifo",
        "aws_region": "us-east-1",
        "aws_key": "key",
        "aws_secret": "secret",
        "aws_token": "token",
        "aws_endpoint": "http://aws-endpoint:4566",
        "aws_message_group_id": "message_group_id",
        "aws_sqs_batch_limit": 10,
        "aws_message_id_deduplication_enabled": true,
        "aws_delay_seconds": 0
      }
    },
Env Variables
# SQS Pump Configuration
TYK_PMP_PUMPS_SQS_TYPE=sqs
TYK_PMP_PUMPS_SQS_META_LOGFIELDNAME=tyk-analytics-record
TYK_PMP_PUMPS_SQS_META_FORMAT=json
TYK_PMP_PUMPS_SQS_META_AWSQUEUENAME=access-logs-queue.fifo
TYK_PMP_PUMPS_SQS_META_AWSREGION=us-east-1
TYK_PMP_PUMPS_SQS_META_AWSKEY=key
TYK_PMP_PUMPS_SQS_META_AWSSECRET=secret
TYK_PMP_PUMPS_SQS_META_AWSENDPOINT=http://aws-endpoint:4566
TYK_PMP_PUMPS_SQS_META_AWSMESSAGEGROUPID=message_group_id
TYK_PMP_PUMPS_SQS_META_AWSSQSBATCHLIMIT=10
TYK_PMP_PUMPS_SQS_META_AWSMESSAGEIDDEDUPLICATIONENABLED=true
TYK_PMP_PUMPS_SQS_META_AWSDELAYSECONDS=0

Kinesis Config

Authentication & Prerequisite

We must authenticate ourselves by providing credentials to AWS. This pump uses the official AWS GO SDK, so instructions on how to authenticate can be found on their documentation here.

Config Fields

stream_name - The name of your Kinesis stream in the specified AWS region

region - The AWS region your Kinesis stream is located - i.e. eu-west-2

batch_size - Optional. The maximum size of the records in a batch is 5MiB. If your records are larger, setting this batch size parameter can guarantee you don't have failed deliveries due to too large a batch. Default size if unset is 100.

JSON / Conf File
    "kinesis":{
      "type": "kinesis",
      "meta": {
        "stream_name": "my-stream",
        "region": "eu-west-2",
        "batch_size": 100
      }
    },
Env Variables
# Kinesis Pump Configuration
TYK_PMP_PUMPS_KINESIS_TYPE=kinesis
TYK_PMP_PUMPS_KINESIS_META_STREAMNAME=my-stream
TYK_PMP_PUMPS_KINESIS_META_REGION=eu-west-2
TYK_PMP_PUMPS_KINESIS_META_BATCHSIZE=100

Base Pump Configurations

The following configurations can be added to any Pump. Keep reading for an example.

Filter Records

You may add a config field called filters to each pump; its structure is the following:

"filters":{
  "api_ids":[],
  "org_ids":[],
  "response_codes":[],
  "skip_api_ids":[],
  "skip_org_ids":[],
  "skip_response_codes":[]
}

The fields api_ids, org_ids and response_codes work as an allow list (APIs, orgs and response codes whose analytics records we want to send), and the fields skip_api_ids, skip_org_ids and skip_response_codes work as a block list.

Block list configurations always take priority over allow list configurations.

Here we see how we can take a CSV Pump, and add a filters section to it:

JSON / Conf file Example
"csv": {
 "type": "csv",
 "filters": {
   "api_ids": ["123","789"]
 },
 "meta": {
   "csv_dir": "./bar"
 }
}
Env variables
TYK_PMP_PUMPS_CSV_TYPE=csv
TYK_PMP_PUMPS_CSV_META_CSVDIR=./bar
TYK_PMP_PUMPS_CSV_FILTERS_APIIDS=123,789

Timeouts

You can configure a different timeout for each pump with the configuration option timeout. Its default value is 0 seconds, which means that the pump will wait for the writing operation forever; the exception is Mongo pumps, where the default value is 10 seconds. If you want to disable the timeout, set the value to 0, but take into account that the pump will then wait for the writing operation forever, which could block the pump execution.

"mongo": {
  "type": "mongo",
  "timeout": 5,
  "meta": {
    "collection_name": "tyk_analytics",
    "mongo_url": "mongodb://username:password@{hostname:port}/{db_name}"
  }
}
Env variables
TYK_PMP_PUMPS_MONGO_TYPE=mongo
TYK_PMP_PUMPS_MONGO_TIMEOUT=5

If a pump doesn't have a configured timeout, and it takes longer to write than the purge loop interval configured in the purge_delay option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try to set a timeout for this pump..

If you have a configured timeout, but the write still takes longer than the purge loop interval configured in the purge_delay option, you will see the following warning message: Pump PMP_NAME is taking more time than the value configured of purge_delay. You should try lowering the timeout configured for this pump..

Omit Detailed Recording

omit_detailed_recording - Setting this to true on a Pump will avoid writing raw_request and raw_response fields for each request in pumps. Defaults to false.

Max Record Size

max_record_size defines the maximum size (in bytes) for the Raw Request and Raw Response logs. It defaults to 0; if not set, tyk-pump will not trim any data and will store the full information. This can also be set at a pump level. For example:

"csv": {
  "type": "csv",
  "max_record_size":1000,
  "meta": {
    "csv_dir": "./"
  }
}

Driver Type

The driver setting defines the driver type to use for Mongo Pumps. It can be one of the following values:

  • mongo-go (default from v1.9.0): Uses the official MongoDB driver. This driver supports Mongo versions greater than or equal to v4. You can get more information about this driver here.
  • mgo: Uses the mgo driver. This driver is deprecated and supports Mongo versions lower than or equal to v4. You can get more information about this driver here.
"mongo": {
  "type": "mongo",
  "meta": {
    "mongo_url": "mongodb://tyk-mongo:27017/tyk_analytics",
    "collection_name": "tyk_analytics",
    "driver": "mongo-go"
  }
}

Direct Connection

MongoDirectConnection informs whether to establish connections only with the specified seed servers or to obtain information for the whole cluster and establish connections with further servers too. If true, the client will only connect to the host provided in the ConnectionString and won't attempt to discover other hosts in the cluster. Useful when network restrictions prevent discovery, such as with SSH tunneling. Default is false. You can get more info from the official MongoDB driver docs.

"mongo": {
  "type": "mongo",
  "meta": {
    "mongo_url": "mongodb://tyk-mongo:27017/tyk_analytics",
    "collection_name": "tyk_analytics",
    "driver": "mongo-go",
    "mongo_direct_connection": true
  }
}

Ignore Fields

ignore_fields defines a list of analytics fields that will be ignored when writing to the pump. This can be used to avoid writing sensitive information to the Database, or data that you don't really need to have. Fields must be written using JSON tags. For example:

"csv": {
 "type": "csv",
 "ignore_fields":["api_id","api_version"],
 "meta": {
   "csv_dir": "./bar"
 }
}

Decode Raw Request & Raw Response

raw_request_decoded and raw_response_decoded decode the raw request and raw response fields from base64 before writing to the pump. This is useful if you want to search for specific values in the raw request/response. Both are disabled by default. This setting is not available for Mongo and SQL pumps, since the Dashboard decodes the raw request/response itself.

"csv": {
  "type": "csv",
  "raw_request_decoded": true,
  "raw_response_decoded": true,
  "meta": {
    "csv_dir": "./"
  }
}

Compiling & Testing

  1. Download dependent packages:
go get -t -d -v ./...
  2. Compile:
go build -v ./...
  3. Test:
go test -v ./...

Demo Mode

You can run Tyk Pump in demo mode, which will generate fake analytics data and send it to the configured pumps. This is useful for testing and development. To enable demo mode, use the following flags:

  • --demo=<ORG_ID> - Enables demo mode and sets the organization ID to use for the demo data. This is required to enable Demo Mode.
  • --demo-api=<API_ID> - Configure the value to be recorded as the API_ID in the demo transactions. If this option is not set, the Pump Demo mode will use a random API_ID. Note that the same API_ID will be used for all transaction logs.
  • --demo-days=<DAYS> - Sets the number of days of demo data to generate. Defaults to 30.
  • --demo-records-per-hour=<RECORDS_PER_HOUR> - Sets the number of records to generate per hour. The default value is a random number between 300 and 500.
  • --demo-track-path - Enables tracking of the request path in the demo data. Defaults to false (disabled). Note that setting track_all_paths to true in your Pump configuration will override this option.
  • --demo-future-data - By default, the demo data is generated for the past X days (configured in demo-days flag). This option will generate data for the next X days. Defaults to false (disabled).
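
For example, a hypothetical invocation that generates one week of demo data for a given organization ID (the ID value is a placeholder):

./tyk-pump --demo="5e9d9544a1dcd60001d0ed20" --demo-days=7 --demo-records-per-hour=400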
