Tickerplant
What is it?
In a kdb+ tick system, the tickerplant is like the traffic cop standing in the middle of the intersection, directing traffic, knowing at all times which cars have to go where and when to send them there. To continue with the analogy, in a tick system the traffic cop also remembers exactly which cars they sent and where they sent them to, so they could recall the entire day's traffic if needed.
Basics
The Tickerplant performs the following functions in a tick system:
Receive data from upstream and write each update to a logfile
Manage subscriptions made by downstream subscribers/processes, including getting new subscribers 'up to date'
Publish updates to subscribers (either immediately or at regular intervals)
Initiate the end of day activity for all downstream subscribers
Starting the Tickerplant
When starting the tickerplant, the syntax is as follows:
q tick.q SRC DST -p PORT -t TIMER
tick.q - The main script used by the Tickerplant. A detailed breakdown is available here.
SRC - The tickerplant must know the schemas of the tables that it will be processing. This input should point to a schema file with empty table definitions of all tables that will be used by the tickerplant. This input should not include the .q at the end of the filename. An example of a schema file is here. If no input is provided, the schema file will default to sym.q.
DST - The Tickerplant saves down every single update received into a logfile. This input is the directory in which that logfile will be created. If no input is provided, no logfile will be created or used.
PORT - The port on which the process is to run. If no input is provided, the port will default to 5010.
TIMER - If this is configured, the process will start with the timer active, using this value as the interval in milliseconds. This determines whether the process starts in batch or real-time mode.
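For example, the following command (the log directory is illustrative) starts a tickerplant using sym.q as the schema file, logging to /logs/tick, listening on port 5010 and publishing in one-second batches:
q tick.q sym /logs/tick -p 5010 -t 1000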
Batch vs realtime
The two modes in which the Tickerplant can run are batch mode and real-time mode.
In real-time mode, the Tickerplant publishes updates to subscribers as soon as it receives them i.e. the updates are published in 'real-time.'
In batch mode, the Tickerplant 'holds on' to the updates and instead publishes them at predetermined intervals e.g. every second, every five seconds.
[Diagram: realtime mode behaviour with trade table updates at T, T+1 and T+2]
[Diagram: batch mode behaviour with trade table updates at T, T+1 and T+2; the batch timer runs at T+2]
In tick.q, the mode is determined by whether or not the timer is set when starting the process (using the -t command line option). If the timer is set, the Tickerplant runs in batch mode with the timer value as the publish interval; if not, it runs in real-time mode.
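As a rough illustration of the difference, the sketch below contrasts what .u.upd and the timer do in each mode. This is a simplification, not the exact tick.q code: the timestamp validation, end-of-day check and sym attribute handling are omitted.
/ real-time mode: each incoming update is published immediately, then written to the log
.u.upd:{[t;x].u.pub[t;x];if[.u.l;.u.l enlist(`upd;t;x)]}
/ batch mode: each update is buffered in a local table and written to the log...
.u.upd:{[t;x]t insert x;if[.u.l;.u.l enlist(`upd;t;x)]}
/ ...and the timer publishes the buffered tables and empties them on each interval
.z.ts:{.u.pub'[.u.t;value each .u.t];@[`.;.u.t;0#]}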
Why would you choose one over the other? Real-time Tickerplants are chosen for systems in which low latency is a priority, i.e. the downstream subscribers absolutely must get the latest data as soon as possible. Not all downstream subscribers have this requirement, however; for example, a subscriber that runs a report every 5 minutes does not need to receive updates many times per second.
The other consideration is performance/throughput. The Tickerplant is actually at its most efficient in batch mode, which makes sense when you think about it: kdb+/q is most adept when working with large datasets. Batch mode should reduce CPU and memory usage on the Tickerplant, and CPU usage should be reduced on the downstream subscribers too. This is shown in the whitepaper Kdb+tick profiling for throughput optimization (Ian Kilpatrick):
"...when publishing on the timer, the tickerplant upd function still takes roughly the same time as in zero-latency mode, but we are only publishing data 10 times a second which reduces the overall load: the TP CPU usage has decreased from 31% to 22%. The RDB CPU usage decreases from 12% to 0.1% as it is only doing 10 bulk updates per second instead of 10,000 single updates per second."
It should be noted that the Tickerplant does not 'batch' the writes to the logfile in the vanilla tick setup - each update is written to the logfile as it is received (this is something that could be changed for a performance improvement at the expense of recoverability - see the whitepaper above for more detail).
Upstream sources publishing data to the tickerplant
'Upstream sources' usually (although not always) refers to Feedhandlers. The Feedhandlers will convert updates from external data feeds into a kdb-friendly format and publish them to the Tickerplant.
All updates published to the Tickerplant call the '.u.upd' function. This is similar to the 'upd' function that other q processes use to receive updates and takes the same inputs: 't' for the table name and 'x' for the data for that table.
The default definition of .u.upd depends on whether the Tickerplant is in batch mode or real-time mode. In both modes some validation is done on the data: if the first column is not of type 'timespan', a timespan column of the current time is prepended. In batch mode, the data just received is inserted into the target table (to be processed later as part of a batch). In real-time mode, the data is immediately published from within .u.upd by calling .u.pub. In both modes, the Tickerplant logfile is updated as part of each update.
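For illustration, a feedhandler-style publish to the tickerplant might look like the following. The port and the trade columns here are assumptions based on the standard sym.q schema; the time column is omitted so that .u.upd prepends the current timespan itself.
h:hopen 5010  / connect to the tickerplant (assumed port)
neg[h](`.u.upd;`trade;(`AAPL;150.25;100i))  / single row, published asynchronously
neg[h](`.u.upd;`trade;(`AAPL`MSFT;150.25 310.1;100 200i))  / a small batch as column lists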
Downstream subscribers subscribing to the tickerplant
'Downstream subscribers' usually refers to the q processes subscribed to the tickerplant. For a process to subscribe to the Tickerplant, it must connect to the Tickerplant and call the '.u.sub' function with two inputs: the table name to subscribe to and a list of syms to filter on for that table.
Example subscription calls:
h(`.u.sub;`trade;`AAPL)
h".u.sub[`quote;`]"
The tickerplant will then use .u.add to add a subscription for that subscriber with the relevant subscription information.
.u.add updates the global variable .u.w, which is a dictionary of table names mapped to the subscriber handles and sym filters for those tables. .u.w might look like this after the example subscriptions above:
quote| 5i `
trade| 5i `AAPL
As part of a new subscription, the Tickerplant returns to the subscriber the table name and an empty schema for that table. Note how this is used in r.q.
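As a minimal sketch of how a subscriber might use this (the port is an assumption; r.q's .u.rep does the equivalent for all tables, plus a logfile replay):
h:hopen 5010  / connect to the tickerplant (assumed port)
r:h(`.u.sub;`trade;`)  / r is (`trade; empty trade table)
r[0] set r 1  / create an empty local trade table with the correct schema
upd:insert  / published updates will now be appended straight into it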
Tickerplant publishing to downstream subscribers
When the tickerplant publishes data to downstream subscribers it uses the .u.pub function: for each active subscriber to that table in .u.w, it filters the dataset for the subscriber's syms and sends the result as an asynchronous message to the subscriber's handle, calling the 'upd' function on the target process.
The message itself is transferred between the processes as kdb+ IPC over TCP/IP.
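A simplified sketch of that publish step is shown below. The real logic lives in .u.pub and .u.sel in u.q, so this is illustrative only, assuming each entry of .u.w is a (handle;syms) pair.
.u.pub:{[t;x]
  {[t;x;w]  / w is one (handle;syms) pair for this table
    x:$[(w 1)~`;x;select from x where sym in w 1];  / apply the subscriber's sym filter
    if[count x;(neg first w)(`upd;t;x)]  / async call of upd on the subscriber
  }[t;x] each .u.w t}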
End of day
The tickerplant checks whether midnight has passed and a new day has begun on each upd call and on each timer run, via the .u.ts function. In realtime mode the timer runs once per second.
As part of the tickerplant end of day, the following actions are taken (a simplified sketch in q follows this list):
Tell all downstream subscribers to run .u.end with the date that has just ended
Increase .u.d by one day
Close the handle to (what is now) yesterday's logfile (if present)
Open a new handle to today's logfile
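The sketch below is not the exact tick.q/u.q code (for example, the real version re-initialises the logfile via .u.ld and handles the no-logfile case), and the log directory path is illustrative:
.u.endofday:{
  (neg distinct raze .u.w[;;0])@\:(`.u.end;.u.d);  / async .u.end on every subscriber handle
  .u.d+:1;  / roll the date forward
  if[.u.l;hclose .u.l];  / close yesterday's logfile
  .u.L:`$":/logs/tick/sym",string .u.d;  / illustrative logfile path for the new day
  if[()~key .u.L;.u.L set ()];  / initialise an empty, replayable log if it does not exist
  .u.l:hopen .u.L}  / open a handle to today's logfile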
Usage profile
The tickerplant is not (or should not be) resource intensive. While data is being received and published it will utilise the CPU, but not heavily. As the tickerplant stores no (or very little) data in memory, memory usage is very low. If the tickerplant is consuming more and more memory throughout the day, there is likely a slow subscriber/consumer.
Diagrams taken from the excellent AquaQ Architecture Workshop
Slow subscribers and chained tickerplants
As described above, the tickerplant publishes data to downstream subscribers via TCP/IP. This is an asynchronous publish so the tickerplant does not have to wait for the subscriber to receive or process the update, it just publishes it and moves on. However, a downstream subscriber that takes a long time to process updates can still negatively affect the tickerplant.
This scenario occurs when the downstream subscriber is slow to process updates. For example, a subscriber with a complex upd function that performs many actions may take longer on average to process each update than the average time between updates (in batch mode, this would happen if processing an average batch took longer than the batch publish interval).
If this happens, the tickerplant will be publishing updates to the downstream subscriber even though it is still processing a previous update. Those unprocessed updates will build up in a queue. At first, the updates will build up in the downstream subscriber's TCP receive buffer. This is something specific to the TCP/IP protocol, not kdb. The following quote from Len Holgate on StackOverflow explains it well:
With TCP there's a TCP Window which is used for flow control. TCP only allows a certain amount of data to remain unacknowledged at a time. If a server is producing data faster than a client is consuming data then the amount of data that is unacknowledged will increase until the TCP window is 'full' at this point the sending TCP stack will wait and will not send any more data until the client acknowledges some of the data that is pending.
Once this scenario occurs and no more data is being sent by the TCP stack, the publishing process, the tickerplant, has no option but to start building the queue itself, in memory. This is a problem. The queue is growing, which means that the memory usage of the tickerplant is growing. With no intervention, the tickerplant memory usage could become critical and either the process may abort or the OS kernel may kill it. The tickerplant failing is a disastrous scenario for most kdb systems.
What is the solution to this problem? First, the system should have some kind of monitoring to detect this scenario. Once detected, either a developer can be alerted to diagnose the issue and rectify it, or an automatic failsafe can take place to close the handle between the TP and the slow subscriber (in both cases .z.W would tell us which process had the longest/biggest queue). It may be that the tickerplant is publishing a small number of updates frequently in realtime mode, but the process in question would be more suited to receiving batched updates. Rather than touching the tick code, which adds a level of complexity we want to avoid, a chained tickerplant could be added.
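For example, a monitoring check run on the tickerplant could look at how many bytes are queued per handle; the threshold and the hclose failsafe below are purely illustrative:
queued:sum each .z.W  / total bytes queued per open handle
slow:where queued>100000000  / handles with more than ~100MB backed up (illustrative threshold)
hclose each slow  / drastic failsafe: cut off the slow subscribers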
A chained tickerplant is essentially a second tickerplant subscribed to the main tickerplant. This tickerplant could be in batch mode and its downstream processes would still get the same updates as the other processes in the system, just batched rather than in real time. There are other reasons for having chained tickerplants - keeping non-essential subscribers separated from the main tickerplant, publishing data across servers, etc. Generally a chained tickerplant will not have a logfile of its own; if a replay of the data is required, it can be served from the main tickerplant.
The tickerplant logfile
If a directory is provided as one of the startup parameters (DST), the tickerplant will log every update to a file in that directory, called the logfile (sometimes called the journal). Upon startup the tickerplant creates the logfile with the name format 'directory/schemafileDATE', e.g. logdir/sym2022.02.02.
The logfile is initialised by opening a handle to the file and updates are written to the logfile as messages on that handle.
If the tickerplant is restarted mid-day it will check if a logfile exists and if so it will replay the logfile and re-process all of the updates in it.
If a process subscribes to a new topic, it needs to receive all of the updates so far today for that topic. One way to do this would be to have the tickerplant replay all of the day's updates on that topic to the new subscriber, but this adds unneeded work and complexity to the tickerplant. Instead, the process can subscribe to the new topic and ask the tickerplant how many updates it has received that day. It can then immediately replay the logfile up to that point, following which it can process the latest updates from the tickerplant. The RDB uses the .u.rep function for this (see r.q).
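For illustration, replaying a logfile uses the -11! internal function; the path below is an example, and upd must be defined before the replay so that each logged (`upd;table;data) message can be applied:
upd:insert  / how each replayed message should be applied
-11!`:logdir/sym2022.02.02  / replay every message in the logfile
-11!(500;`:logdir/sym2022.02.02)  / replay only the first 500 messages (e.g. up to .u.i)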
A new logfile is created at end of day.
Important variables in the tickerplant
.z.W | Dictionary of open handles mapped to the sizes of the messages queued to be sent on each handle
.u.t | List of the table names handled by the tickerplant
.u.d | Today's date (rolled forward at end of day)
.u.i | Count of replayable updates in the logfile
.u.j | Count of updates processed by the tickerplant, including any not yet published to subscribers (used when the TP is in batch mode)
.u.l | Handle to the logfile
.u.L | Logfile location
.u.w | Dictionary of table names mapped to the handle/subscribed-sym pairs of the subscribers to each table
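These can be inspected over IPC from another q process, which can be handy when monitoring or debugging; the port is an assumption:
h:hopen 5010  / connect to the tickerplant
h".u.d"  / current date in the tickerplant
h"count each .u.w"  / number of subscribers per table
h"sum each .z.W"  / bytes queued per subscriber handle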