This section explains the goals and non-goals of the system design. These principles guide the implementation and motivate the use of existing technology. A clear understanding of what the system is supposed to do, and what it is not, helps establish a strict focus without falling victim to feature creep.
We designed VAST with the following principles in mind:
High-throughput import: a typical deployment location of VAST is near the uplink of an enterprise network, where network monitoring systems process packet data and generate logs at a rate of 10-100k events per second. Moreover, we consider raw packets telemetry as well, meaning that VAST can operate as a bulk packet recorder. However, a 1 Gbps network link produces up to ~1.5M packets per second. To cope with such a high volume, the ingestion path must receive special attention.
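The ~1.5M packets per second figure follows from simple link arithmetic: a saturated 1 Gbps link carrying minimum-size Ethernet frames produces the worst-case packet rate, since every frame also occupies 20 bytes of on-wire overhead (preamble plus inter-frame gap):

```python
# Back-of-the-envelope worst-case packet rate for a saturated 1 Gbps link.
LINK_BPS = 1_000_000_000   # link capacity: 1 Gbps
MIN_FRAME = 64             # minimum Ethernet frame size in bytes
OVERHEAD = 8 + 12          # on-wire overhead: preamble + inter-frame gap

# Each minimum-size frame occupies (64 + 20) * 8 = 672 bits on the wire.
pps = LINK_BPS / ((MIN_FRAME + OVERHEAD) * 8)
print(f"{pps:,.0f} packets/second")  # 1,488,095 packets/second
```

Larger average frame sizes lower the rate proportionally, but an ingestion path sized for the worst case must sustain roughly this figure.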
Low-latency export: to support real-time investigations and automated correlations across space and time, quick access to the relevant subset of the data is necessary to work with the system effectively. A result should build up incrementally and asynchronously, because inspecting a "taste" of the full result often suffices to triage the relevance of a particular query.
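The idea of incrementally building results can be sketched as an asynchronous generator that surfaces each partition's hits as soon as they are available, instead of waiting for the full scan to complete. The `export` function and the partition layout below are hypothetical and only illustrate the pattern:

```python
# Hypothetical sketch: stream partial query results partition by partition,
# so a client can triage relevance before the whole scan finishes.
import asyncio

async def export(query, partitions):
    """Yield matching events from each partition as soon as it is scanned."""
    for partition in partitions:
        hits = [event for event in partition if query(event)]
        if hits:
            yield hits          # surface a partial result immediately
        await asyncio.sleep(0)  # hand control back to the event loop

async def main():
    partitions = [[1, 5, 9], [2, 6, 10], [3, 7, 11]]
    async for batch in export(lambda x: x > 5, partitions):
        print("partial result:", batch)

asyncio.run(main())
```

A client that only needs a "taste" of the result can stop consuming after the first batch, which is exactly what makes low-latency triage possible.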
Type-rich data model: the majority of telemetry and log data is semi-structured, with events of a given type following a schema that describes their structure. VAST should retain this structure and model it without losing domain-specific semantics.
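What "domain-specific semantics" buys can be illustrated with a small sketch (this is not VAST's actual schema language): modeling a connection event with first-class types, such as IP addresses, rather than plain strings, so that queries can use address semantics like subnet membership:

```python
# Illustrative sketch: a connection event with domain-specific types.
from dataclasses import dataclass
from datetime import datetime, timezone
from ipaddress import IPv4Address, ip_network

@dataclass(frozen=True)
class Connection:
    ts: datetime          # event timestamp
    orig_h: IPv4Address   # originator address, a first-class type
    resp_h: IPv4Address   # responder address
    resp_p: int           # responder port
    service: str          # application-layer protocol, e.g. "dns"

event = Connection(
    ts=datetime(2023, 1, 1, tzinfo=timezone.utc),
    orig_h=IPv4Address("192.168.1.7"),
    resp_h=IPv4Address("8.8.8.8"),
    resp_p=53,
    service="dns",
)

# Because addresses are typed, subnet membership is a native operation:
print(event.orig_h in ip_network("192.168.0.0/16"))  # True
```

A string-typed model would force every such query through ad-hoc parsing; a type-rich model keeps the semantics in the data itself.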
Scalability: at input rates of 10k to 1M events per second, VAST quickly accumulates terabytes of data. The system design must therefore include first-class mechanisms for scaling out horizontally.
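To put the upper ingest rate into perspective, a quick calculation shows the daily volume; the 100-byte average event size is an assumption chosen for illustration only:

```python
# Rough daily volume at the upper ingest rate.
# AVG_EVENT_BYTES is an illustrative assumption, not a measured figure.
EVENTS_PER_SECOND = 1_000_000
AVG_EVENT_BYTES = 100
SECONDS_PER_DAY = 86_400

bytes_per_day = EVENTS_PER_SECOND * AVG_EVENT_BYTES * SECONDS_PER_DAY
print(f"{bytes_per_day / 1e12:.2f} TB/day")  # 8.64 TB/day
```

Even at this modest per-event size, a single node accumulates terabytes per day, which is why horizontal scale-out must be designed in from the start.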
Interoperability: with a user group at the intersection of security experts and data scientists, we do not want to dictate the framework for analytics. Instead, VAST should integrate seamlessly into the existing data science ecosystem, including Spark, R, Python, and more.
OLAP: On the spectrum of OLTP and OLAP, VAST resembles an OLAP engine with a focus on multi-dimensional analytical queries. Since VAST works with immutable data that represents activity, mutating operations are out of scope. However, VAST does support ageing out old data.
Generic Computation: The primary job of VAST is to deliver the relevant subset of telemetry data to downstream consumers, such as human analysts or data science applications. To avoid the data-silo behavior of restricting users to a fixed set of supported operations, VAST purposefully keeps built-in computation minimal, enabling a large variety of use cases instead. The zero-copy Apache Arrow bridge allows for efficient sharing of data with much more capable processing engines downstream.