An open source, standard data file format for graph data storage and retrieval
Graph processing serves as an essential building block for a wide variety of real-world applications such as social network analytics, data mining, network routing, and scientific computing.
GraphAr (short for "Graph Archive") is a project that aims to make it easier for diverse applications and systems (in-memory and out-of-core storage engines, databases, graph computing systems, and interactive graph query frameworks) to build and access graph data conveniently and efficiently.
It can be used for importing/exporting and persistent storage of graph data, thereby reducing the burden on systems when working together. Additionally, it can serve as a direct data source for graph processing applications.
To achieve this, the GraphAr project provides:
- The GraphAr format: a standardized system-independent format for storing graph data
- Libraries: a set of libraries for reading, writing and transforming GraphAr format data
By using GraphAr, you can:
- Store and persist your graph data in a system-independent way with the GraphAr format
- Easily access and generate GraphAr format data using the libraries
- Utilize Apache Spark to quickly manipulate and transform your GraphAr format data
The GraphAr format is designed for storing property graphs. It uses metadata to record all the necessary information of a graph, and maintains the actual data in a chunked way.
A property graph consists of vertices and edges. Each vertex contains a unique identifier and:
- A text label that describes the vertex type.
- A collection of properties, each represented as a key-value pair.
Each edge contains a unique identifier and:
- The outgoing vertex (source).
- The incoming vertex (destination).
- A text label that describes the relationship between the two vertices.
- A collection of properties.
The following is an example property graph containing two types of vertices ("person" and "comment") and three types of edges.
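The property graph model above can be sketched with plain Python structures. This is only an illustration of the data model; the class and field names are made up here and are not part of the GraphAr format or any GraphAr API.

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    vid: int                     # unique identifier
    label: str                   # vertex type, e.g. "person" or "comment"
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    src: int                     # outgoing (source) vertex id
    dst: int                     # incoming (destination) vertex id
    label: str                   # relationship between the two vertices
    properties: dict = field(default_factory=dict)

# A tiny instance of the example graph: two "person" vertices
# connected by a "knows" edge.
alice = Vertex(0, "person", {"name": "Alice"})
bob = Vertex(1, "person", {"name": "Bob"})
knows = Edge(alice.vid, bob.vid, "knows", {"since": 2020})
```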
All vertices of the same type constitute a logical vertex table, in which each vertex is assigned a global index within that type (called the internal vertex id), starting from 0 and corresponding to the row number of the vertex in the logical vertex table. An example layout for a logical table of vertices under the type "person" is provided for reference.
Given an internal vertex id and the vertex type, a vertex is uniquely identifiable and its respective properties can be accessed from this table. The internal vertex id is further used to identify the source and destination vertices when maintaining the topology of the graph.
The logical vertex table is partitioned into multiple continuous vertex chunks to enhance reading/writing efficiency. To preserve the ability of random access, the vertex chunk size is fixed for all chunks of the same type. To support accessing only the required properties without reading all properties from the files, and to allow adding properties to vertices without modifying the existing files, the columns of the logical table are divided into several column groups.
Take the person vertex table as an example: if the chunk size is set to 500, the logical table is separated into sub-logical-tables of 500 rows each, with the exception of the last one, which may have fewer than 500 rows. The columns for maintaining properties are also divided into distinct groups (e.g., 2 for our example). As a result, a total of 4 physical vertex tables are created for storing the example logical table, as shown in the following figure.
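The chunking scheme can be sketched in plain Python. The table contents, column names, and sizes below are made up for illustration; GraphAr itself stores each physical table as a separate payload file (e.g., Parquet).

```python
# Illustrative sketch of vertex chunking: 900 "person" vertices with a
# chunk size of 500 yield 2 vertex chunks; 2 column groups then give
# 2 x 2 = 4 physical tables, matching the example in the text.
CHUNK_SIZE = 500
num_vertices = 900  # made-up size for illustration

rows = [{"id": i, "firstName": f"fn{i}", "lastName": f"ln{i}", "gender": "?"}
        for i in range(num_vertices)]

column_groups = [["firstName", "lastName"], ["gender"]]

# (chunk index, column group index) -> physical table (list of rows)
physical_tables = {}
for start in range(0, num_vertices, CHUNK_SIZE):
    chunk = rows[start:start + CHUNK_SIZE]
    for g, cols in enumerate(column_groups):
        # the internal vertex id is kept as a column in every physical
        # table, so each file can be filtered independently
        physical_tables[(start // CHUNK_SIZE, g)] = [
            {"id": r["id"], **{c: r[c] for c in cols}} for r in chunk
        ]
```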
Note
To efficiently utilize the filter push-down of payload file formats like Parquet, the internal vertex id is stored in the payload file as a column. Since the internal vertex id is continuous, the payload file format can apply delta encoding to this column, which adds little storage overhead.
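Why delta encoding is so effective on a continuous id column can be seen in a few lines of Python. This is a conceptual sketch only; real Parquet writers apply an encoding such as DELTA_BINARY_PACKED internally.

```python
# The internal vertex ids within a chunk are continuous, so delta
# encoding reduces the column to one start value plus a run of 1s,
# which compresses extremely well.
ids = list(range(500, 1000))   # ids stored in the second vertex chunk
deltas = [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]
```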
To maintain a type of edges (i.e., those sharing the same triplet of source type, edge type, and destination type), a logical edge table is established. To support quickly creating a graph from the graph storage files, the logical edge table can maintain the topology information in a way similar to CSR/CSC; that is, the edges are ordered by the internal vertex id of either the source or the destination. In this way, an offset table is required to store the start offset of each vertex's edges, and edges with the same source/destination are stored contiguously in the logical table.
Take the logical table for person knows person edges as an example; the logical edge table looks like:
As with the vertex table, the logical edge table is also partitioned into sub-logical-tables, each containing edges whose source (or destination) vertices are in the same vertex chunk. According to the partition strategy and the order of the edges, edges can be stored in GraphAr in one of four ways:
- ordered_by_source: all the edges in the logical table are ordered and further partitioned by the internal vertex id of the source, which can be seen as the CSR format.
- ordered_by_dest: all the edges in the logical table are ordered and further partitioned by the internal vertex id of the destination, which can be seen as the CSC format.
- unordered_by_source: the internal id of the source vertex is used as the partition key to divide the edges into different sub-logical-tables, and the edges in each sub-logical-table are unordered, which can be seen as the COO format.
- unordered_by_dest: the internal id of the destination vertex is used as the partition key to divide the edges into different sub-logical-tables, and the edges in each sub-logical-table are unordered, which can also be seen as the COO format.
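For the ordered_by_source (CSR-like) layout, the offset table can be derived from the sorted edge list. Below is a minimal sketch on a made-up 5-vertex graph, not GraphAr's actual implementation.

```python
# Build the per-vertex offset array from edges sorted by source id
# (CSR style). offsets[v] .. offsets[v + 1] delimit vertex v's edges.
num_vertices = 5
edges = sorted([(0, 1), (0, 3), (1, 2), (3, 0), (3, 4)])  # (src, dst)

offsets = [0] * (num_vertices + 1)
for src, _ in edges:
    offsets[src + 1] += 1          # count out-edges per vertex
for v in range(num_vertices):
    offsets[v + 1] += offsets[v]   # prefix sum -> start offsets
```

With this array, the out-edges of any vertex can be located in constant time, which is what makes the ordered layouts attractive for graph traversal.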
After that, a sub-logical-table is further divided into edge chunks of a predefined, fixed number of rows (referred to as edge chunk size). Finally, an edge chunk is separated into physical tables in the following way:
- an adjList table (which contains only two columns: the internal vertex id of the source and the destination).
- 0 or more edge property tables, each containing a group of properties.
Additionally, there is an offset table for ordered_by_source or ordered_by_dest edges. The offset table records the starting point of the edges for each vertex. The partition of the offset table is aligned with the partition of the corresponding vertex table. The first row of each offset chunk is always 0, indicating the starting point of the corresponding sub-logical-table for edges.
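The division of a sub-logical-table into edge chunks, each holding an adjList table plus property-group tables, can be sketched as follows. The chunk size, edge values, and property name are made up for brevity.

```python
# One sub-logical-table: edges whose source vertices fall in the same
# vertex chunk, already ordered by source. Each row is
# (src, dst, property value).
sub_table = [(0, 1, "2020"), (0, 3, "2021"), (1, 2, "2022"),
             (3, 0, "2023"), (3, 4, "2024")]

EDGE_CHUNK = 4  # illustrative; the example in the text uses 1024

# split into fixed-size edge chunks
edge_chunks = [sub_table[i:i + EDGE_CHUNK]
               for i in range(0, len(sub_table), EDGE_CHUNK)]

# each edge chunk yields one adjList table plus property-group tables
adj_list_tables = [[(s, d) for s, d, _ in c] for c in edge_chunks]
property_tables = [[{"since": p} for _, _, p in c] for c in edge_chunks]
```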
Take the person knows person edges as an illustration. Suppose the vertex chunk size is set to 500 and the edge chunk size to 1024, and the edges are ordered_by_source; then the edges could be saved in the following physical tables:
Our experiments are conducted on an Alibaba Cloud r6.6xlarge instance, equipped with a 24-core Intel(R) Xeon(R) Platinum 8269CY CPU at 2.50GHz and 192GB RAM, running 64-bit Ubuntu 20.04 LTS. The data is hosted on a 200GB PL0 ESSD with a peak I/O throughput of 180MB/s. Additional tests on other platforms and S3-like storage yield similar results.
Here we show statistics of datasets with hundreds of millions of vertices from Graph500 and LDBC. Other datasets involved in the experiments can be found in the paper.
| Abbr. | Graph | \|V\| | \|E\| |
|---|---|---|---|
| G8 | Graph500-28 | 268M | 4.29B |
| G9 | Graph500-29 | 537M | 8.59B |
| SF30 | SNB Interactive SF-30 | 99.4M | 655M |
| SF100 | SNB Interactive SF-100 | 318M | 2.15B |
| SF300 | SNB Interactive SF-300 | 908M | 6.29B |
Two baseline approaches are considered: 1) “plain”, which employs plain encoding for the source and destination columns, and 2) “plain + offset”, which extends the “plain” method by sorting edges and adding an offset column to mark each vertex’s starting edge position. The result is a notable storage advantage: on average, GraphAr requires only 27.3% of the storage needed by the baseline “plain + offset”, which is due to delta encoding.
The results in Figure (a) indicate that GraphAr significantly outperforms the baseline (CSV), achieving an average speedup of 4.9×. In Figure (b), the immutable ("Imm") and mutable ("Mut") variants are two native in-memory storages of GraphScope. Although the querying time with GraphAr exceeds that of the in-memory storages, owing to intrinsic I/O overhead, it significantly surpasses the process of loading the data and then executing the query, by 2.4× and 2.5×, respectively. This indicates that GraphAr is a viable option for executing infrequent queries.
Performance of simple condition filtering. For each graph, we perform experiments where we consider each label individually as the target label for filtering. GraphAr consistently outperforms the baselines. On average, it achieves a speedup of 14.8× over the “string” method, 8.9× over the “binary (plain)” method, and 7.4× over the “binary (RLE)” method.
Performance of complex condition filtering. For each graph, we combine two labels by AND or OR as the filtering condition. The merge-based decoding method yields the largest gain, where “binary (RLE) + merge” outperforms the “binary (RLE)” method by up to 60.5×.
GraphAr offers a collection of libraries for the purpose of reading, writing and transforming files. Currently, the following libraries are available, and plans are in place to expand support to additional programming languages.
See GraphAr C++ Library for details about the building of the C++ library.
See GraphAr Spark Library for details about the Scala with Spark library.
Note
The Java library is under development.
The GraphAr Java library is created with bindings to the C++ library (currently at version v0.10.0), utilizing Alibaba-FastFFI for implementation. See GraphAr Java Library for details about the building of the Java library.
Note
The Python with PySpark library is under development.
The PySpark library is developed as bindings to the GraphAr Spark library. See GraphAr PySpark Library for details about the PySpark library.
- Start with Contributing Guide.
- Submit Issues for bug reports, feature requests.
- Discuss at dev mailing list (subscribe / unsubscribe / archives).
- Ask questions on GitHub Discussions.
GraphAr is distributed under Apache License 2.0. Please note that third-party libraries may not have the same license as GraphAr.
- Xue Li, Weibin Zeng, Zhibin Wang, Diwen Zhu, Jingbo Xu, Wenyuan Yu, Jingren Zhou. GraphAr: An Efficient Storage Scheme for Graph Data in Data Lakes. PVLDB, 18(3): 530 - 543, 2024.
@article{li2024graphar,
author = {Xue Li and Weibin Zeng and Zhibin Wang and Diwen Zhu and Jingbo Xu and Wenyuan Yu and Jingren Zhou},
title = {GraphAr: An Efficient Storage Scheme for Graph Data in Data Lakes},
journal = {Proceedings of the VLDB Endowment},
year = {2024},
volume = {18},
number = {3},
pages = {530--543},
publisher = {VLDB Endowment},
}
The source code, data, and/or other artifacts of the research paper have been made available at the research branch.