ANF: A Fast and Scalable Tool For Data Mining in Massive Graphs
Christopher R. Palmer
Computer Science Dept., Carnegie Mellon University, Pittsburgh, PA
[email protected]

Phillip B. Gibbons
Intel Research Pittsburgh, Pittsburgh, PA
[email protected]

Christos Faloutsos
Computer Science Dept., Carnegie Mellon University, Pittsburgh, PA
[email protected]

ABSTRACT
Graphs are an increasingly important data source, with such important graphs as the Internet and the Web. Other familiar graphs include CAD circuits, phone records, gene sequences, city streets, social networks and academic citations. Any kind of relationship, such as actors appearing in movies, can be represented as a graph. This work presents a data mining tool, called ANF, that can quickly answer a number of interesting questions on graph-represented data, such as the following. How robust is the Internet to failures? What are the most influential database papers? Are there gender differences in movie appearance patterns? At its core, ANF is based on a fast and memory-efficient approach for approximating the complete neighbourhood function for a graph. For the Internet graph (268K nodes), ANF's highly-accurate approximation is more than 700 times faster than the exact computation. This reduces the running time from nearly a day to a matter of a minute or two, allowing users to perform ad hoc drill-down tasks and to repeatedly answer questions about changing data sources. To enable this drill-down, ANF employs new techniques for approximating neighbourhood-type functions for graphs with distinguished nodes and/or edges. When compared to the best existing approximation, ANF's approach is both faster and more accurate, given the same resources. Additionally, unlike previous approaches, ANF scales gracefully to handle disk resident graphs. Finally, we present some of our results from mining large graphs using ANF.
1. INTRODUCTION
Graph-based data is becoming more prevalent and interesting to the data mining community, for example in understanding the Internet and the WWW. These entities are modeled as graphs where each node is a computer, an administrative domain of the Internet, or a web page, and each edge is a connection (network or hyperlink) between the resources. Google is interested in finding the most important nodes in such a graph [2]. Broder et al. studied the connectivity information of nodes in the Internet [13, 3]. The networking community has used different measures of node importance to build a hierarchy of the Internet [20]. Another source of graph data that has been studied is citation graphs [18]. Here, each node is a publication, each edge is a citation from one publication to another, and we wish to know the most important papers. There are many more examples of graphs which contain interesting information for data mining purposes. For example, the telephone calling records from a long distance carrier can be viewed as a graph, and by mining the graph we may help identify fraudulent behaviour or marketing opportunities. DNA sequencing can also be viewed as a graph, and identifying common subsequences is a form of mining that could help scientists. Circuit designs, for example from a CAD system, form a graph, and data mining could be used to find commonly used components, points of failure, etc. In fact, any binary relational table is a graph. For example, in this paper we use a graph derived from the Internet Movie Database [10] where we let each actor and each movie be a node and add an undirected edge between an actor, a, and a movie, m, to indicate that a appeared in m. It is also common to define graphs for board positions in a game. We will consider the simple game of tic-tac-toe. From all of these data sources, we find some prototypical questions which have motivated this work:

1. How robust is the Internet to failures?
2. Is the Canadian Internet similar to the French?
3. Does the Internet have a hierarchical structure?
4. Are phone call patterns (caller-callee) in Asia similar to those in the U.S.?
5. Does a new circuit design appear similar to a previously patented design?
6. What are the most influential database papers?
7. What is the best opening move in tic-tac-toe?
8. Which set of street closures would least affect traffic?
9. Are there gender differences in movie appearances?
10. Cluster movie genres.
It is possible to answer all of these questions by computing three graph properties pertaining to the connectivity or neighbourhood structure of the graph:

Graph Similarity: Given two graphs, G1 and G2, do the graphs have similar connectivity / neighbourhood structure (independent of their sizes)? Such a similarity measure is useful for answering questions 1, 4, and 5.

Subgraph Similarity: Given two subsets of the vertices of the graph, V1 and V2, compare how these two induced subgraphs are connected within the graph. Such a similarity measure is useful for answering questions 2, 4, 8, 9, and 10.

Vertex Importance: Assign an importance to each node in the graph based on the connectivity. This importance measure is useful for answering questions 1, 3, 6, and 7.

We answer questions 1, 7 and 10 in this paper, one from each of the three types. The remaining questions can be answered in a similar fashion, using various forms of the neighbourhood function. The basic neighbourhood function, N(h), for a graph, also called the hop plot [8], is the number of pairs of nodes within a specified distance h, for all distances h. In section 2 we will define this more formally and present a more general form of the neighbourhood function that can be used to compute all three graph properties.

The main contribution of this paper is a tool that allows us to compute these three graph properties, thereby enabling us to answer interesting questions like those we suggested. Beyond simply answering the questions, we want our tool to be fast enough to allow drill-down tasks. That is, we want it to be possible to interactively answer users' requests. For example, to determine the best roads to close for a parade, the city planner would want to interactively consider various sets of street closures and compare the effect on traffic. Almost in contrast to the need to run interactively on graphs, we also want a tool that scales to very large graphs. In [3, 13], measuring properties about the web required hardware resources that are beyond the means of most researchers. Instead, we produce a data mining tool that is useful given any amount of RAM. These two goals give rise to the following list of properties that our tool must satisfy when analyzing a graph with n nodes and m edges:

- Error guarantees: estimates must be accurate at all distances (not just in the limit).
- Fast: scale linearly with the number of nodes and edges (n, m).
- Low storage requirements: use only O(n) additional storage.
- Adapts to the available memory: when the node set does not fit in memory, make effective use of the available memory.
- Parallelizable: for massive graphs, must be able to distribute the work over multiple processors and/or multiple workstations, with low overheads.
- Sequential scans of the edge file: avoid random accesses to the graph. Random accesses exhibit horrible paging performance for the common case that the graph is larger than the available memory.
- Estimates per node: must be able to estimate the neighbourhood function from each node, not just for the graph as a whole.

This paper presents such a tool, which we call ANF for Approximate Neighbourhood Function. In the literature, we have found two existing approaches that could prove useful for computing the neighbourhood function. We show that neither meets our requirements, primarily because neither scales well to very large graphs. This can be seen in Figure 1, which plots the running time versus the graph size for some randomly-generated graphs.
[Figure 1: ANF algorithms scale but not the others. Running time vs. millions of edges for the RI approximation (32 trials), 0.15% exact-on-sample, ANF-0, ANF and ANF-C (32 trials each).]
The two existing approaches (the RI approximation scheme [4] and a random sampling approach) scale very poorly, while our ANF schemes scale much more gracefully and make it possible to investigate much larger graphs. In section 2 we provide background material, definitions and a survey of the related work. In section 3 we describe our ANF algorithms. In section 4, we present experimental results demonstrating the scalability of our approach. In addition, we show that, given the same resources, (1) ANF is much more accurate and faster than the RI approximation scheme, and (2) ANF is more than 700 times faster than the exact computation for a snapshot of the Internet graph (268K nodes). In section 5, we use ANF to answer some of the prototypical questions posed in this introduction.
To deal with subgraphs, we generalize these two definitions slightly. Let S be a set of starting nodes and C be a set of concluding nodes. We are interested in the number of pairs starting from a node in S and ending at a node in C within distance h:

Def. 3. The generalized individual neighbourhood function for u at h given C is the number of nodes in C within distance h: IN+(u, h, C) = |{v ∈ C : dist(u, v) ≤ h}|. Note that IN(u, h) = IN+(u, h, V).

Def. 4. The generalized neighbourhood function at h is the number of pairs of a node from S and a node from C that are within distance h or less: N+(h, S, C) = |{(u, v) : u ∈ S, v ∈ C, dist(u, v) ≤ h}| = Σ_{u ∈ S} IN+(u, h, C). Note that N(h) = N+(h, V, V).

In section 5 we will use the neighbourhood function to characterize graphs. We will compare N_G1(h) to N_G2(h) to measure the similarity in connectivity/neighbourhood structure of graphs G1 and G2. For example, if we want to know the structural similarity of the Web from 1999 to today's Web, we can compute their neighbourhood functions and compare them at all points. Comparing subgraphs induced by vertex sets V1 and V2 can be done by comparing N+(h, V1, V1) to N+(h, V2, V2). E.g., let V1 be the routers in the Canadian domain and V2 be the routers in the French domain. Finally, we will use the individual neighbourhood function for a node to characterize its importance, with respect to the connectivity. E.g., the most important router is the one that in some way reaches the most routers the fastest. Thus, if we can compute all these variants of the neighbourhood function efficiently, then we can answer the graph questions that we posed in the introduction.

For disk resident graphs, a breadth-first search results in an expensive random-like access to the disk blocks. This appears to be the state-of-the-art solution for exact computation of N(h). The transitive closure is N(∞) or, equivalently, N(d), where d is the diameter of the graph. Lipton and Naughton [14] presented an O(n√m) algorithm for estimating the transitive closure that uses an adaptive sampling approach for selecting starting nodes of breadth-first traversals. Motivated by this work, in section 4 we will experimentally evaluate a similar sampling strategy to discover that it scales poorly to large graphs and, due to the random-like access to the edge file, it does not scale to graphs larger than the RAM. Most importantly, however, we will find that the quality of this approximation can be quite poor (we show an example where even a sample as large as 15% does not provide a useful approximation!). Lipton and Naughton's work was improved by Cohen, who gave an O(m) time algorithm using only O(n + m) memory [4]. Cohen also presented an O(m log n) time algorithm for estimating the individual neighbourhood functions, IN(u, h). This appears to be the only previous work which attempts to approximate the neighbourhood function. More details of this algorithm, which we refer to as the RI approximation, appear in section 4.1.1 when we experimentally compare it to our new approximation. Our experiments demonstrate that the RI approximation is ill-suited for large graphs; this is due to its extensive use of random-like access (for breadth-first search, heap data structures, etc.). The problem of random access to a disk resident edge file has been addressed in [15]. They find that it is possible to define good storage layouts for undirected graphs, but that the storage blowup can be very large. Given that we are interested only in very large graphs and graphs with directed edges, this does not solve the problems related to large edge files. Instead, we will need to find a new computation strategy which avoids random access to disk. State-of-the-art approaches to understanding/characterizing the Internet and the Web very often make use of neighbourhood information [3, 13, 1, 20]. Other recent work in data mining for graphs has focused on mining frequent substructures. Essentially, the problem of finding frequent itemsets is generalized to frequent subgraphs. Systems for this include SUBDUE [5] and AGM [11]. Graphs have been used to improve marketing strategies [7]. A survey of work on citation analysis appears in [18].
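For reference, the exact baseline that ANF approximates can be computed with one breadth-first search per starting node. The following is a minimal Python sketch of Defs. 3 and 4 (our own illustration; function and parameter names are ours, not the paper's): its O(n·m) total cost and random access pattern are exactly what the requirements above rule out for massive graphs.

```python
from collections import deque

def individual_nf(adj, u, max_h, C=None):
    """Exact IN+(u, h, C) for h = 0..max_h via breadth-first search.
    adj maps each node to its out-neighbours; C is the concluding
    set (None means all nodes, giving IN(u, h))."""
    dist = {u: 0}
    queue = deque([u])
    counts = [0] * (max_h + 1)
    while queue:
        x = queue.popleft()
        if C is None or x in C:
            counts[dist[x]] += 1      # x is first reached at exactly dist[x]
        if dist[x] < max_h:
            for y in adj.get(x, []):
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
    for h in range(1, max_h + 1):     # convert "exactly h" to "within h"
        counts[h] += counts[h - 1]
    return counts

def neighbourhood_fn(adj, max_h, S=None, C=None):
    """Exact N+(h, S, C) = sum of IN+(u, h, C) over u in S."""
    totals = [0] * (max_h + 1)
    for u in (adj if S is None else S):
        for h, c in enumerate(individual_nf(adj, u, max_h, C)):
            totals[h] += c
    return totals
```

On a 5-node undirected cycle, adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}, neighbourhood_fn(adj, 2) returns [5, 15, 25]; these are the exact values that the approximation example in section 3 targets.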
// Set M(x, 0) = {x}
FOR each node x DO
    M(x, 0) = concatenation of k bitmasks, each with 1 bit set (P(bit i) = 0.5^(i+1))
FOR each distance h starting with 1 DO
    FOR each node x DO
        M(x, h) = M(x, h-1)
    // Update M(x, h) by adding one step
    FOR each edge (x, y) DO
        M(x, h) = (M(x, h) BITWISE-OR M(y, h-1))
    // Compute the estimates for this h
    FOR each node x DO
        // Individual estimate, where b is the average position of the
        // least zero bits in the k bitmasks
        ÎN(x, h) = 2^b / 0.77351
    The estimate is: N̂(h) = Σ over all x of ÎN(x, h)

Figure 2: Introduction to the basic ANF algorithm
x    M(x, 0)        M(x, 1)        M(x, 2)
0    100 100 001    110 110 101    110 111 101
1    010 100 100    110 101 101    110 111 101
2    100 001 100    110 101 100    110 111 101
3    100 100 100    100 111 100    110 111 101
4    100 010 100    100 110 101    110 111 101

Figure 3: Bitmasks for a 5-node undirected cycle with k = 3 and r = 0
the data structure to disk and to create an algorithm that meets all of our requirements. Finally, we will extend this algorithm with bit compression to further increase its speed.
FOR each node x DO
    IF x ∈ C THEN
        M_cur(x) = concatenation of k bitmasks, each with 1 bit set (P(bit i) = 0.5^(i+1))
    ELSE M_cur(x) = all zeros
FOR each distance h starting with 1 DO
    FOR each node x DO
        M_last(x) = M_cur(x)
    FOR each edge (x, y) DO
        M_cur(x) = (M_cur(x) BITWISE-OR M_last(y))
    FOR each node x DO
        ÎN+(x, h, C) = 2^b / 0.77351, where b is the average position
            of the least zero bit in the k bitmasks
    N̂+(h, S, C) = Σ over x ∈ S of ÎN+(x, h, C)

Figure 4: ANF-0: In-core ANF
any of them (bit 1 is not set), then we probably saw about 4 or fewer nodes. So, the approximation of the size of the set M(x, h) is proportional to 2^b, where b is the least bit number in M(x, h) that has not been set. We refer the reader to [9] for a derivation of the constant of proportionality and a proof that this estimate has good error bounds. A single approximation is obviously not very robust. We do k parallel approximations by treating M(x, h) as a bitstring of length k(log n + r) bits. Figure 2 shows the complete algorithm implementing the edge-scan based ANF.

Example. Figure 3 shows the bitmasks and approximations for a simple example of our most basic ANF algorithm. The purpose is to clarify the concatenation of the bitmasks and to illustrate the computation. The input is a 5-node undirected cycle and we used parameters k = 3 and r = 0. The first FOR loop of the algorithm generates the table of random bitmasks M(x, 0). That is, using an exponential distribution, we randomly set one bit in each of the three concatenated bitmasks. (In the figure, bit 0 is the leftmost bit in each 3-bit mask.) Then, each iteration uses the OR operation to combine the nodes that a node could reach in h-1 steps with the ones that its neighbours could reach in h-1 steps. For example, M(2, 1) is just M(1, 0) OR M(2, 0) OR M(3, 0), because nodes 1 and 3 are the neighbours of node 2. The estimates are computed from the average of the least-zero-bit positions. For example, for M(2, 1) = 110 101 100 these positions are 2, 1 and 1, so b = 4/3 and ÎN(2, 1) = 2^(4/3) / 0.77351 ≈ 3.26 (the exact value is 3).

The algorithm in Figure 2 uses an excessive amount of memory and does not estimate the more general forms of the neighbourhood function. Figure 4 depicts the same algorithm, with the following improvements:

- M(x, h) uses M(y, h-1) but never M(y, h-2). Thus we use M_cur(x) to hold M(x, h) and M_last(y) to hold M(y, h-1) during iteration h.
- The starting nodes, S, just change the estimate by summing over x ∈ S instead of x ∈ V. In terms of implementation, this can be done by extending M_cur to hold a marked bit indicating membership in S.
- The concluding nodes change the h = 0 case. Now M(x, 0) is {} if x ∉ C, since such a node can reach no nodes in C in zero steps. Thus nodes not in C are initially assigned a bitmask of all 0s.

The ANF-0 algorithm meets all but one of the requirements set out in the introduction:

- Error guarantees: each IN+(x, h, C) is provably estimated with low error with high confidence.
- Fast: running time is O((n + m)d), which we expect to be fast since d is typically quite small (verified in section 4).
- Low storage requirements: only additional memory for M_cur and M_last.
- Adapts to the available memory? No! We will address this issue in the next section.
- Easily parallelizable: partition the nodes among the processors; each processor may then independently compute M_cur for each x in its set. Synchronization is only needed after each iteration.
- Sequential scans of the edge file: yes.
- Estimates IN(x, h): yes, with provable accuracy.
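To make the bitmask propagation and the 2^b / 0.77351 estimate concrete, here is a minimal in-core Python sketch of the basic edge-scan algorithm (our own illustration, not the paper's implementation; it returns the whole-graph N̂(h) and omits the S and C generalizations):

```python
import random

def least_zero_bit(m):
    """Position of the lowest 0 bit in integer m."""
    b = 0
    while m & (1 << b):
        b += 1
    return b

def anf(adj, max_h, k=32, r=7):
    """Approximate N(h) for h = 1..max_h. adj: dict node -> neighbours."""
    nodes = list(adj)
    length = (len(nodes) - 1).bit_length() + r    # bits per mask
    cur = {}
    for x in nodes:
        masks = []
        for _ in range(k):
            i = 0                                  # P(bit i) = 0.5^(i+1)
            while i < length - 1 and random.random() < 0.5:
                i += 1
            masks.append(1 << i)
        cur[x] = masks
    estimates = []
    for h in range(1, max_h + 1):
        last = {x: list(m) for x, m in cur.items()}     # M(., h-1)
        for x in nodes:
            for y in adj[x]:                            # edge (x, y)
                cur[x] = [a | b for a, b in zip(cur[x], last[y])]
        n_hat = 0.0
        for x in nodes:    # IN-hat(x, h) = 2^b / 0.77351
            b = sum(least_zero_bit(m) for m in cur[x]) / k
            n_hat += 2 ** b / 0.77351
        estimates.append(n_hat)                         # N-hat(h)
    return estimates
```

On the cycle from Figure 3, anf(adj, 2, k=64) returns values close to the exact N(1) = 15 and N(2) = 25; accuracy improves with k at a linear cost in time and space.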
Select the number of buckets b1 and b2
Partition the edges into the buckets (sorted by bucket)
FOR each node x DO
    IF x ∈ C THEN
        M_cur(x) = concatenation of k bitmasks, each with 1 bit set (P(bit i) = 0.5^(i+1))
    IF x ∈ S THEN mark(M_cur(x))
    IF current buffer is full THEN switch buffers and perform I/O
Flush any buffers that need to be written
FOR each distance h DO
    Fetch the data for the first bucket of M_cur and M_last
    Prefetch next buckets of M_cur and M_last
    FOR each edge (x, y) DO
        IF M_cur(x) is not in memory THEN
            // We have been flushing and prefetching it
            Wait for it if necessary
            Asynchronously flush modified buffer
            Begin prefetching next buffer
        IF M_last(y) is not in memory THEN
            // We have been prefetching it
            Wait for it if necessary
            Begin prefetching next buffer
        M_cur(x) = (M_cur(x) OR M_last(y))
    // Copy M_cur(u) to M_last(u) as we stream through M_cur(u),
    // computing the estimate
    est = 0
    Fetch the data for the first bucket of M_cur
    FOR each node x DO
        IF M_cur(x) is not in memory THEN
            // We have been prefetching it
            Wait for it to be available
            Start prefetching the next buffer
        M_last(x) = M_cur(x)
        IF x is the last element in its bucket of M_last THEN
            Asynchronously flush the buffer
            Continue processing in the double buffer
        IF marked(M_cur(x)) THEN
            ÎN+(x, h, C) = 2^b / 0.77351, where b is the average position
                of the least zero bits in the k bitmasks
            est += ÎN+(x, h, C)
    output N̂+(h, S, C) = est
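The full buffering logic above is hard to condense, but the core disk-friendly idea (stream a pre-sorted edge file while keeping the bitmask arrays out of core) can be sketched with memory-mapped arrays. This simplified stand-in is our own, with hypothetical file layouts; the OS page cache plays the role of ANF's explicit double buffering and asynchronous prefetching:

```python
import numpy as np

def anf_disk_iteration(edge_file, mcur_file, mlast_file, n, words):
    """One distance iteration of M_cur(x) |= M_last(y) over a
    disk-resident graph. Bitmasks are n x words arrays of 64-bit
    words; edges are uint32 pairs, pre-sorted by x's bucket so that
    writes to M_cur stay sequential."""
    mcur = np.memmap(mcur_file, dtype=np.uint64, mode='r+', shape=(n, words))
    mlast = np.memmap(mlast_file, dtype=np.uint64, mode='r', shape=(n, words))
    edges = np.fromfile(edge_file, dtype=np.uint32).reshape(-1, 2)
    for x, y in edges:
        # the real algorithm prefetches M_last(y) and flushes M_cur
        # buckets asynchronously; here the OS pages data in on demand
        mcur[x] |= mlast[y]
    mcur.flush()
```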
4. EXPERIMENTAL EVALUATION
In this section we present an experimental validation of our ANF approximation. Two alternative approaches will be introduced and then we will describe our data sets. Next, we propose a metric for comparing two neighbourhood functions (functions over a potentially large domain). We conduct a sensitivity analysis of the parameter r. Then, we pick a value of r and we compare ANF to the approximation presented in [4] for various settings of the parameter k. We then show that sampling can provide very poor estimates and, finally, we examine the scalability of all approaches. The key results from this section answer these questions:

1. Is ANF sensitive to r, the number of extra bits?
2. Is ANF faster than existing approaches?
3. Is ANF more accurate than existing approaches?
4. Does ANF really scale to very large graphs? Do the others?
4.1 Framework

4.1.1 RI approximation scheme

The RI approximation algorithm is based on the approximate counting scheme proposed in [4]. To estimate the number of distinct elements in a multi-set, assign each element a random value in [0, 1] and record the least of these values added to the set. The estimated size is the reciprocal of the least value seen, minus 1. This approximate counting scheme was used to estimate the individual neighbourhood functions with the following algorithm. We need to know, for each node u, the minimum value v_h of a node reachable from u in h hops. Then, the estimate for IN(u, h) is 1/v_h - 1. An equivalent, but more efficient, algorithm was presented which uses breadth-first searches. It was shown that this improved procedure takes only O(m log n) time (with high probability). To reduce the variance in the estimates, the entire algorithm is repeated, averaging over the estimates.

4.1.2 Sampling

We can sample by selecting random edges, random nodes or random starting nodes for the breadth-first search. Randomly selecting a set of nodes (and all edges for which both end-points are in this set) and randomly selecting a set of edges are unlikely to produce a useful sample. For example, imagine sampling a cycle: anything but a very large sample will leave disconnected arcs which have very different properties. For completeness we verified that these approaches produced useless estimates. The last approach is much more compelling. It is akin to the sampling done in [14]. Recall that the neighbourhood function is N(h) = Σ_{u ∈ V} IN(u, h). Rather than summing over all nodes u, we could sum over only a sample of the nodes while using breadth-first searches to compute the exact IN(u, h). We call this method exact-on-sample and it has the potential to provide great estimates: a single sample of a cycle will provide an exact solution. However, experimentally we find that this approach also has the potential to provide very poor estimates. Additionally, we find that it does not scale to large graphs because of the random-like access to the edge file due to its use of breadth-first search.

4.1.3 Data Sets

We have collected three real data sets and generated four synthetic data sets. These data sets have a variety of properties and cover many of the potential applications of the neighbourhood function. Some summary information is provided in Table 2. "Prac. Diam." is the practical diameter, which we use informally to mean the distance which includes most of the pairs of points. We use three real data sets:

- Router: Undirected Internet routers data from ISI [19], including scans done by Lucent Bell Laboratories [12].
- Cornell: A crawl of the Cornell web site by Mark Craven.
- Cora: The CORA project at JustResearch found research papers on the web and provided a citation graph [6].

and four synthetic data sets:

- Cycle: A single simple undirected cycle (circle).
- Grid: A 2D planar grid (undirected).
- Uniform: Graph with random (undirected) edges.
- 80-20: Very skewed graph generated in an Internet-like fashion with undirected edges using the method in [17].

4.1.4 Evaluation Metric

We are approximating functions defined over d points. Let N be the true neighbourhood function and N̂ be the candidate approximation. To measure the error of N̂(h), we use the standard relative error metric, rel(N(h), N̂(h)) = |N(h) - N̂(h)| / N(h). To measure the overall error of N̂, we use the root mean square (RMS) of the pointwise relative errors. Thus, the error function, e, is:

    e(N, N̂) = sqrt( (1/(d-1)) Σ_{h=2..d} rel(N(h), N̂(h))² )

Note that the RMS is computed beginning with h = 2. Since N(0) = |V| and N(1) = |E|, we do not require approximations for these points.
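As a concrete reference for this framework, here is a small Python sketch (our own; the names are hypothetical, and this is not code from either paper) of the min-value counting idea behind the RI scheme and of the error metric just defined:

```python
import random

def min_value_estimate(elements, trials=8):
    """Cohen-style distinct-count estimate [4]: give each distinct
    element a random value in [0, 1]; the size estimate is
    1/min - 1, averaged over independent trials."""
    distinct = set(elements)
    total = 0.0
    for _ in range(trials):
        least = min(random.random() for _ in distinct)
        total += 1.0 / least - 1.0
    return total / trials

def rms_relative_error(true_nf, approx_nf):
    """RMS of pointwise relative errors over h = 2..d."""
    errs = [abs(t - a) / t for t, a in zip(true_nf[2:], approx_nf[2:])]
    return (sum(e * e for e in errs) / len(errs)) ** 0.5
```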
4.2 Results
4.2.1 Parameter Sensitivity
ANF has two parameters: the number of parallel approximations, k, and the number of additional bits, r. The number of approximations, k, is a typical trade-off between time and accuracy. We consider this in section 4.2.2 and fix k = 64 for the time being. Additional experiments were run with other values of k which produced similar results. To measure the sensitivity we averaged the RMS error over 10 trials for different values of r and the different data sets. These results appear in Figure 6 and we see that the accuracy is not very sensitive to the value of r. (The lines between the points are a visual aid only.) We find that r = 5 or r = 7 provides consistent results.

[Figure 6: RMS error per data set (cora, router, ...) using 1, 3, 5, 7 and 9 extra bits. Results are not sensitive to r.]
4.2.2 Accuracy
We now examine the accuracy of the ANF approximation. To do so, we compare our accuracy with a highly tuned implementation of the RI approximation (the only existing approach). Now we fix r = 7 and consider three values of k: 32, 64 and 128. We average the error over 10 trials of each approximation scheme. The first row of Figure 7 shows the accuracy of each of the k values for each data set and each algorithm, while the second row shows the corresponding running times. We see that:

- ANF's error is independent of the data sets. The RI approximation's error varies significantly between data sets.
- ANF achieves less than 10%, 7% and 5% errors for k = 32, k = 64 and k = 128, respectively. RI has errors of 27%, 14% and 12% for k = 32, k = 64 and k = 128, respectively.
- ANF is much faster than RI, particularly on the larger graphs, with up to 3 times savings.

Using much less time, ANF is much more accurate than RI. Thus, even for the case of graphs that may be stored in memory, we have a significant improvement.
4.2.3 Sampling

There are three problems with the described exact-on-sample approach. First, it has heavy memory requirements because fast breadth-first search requires that the edge file fit in memory. Second, the quality is dependent on the graph because there are no bounds on the error. Third, it is not possible to compute the individual neighbourhood functions. We now provide an example which demonstrates the first two problems. Figure 8(a) helps illustrate our example graph. First, create a chain of d-2 nodes that starts from a node r and ends at a node x. Add N nodes to the center of the graph, each of which has a directed edge to r and a directed edge from x. This graph has diameter d and a neighbourhood function that is O(N) for each distance less than d and O(N²) for distance d. Finally, define a set of s source nodes that have an edge to each of the N center nodes and a set of t terminal nodes that have an edge from each of the N center nodes. If N ≫ s and N ≫ t, then the majority of the sampled nodes will be from the N center nodes and very few will be from the s source nodes. This will result in an error that is a factor of around s/p for exact-on-sample using a p% sample. We measure the error and the running time over 20 trials for a variety of sample sizes ranging from .1% to 15% on a graph generated with N = 25,000, s = 100, t = 100 and d = 6. Figure 8(b) shows the large errors, more than 20%, even for very large samples. To illustrate the scalability issues for exact-on-sample, we constructed a graph with N = 250,000, s = t = 5 and d = 6. We then increased s and t proportionately to generate larger graphs. Figure 8(c) shows that as the graph grows larger, exact-on-sample scales about as well as ANF, but as soon as the edge file no longer fits in memory (approximately 27 million edges) we see approximately a two order of magnitude increase in the running time of the exact-on-sample approach. Thus, we conclude that exact-on-sample scales very poorly to graphs that are larger than the available memory.
4.2.4 Scalability

Table 3 reports wall-clock running times on an otherwise unloaded Pentium II-450 machine for both the exact computation (breadth-first search) and ANF with k = 64 parallel approximations. We chose k = 64 since it provides much less than a 10% error, which should be acceptable for most situations. The approximations are quite fast and, for the Router data set, we have reduced the running time from approximately a day down to less than 2 minutes. This makes it possible to run drill-down tasks on much larger data sets than before. Overall, we find that ANF is up to 700 times faster than the exact computation on our data sets.

ANF also scales to much larger graphs than the alternatives. We generated random graphs, placing edges randomly between nodes. We increased the number of nodes and edges while preserving an edge:node ratio of 8:1 (based on the average degree found in two large crawls of the Web [3]). Figure 1 (in the introduction) shows the running times for the ANF variants, the RI approximation and exact-on-sample. Parameters for each alternative were chosen such that they all had approximately the same running time for the first data point. These values are k = 32 for the ANF variants, k = 8 for RI and p = 0.0015 for exact-on-sample. We find that:

1. Exact-on-sample scales much worse than linearly. For a fixed sampling rate, we expect it to scale quadratically when we increase the number of nodes and edges.
2. RI very quickly exhausts its resources due to its data structures. Because RI was not designed to avoid random accesses, it has horrible paging behaviour and, after about 2 million edges, we had to stop its timing experiment.
3. ANF-0 suffers from similar swapping issues when it exhausts the memory at around 8 million edges, and it too had to be stopped.
4. Approximate counting methods [9, 4] are not enough for disk resident graphs.
5. ANF/ANF-C scale the best, growing piece-wise linearly with the size of the graph. The break points are: all data fits in memory (about 8 million edges), M_cur fits in memory (about 16 million edges) and neither fits in memory (the rest). This is as expected.
6. ANF-C offers up to a 23% speed-up over ANF.

Thus, ANF is the only algorithm that scales to large graphs and does so with a linear increase in running time.
[Figure 7: Our ANF algorithm provides more accurate and faster results than the RI approximation. Top row: RMS relative error per data set (cornell, cycle, grid, uniform, 80-20, cora, router) for (a) k = 32 and (b) k = 64; bottom row: corresponding wall-clock running times in minutes for (d) k = 32 and (e) k = 64, comparing RI Approx. and ANF.]
[Figure 8: Sampled breadth-first search can provide huge errors and does not scale to very large edge files. (a) The example graph, with s source nodes, N center nodes and t terminal nodes chained through x; (b) RMS relative error for exact-on-sample (.1%-15% samples) vs. ANF (2-256 trials); (c) running time in seconds (log scale) vs. millions of edges.]

those questions. However, all 10 questions can be answered by the same approaches that we will now demonstrate. The approach is to compute various neighbourhood functions and then to compare them. Our tool allows for a detailed comparison of these functions. However, comparing neighbourhood functions requires that we compare two functions over potentially large domains (the domain is {1, 2, ..., d}). Instead, in this paper we will focus on a summarized statistic derived from the neighbourhood function, called the hop exponent. Many real graphs [8] have a neighbourhood function that follows a power law: N(h) ∝ h^H. The exponent, H, has been defined as the hop exponent (similarly, Hx is the individual hop exponent for a node x). There are three interesting observations about the hop exponent that make it an appealing metric. First, if the power law holds, the neighbourhood function will have a linear section with slope H when viewed in log-log scale. Second, the hop exponent is, informally, the intrinsic dimensionality
of the graph. A cycle has a hop exponent of 1 while a grid has a hop exponent of 2, which corresponds with some idea of their dimensionality. Third, if two graphs have different hop exponents, there is no way that they could be similar. While not all neighbourhood functions will actually follow a power law, we have found that the hop exponent still fairly reasonably captures the growth of the neighbourhood function. To compute the hop exponent, we first truncate the neighbourhood function at d_e, the effective diameter, then we compute the least-squares line of best fit in log-log space to extract the slope of this line. The slope is the hop exponent of the graph and we use it as our measure of the growth of a neighbourhood function. We define d_e to be the least h such that it includes 90% of the pairs of nodes. We use the individual hop exponent, Hx, as a measure of a node's importance with respect to the connectivity of a graph. We can answer some of the proposed questions.
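A minimal sketch of this computation (our own; it assumes nf[h] holds N(h) for h = 0..d, so nf[-1] approximates the total number of pairs, and it needs at least two fit points):

```python
import math

def hop_exponent(nf, coverage=0.90):
    """Slope of the least-squares fit to log N(h) vs. log h for
    h = 1..d_e, where d_e is the least h covering 90% of the pairs."""
    d_e = next(h for h, v in enumerate(nf) if v >= coverage * nf[-1])
    d_e = max(d_e, 2)                       # need at least two fit points
    xs = [math.log(h) for h in range(1, d_e + 1)]
    ys = [math.log(nf[h]) for h in range(1, d_e + 1)]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Fed the neighbourhood function of a long cycle this returns roughly 1, and roughly 2 for a 2D grid, matching the dimensionality intuition above.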
dramas, comedies, etc). For each genre, we take the set of movies in that genre and the set of actors which appear in one or more of those movies. We then cluster these graphs by computing the hop exponents and forming clusters that have similar hop exponents (less than 0.1 difference). This clustering appears in Figure 10. One interesting cluster is {mystery, musical, western, war}, which actually corresponds to movies that are typically older. Finally, other fringe genres such as Adult turn out to be well separated from the others.

[Figure 10: Movie genre clusters sorted in increasing hop exponent value: Film-Noir, Animation, Short, Adult, Fantasy, Documentary, Family, Mystery, Musical, Western, War, Sci-Fi, Romance, Horror, Adventure, Crime, Thriller, Action, Comedy, Drama.]
5.1 Tic-Tac-Toe

Tic-tac-toe is a simple game in which two players, one using X and the other using O, alternately place their mark on an unoccupied square in a 3x3 grid. The winner is the first player to connect 3 of their symbols in a row, column or diagonal. The best opening move for X is the center square, the next best is any of the 4 corners and the worst moves are the 4 remaining squares. To verify that our notion of importance has some potential use, we will use our ANF tool to discover this same rule. Construct a graph where each node is a valid board and add an edge from board x to board y to indicate that this is a possible move. Let C, the concluding set, be the set of all boards in which X wins. Compute the individual neighbourhood functions for each of the 9 possible first moves by X, which give their importance (the speed at which they attain winning positions from each of these moves). Figure 9 shows these importances along with the difference between each and the best move. ANF determined the correct importance of each opening move. Using Figure 9(b), we see that the center is only slightly better than a corner square which is, in turn, much better than the remaining 4 squares. This shows both the correct ordering of the starting moves and the relative importance of each.

Figure 9: ANF finds the best starting move for X

(a) ANF importance:
2.50 2.37 2.50
2.36 2.57 2.36
2.50 2.36 2.51

(b) Delta importance (relative to the center square):
-.07 -.20 -.07
-.21      -.21
-.07 -.21 -.06
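As a sketch of this construction (our own illustration), the following builds the tic-tac-toe game graph and the concluding set C of boards where X wins; the individual_nf sketch from section 2 (or ANF's approximation of IN+) can then score each opening move:

```python
def winner(board):
    """board: tuple of 9 cells in {'X', 'O', ' '}; returns 'X', 'O' or None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def build_game_graph():
    """Nodes are reachable boards, edges are legal moves, x_wins is C."""
    adj, x_wins = {}, set()
    def expand(board, player):
        if board in adj:
            return
        adj[board] = []
        if winner(board) == 'X':
            x_wins.add(board)
        if winner(board) or ' ' not in board:
            return                        # game over: no outgoing moves
        for i, cell in enumerate(board):
            if cell == ' ':
                nxt = board[:i] + (player,) + board[i + 1:]
                adj[board].append(nxt)
                expand(nxt, 'O' if player == 'X' else 'X')
    expand((' ',) * 9, 'X')
    return adj, x_wins

# Score each of X's 9 opening moves by how quickly winning boards
# accumulate (exact IN+ here for brevity; the paper turns these
# counts into an individual hop exponent).
adj, x_wins = build_game_graph()
for move in adj[(' ',) * 9]:
    reachable_wins = individual_nf(adj, move, 8, C=x_wins)
```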
6. CONCLUSIONS
In this paper we presented 10 interesting data mining questions on graph data, proposed an efficient and accurate approximation algorithm that gives us the tool, ANF, we needed to answer these questions, and presented results for three of these questions on real-world data. We have found ANF to be quite useful for these and other questions that can be addressed by studying the neighbourhood structure of the
underlying graphs (e.g., we have used ANF to study the most important movie actors).

[Figure 11: Effect of router failures on the Internet. (a) Number of pairs of nodes that can communicate and (b) hop exponent, vs. number of deleted nodes (the graph had approx. 285K nodes), deleting nodes by (uniform) random selection, by decreasing individual hop exponent, and by decreasing node degree.]

We experimentally verified that ANF provides the following advantages:

- Highly-accurate estimates: provable bounds which we also verified experimentally, finding less than a 7% error when using k = 64 parallel approximations (for all our synthetic and real-world data sets).
- Orders of magnitude faster: on the seven data sets used in this paper, our algorithm is up to 700 times faster than the exact computation. It is also up to 3 times faster than the RI approximation scheme.
- Low storage requirements: given the edge file, our algorithm uses only O(n) additional storage.
- Adapts to the available memory: we presented a disk-based version of our algorithm and experimentally verified that it scales with the graph size.
- Can be parallelized: our ANF algorithm may be parallelized with very few synchronization points.
- Employs sequential scans: unlike prior approximations of the neighbourhood function, our algorithm avoids random access of the edge file and performs one sequential scan of the edge file per hop.
- Individual neighbourhood functions for free: ANF computes approximations of the individual neighbourhood functions as a byproduct of the computation. These approximations proved to be very useful in identifying the important nodes in a graph.

Even for the case that graphs (and data structures) fit into memory, ANF represents a significant improvement in speed and accuracy. When graphs get too large to be processed effectively in main memory, ANF makes it possible to answer questions that would have been at least infeasible, if not impossible, to answer before. In addition to its speed, we found the neighbourhood measures to be useful for discovering the following answers to our prototypical questions:

1. We found the best opening moves in tic-tac-toe.
2. We clustered movie genres.
3. We found that the Internet is resilient to random failures while targeted failures can quickly create disconnected components.
4. We found that sampling the Internet actually preserves some connectivity patterns while targeted failures truly distort it.
7. REFERENCES
[1] L. A. Adamic. The small world Web. In Proceedings of the European Conf. on Digital Libraries, 1999.
[2] S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7):107-117, 1998.
[3] A. Broder, R. Kumar, F. Maghoul, P. Raghavan, and R. Stata. Graph structure in the Web. In Proceedings of the 9th International World Wide Web Conference, pages 247-256, 2000.
[4] E. Cohen. Size-estimation framework with applications to transitive closure and reachability. Journal of Computer and System Sciences, 55(3):441-453, December 1997.
[5] Cook and Holder. Graph-based data mining. IEEE Intelligent Systems & their Applications, 15, 2000.
[6] CORA search engine. http://www.cora.whizbang.com.
[7] P. Domingos and M. Richardson. Mining the network value of customers. In KDD-2001, pages 57-66, 2001.
[8] M. Faloutsos, P. Faloutsos, and C. Faloutsos. On power-law relationships of the Internet topology. In SIGCOMM, 1999.
[9] P. Flajolet and G. N. Martin. Probabilistic counting algorithms for data base applications. Journal of Computer and System Sciences, 31:182-209, 1985.
[10] IMDB. http://www.imdb.com.
[11] A. Inokuchi, T. Washio, and H. Motoda. An apriori-based algorithm for mining frequent substructures from graph data. In PKDD, pages 13-23, 2000.
[12] http://cs.bell-labs.com/who/ches/map/.
[13] S. R. Kumar, P. Raghavan, S. Rajagopalan, D. Sivakumar, A. Tomkins, and E. Upfal. The Web as a graph. In ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pages 1-10, 2000.
[14] R. J. Lipton and J. F. Naughton. Estimating the size of generalized transitive closures. In Proceedings of the 15th International Conference on Very Large Data Bases, pages 315-326, 1989.
[15] M. H. Nodine, M. T. Goodrich, and J. S. Vitter. Blocking for external graph searching. In Proc. ACM PODS Conference (PODS-93), pages 222-232, 1993.
[16] C. R. Palmer, G. Siganos, M. Faloutsos, C. Faloutsos, and P. Gibbons. The connectivity and fault-tolerance of the Internet topology. In Workshop on Network-Related Data Management (NRDM-2001), 2001.
[17] C. R. Palmer and J. G. Steffan. Generating network topologies that obey power laws. In IEEE Globecom 2000, 2000.
[18] G. Salton and M. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
[19] http://www.isi.edu/scan/mercator/maps.html.
[20] S. L. Tauro, C. Palmer, G. Siganos, and M. Faloutsos. A simple conceptual model for the Internet topology. In IEEE Globecom 2001, 2001.