In a bold attempt to tame the network analytic literature, I have boiled down network analytic projects to the following nine phases (a code sketch of the middle phases follows the list):
- Pose the question;
- Build the network G;
- Choose a set of subgraphs H;
- Design a measure on H in G;
- Choose an appropriate null-model, i.e., a suitable random graph model G';
- Measure significance of H with respect to G';
- Design a model M that evolves all significant structures in H;
- Design a process P on top of the network;
- Analyze the interplay between M and P.
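To make the middle phases more concrete, here is a minimal sketch of phases 4-6 in Python, assuming the networkx library and a simple z-score against the classic G(n,p) null-model; the helper significance, the sample size, and the z-score itself are my own illustration, not taken from any particular paper.

```python
import networkx as nx
import numpy as np

def significance(G, measure, n_samples=100, seed=0):
    """Z-score of measure(G) against a G(n,p) null-model with the
    same number of nodes and the same expected edge density."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    p = 2 * m / (n * (n - 1))  # edge density of the observed graph
    rng = np.random.default_rng(seed)
    observed = measure(G)
    null_values = [measure(nx.gnp_random_graph(n, p, seed=int(s)))
                   for s in rng.integers(0, 2**31, size=n_samples)]
    return (observed - np.mean(null_values)) / np.std(null_values)
```

Calling, e.g., significance(nx.karate_club_graph(), nx.average_clustering) then quantifies how far the clustering of the observed graph deviates from what the null-model predicts.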
You don't find this scheme in your favourite network analytic paper? Then let's check it against the classic papers of Watts and Strogatz and of Barabási and Albert.
A first example: Collective Dynamics of 'Small-World' Networks by D. J. Watts and S. H. Strogatz
- The authors start with the question of whether real-world networks are of the ordered, locally clustered type or of the purely random type.
- To check this question, they construct three networks: the known neural connections of the worm Caenorhabditis elegans; the collaboration network of actors that were co-cast in any film listed in the IMDB at that time; and the U.S. power grid.
- They actually define two sets of subgraphs: one contains all shortest paths in G; the other contains, for each node, its so-called ego-network, i.e., the set of edges to and between its neighbors.
- The measures they define are, respectively, the average length of these shortest paths and the clustering coefficient of each ego-network.
- The null-model they choose is the classic random graph model G(n,p).
- Although they do not formally assess the significance of the observed values, they compare them with those in the corresponding random graph. The difference between these values is so large that their significance is quite obvious.
- The small-world model captures the newly found significant structure, namely the combination of a high clustering coefficient and a low average path length. Moreover, the model has only one parameter p, the rewiring probability, which enables them to interpolate between a strongly ordered graph and an almost purely random one (see the code sketch after this list).
- They also define some processes on networks, namely a version of an iterated prisoner's dilemma, synchronization of coupled oscillators, and a simple voting model.
- In a last step, they analyze how the outcome of each of the three processes depends on the chosen parameter p.
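To see the core comparison in code, here is a minimal sketch, again assuming networkx; the sizes n and k and the rewiring probability p are illustrative values of mine, not those of the paper.

```python
import networkx as nx

n, k, p = 1000, 10, 0.1  # illustrative values, not those of the paper

# The small-world model: a ring lattice over n nodes, each node joined
# to its k nearest neighbours, with each edge rewired with probability p.
ws = nx.watts_strogatz_graph(n, k, p, seed=42)
# The G(n,p') null-model, matched to the same expected edge density.
er = nx.gnp_random_graph(n, k / (n - 1), seed=42)

for name, G in [("small-world", ws), ("G(n,p)", er)]:
    # average_shortest_path_length requires a connected graph, so we
    # restrict it to the largest connected component.
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print(f"{name:12s} C = {nx.average_clustering(G):.3f}  "
          f"L = {nx.average_shortest_path_length(giant):.2f}")
```

For small rewiring probabilities, the clustering coefficient C stays close to that of the ordered lattice while the average path length L drops almost to that of the random graph: exactly the small-world signature.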
A second example: Emergence of Scaling in Random Networks by A.-L. Barabási and R. Albert
- The authors pose the question of what kind of degree distribution we normally see in a real-world network.
- They use the same actor and power grid networks as Watts and Strogatz and add a network constructed from a partial crawl of the web at that time.
- The set of subgraphs they define is given by each vertex and its incident edges.
- The measure on this set is simply the number of edges in each subgraph, i.e., the degree of each vertex. This time it is aggregated over all vertices and viewed as a measure of the whole graph: the degree distribution.
- Again, the authors use G(n,p) as the null-model, and additionally the newly introduced small-world model.
- They compare the expected degree distributions in both null-models with the ones observed in the real networks, and it is obvious that these deviate significantly from each other (although this is not formally assessed).
- The preferential attachment model is then introduced to capture this newly found significant structure.
- In a second paper (Error and Attack Tolerance of Complex Networks) they define two processes on networks, namely the random removal of nodes (error, i.e., random failure) and the deliberate removal of the best-connected nodes (attack).
- They then analyze the behavior of these two processes with respect to the underlying graph model, namely the fully random graph from the G(n,p) model and the newly designed preferential attachment model (see the sketch after this list).
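Here is a minimal sketch of both the hub-dominated degree structure and the attack process, once more assuming networkx; the sizes, the removal fraction, and the helper attack are illustrative choices of mine, not those of the papers.

```python
import networkx as nx

n, m = 10000, 2  # illustrative sizes, not those of the papers

ba = nx.barabasi_albert_graph(n, m, seed=1)     # preferential attachment
er = nx.gnp_random_graph(n, 2 * m / n, seed=1)  # G(n,p), same mean degree

def attack(G, fraction=0.05):
    """Remove the best-connected nodes and return the relative size
    of the largest connected component that survives."""
    H = G.copy()
    hubs = sorted(H.degree, key=lambda pair: pair[1], reverse=True)
    H.remove_nodes_from(v for v, _ in hubs[:int(fraction * len(hubs))])
    return len(max(nx.connected_components(H), key=len)) / G.number_of_nodes()

for name, G in [("preferential attachment", ba), ("G(n,p)", er)]:
    print(f"{name:23s} max degree = {max(d for _, d in G.degree()):4d}, "
          f"giant component after 5% hub removal = {attack(G):.2f}")
```

Swapping the hubs for a uniformly random sample of nodes gives the error process; the preferential attachment graph typically survives random failures much better than the targeted attack, while the G(n,p) graph, having no pronounced hubs, responds similarly to both.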
You see that the phases capture these papers quite well. Of course, so far we restrict ourselves to network analytic projects on static networks. I am convinced that most of our network analytic pet methods, like centrality measures, clustering, network motifs, and position/role assignment, can be meaningfully described within this framework. Let me know your opinion!