Saturday, June 18, 2011

Arts, Humanities, and Complex Networks at NETSCI 2011 – Fueling my scientific batteries



Due to the generosity of various sponsors, the AHCN workshop did not have a registration fee – but since the location was limited in size, participants were asked to register for a cost-free ticket. On the evening before the workshop I found out that I was 83rd on the waiting list for such a ticket! Nonetheless, I just went there the next morning and was lucky that some of the registered participants had not shown up. But boy, they really missed something!

I had very high expectations of this workshop, since a similar one at ECCS’10 (also organized by Maximilian Schich) had already been splendid – but all my expectations were met and even exceeded. The workshop took place in the very modern building of the Ludwig Museum, directly on the Danube. Everything was splendidly and most generously organized: no fee, but an extraordinarily good caterer with a plentiful supply of coffee and pogácsa (small Hungarian bread rolls), and the most inspiring talks I heard at the whole conference. Do I sound enthusiastic? I definitely am! The broad range of topics and the enthusiasm of the many people who were just starting to explore the possibilities of network analysis for their fields were simply energizing. I’ll give you an overview of the topics and some of the (subjectively) most interesting points.

The first keynote was given by Marek Claassen, on how to automatically score artists. Of course, there is the obvious way of simply summing up the money paid for their works, but Claassen proposed a more detailed model in which fame and recognition are quantified. The main idea is that an exhibition, especially a solo exhibition at a renowned institution, gives more attention to the artist, and that this attention works like a currency. Of course, such a talk raises a lot of questions and even emotions, but it was an interesting start to a workshop that focuses on exactly this frontier: are we able to explore human culture by quantifying the statistical traces it leaves in our digital, virtual world?
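To make the idea concrete, here is a minimal sketch of such an exhibition-as-currency score. The categories and weights are my own invention for illustration, not Claassen's actual model:

```python
# Toy version of an exhibition-based artist score: exhibitions act as a
# currency of attention. Weights and categories are made-up assumptions.
EXHIBITION_WEIGHTS = {
    ("solo", "renowned"): 10.0,
    ("solo", "ordinary"): 4.0,
    ("group", "renowned"): 3.0,
    ("group", "ordinary"): 1.0,
}

def artist_score(exhibitions):
    """Sum attention 'currency' over an artist's exhibition history.

    exhibitions: iterable of (kind, venue_rank) tuples,
    e.g. [("solo", "renowned"), ("group", "ordinary")].
    """
    return sum(EXHIBITION_WEIGHTS.get(e, 0.0) for e in exhibitions)

print(artist_score([("solo", "renowned"), ("group", "ordinary"),
                    ("group", "renowned")]))  # -> 14.0
```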

Next, Tom Brughmans showed his enthusiasm for using network analysis in archeology: he links various sites by the types of artifacts found at them. This research is still young, and it is not yet clear what kinds of questions can be answered with it – which makes it all the more interesting for network analysts.
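One common way to build such a site network – a minimal sketch, not necessarily his exact construction – is a bipartite projection: sites and artifact types form a two-mode network, which is then projected onto the sites. Here with networkx and made-up finds:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical data: which artifact types were found at which sites.
finds = {
    "Site A": {"amphora", "fibula", "coin"},
    "Site B": {"amphora", "coin"},
    "Site C": {"fibula"},
}

# Build the bipartite site-artifact network ...
B = nx.Graph()
for site, artifacts in finds.items():
    for artifact in artifacts:
        B.add_edge(site, artifact)

# ... and project it onto the sites: two sites are linked by an edge
# weighted with the number of artifact types they share.
sites = bipartite.weighted_projected_graph(B, list(finds))
for u, v, d in sites.edges(data=True):
    print(u, v, d["weight"])
```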

An absolute highlight was the keynote talk by Jean-Baptiste Michel about Culturomics [1][2]. So he and his co-authors wondered: hey, within ten years we can either read a few books very closely – or we can read 5 million books very superficially – let’s find out how much we can do with the latter! They used 5 million digitized books from Google Books, which amounts to approximately 4% of all books ever published. In these books they looked at the probability that an irregular verb becomes regularized, the time it takes until a new invention like the car, the telephone, or the washing machine makes it into a book, and which profession is the most visible and the one most persistently found in books. The result of the latter is clear: if you’re looking for fame you need to become a politician, but never a mathematician! I was astonished by how far their approach took them – an amazing talk. I’m sure we will hear more from him and his co-authors.
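The underlying machinery is simple: count how often a word appears per year and normalize by the yearly total. A toy sketch with made-up counts (the real data are the Google Books n-grams); 'burnt' vs. 'burned' stands in for the regularization of an irregular form:

```python
from collections import defaultdict

# Minimal culturomics-style sketch: relative frequency of a 1-gram per
# year. The counts below are invented; real input would be n-gram data.
rows = [
    ("burnt", 1900, 120), ("burned", 1900, 80),
    ("burnt", 2000, 40),  ("burned", 2000, 260),
]
totals = {1900: 10_000, 2000: 12_000}  # total words per year (made up)

freq = defaultdict(dict)
for word, year, count in rows:
    freq[word][year] = count / totals[year]

# Regularization of an irregular form: 'burned' overtakes 'burnt'.
for year in sorted(totals):
    print(year, f"burnt={freq['burnt'][year]:.4f}",
          f"burned={freq['burned'][year]:.4f}")
```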

Even after this really great talk, the next invited speaker, Nathalie Henry Riche, had no problem switching our focus to a totally different but equally fascinating topic: the visualization of large complex networks. She has developed several software tools that all look very interesting: during her PhD she explored various ways to display graphs as adjacency matrices, or to integrate adjacency matrices (e.g., of dense groups) with a ‘normal’ 2D embedding of the graph. Within the adjacency matrix approach she had, for example, the idea of folding parts of the matrix away so that you can compare distant rows or columns with each other without losing track of what belongs where. Quite a cool idea! Her homepage does not directly link to downloads of her software, but she mentioned that on request there might be a way to get it.
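To get a feeling for what the matrix view buys you, here is a minimal sketch with networkx and matplotlib; the random graph and the group ordering are my own toy assumptions, not her tools:

```python
import matplotlib.pyplot as plt
import networkx as nx

# Adjacency-matrix view of a graph with planted groups: ordering the
# rows/columns by group makes dense communities visible as blocks.
G = nx.planted_partition_graph(3, 10, p_in=0.8, p_out=0.05, seed=1)
order = sorted(G.nodes())  # nodes are already numbered group by group
A = nx.to_numpy_array(G, nodelist=order)

plt.spy(A)  # plot the nonzero pattern of the adjacency matrix
plt.title("Adjacency matrix, nodes ordered by group")
plt.show()
```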

The next talk that really grabbed my attention was by Robin Wilkins, who presented her and her co-authors' work on how the brain reacts to different types of music. For each person in the experiment, the experimenters knew which song they loved most and which type of music they liked in general. The scientists put together a long recording of various songs plus the favorite song of the respective test person. Looking at the working brain, it became clear that the brain reacts totally differently to songs it likes than to those it does not understand (music from a different culture) or those it does not like. In particular, Wilkins et al. looked at how well the different parts of the brain, e.g., those responsible for hearing or those concerned with emotion and memory, were synchronized during the different songs.
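Just to illustrate what "synchronization between brain regions" can mean technically – a standard correlation-based proxy, not necessarily their exact method – here is a toy sketch with made-up regional activity time series:

```python
import numpy as np

# Toy "functional connectivity": correlate the (invented) activity
# time series of two brain regions recorded during one song.
rng = np.random.default_rng(0)
hearing = rng.standard_normal(200)
emotion = 0.7 * hearing + 0.3 * rng.standard_normal(200)  # partly synced

corr = np.corrcoef(hearing, emotion)[0, 1]
print(f"correlation during this song: {corr:.2f}")
```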

In summary: all of the talks were full of enthusiasm for bringing together the arts, the humanities, and network analysis. It was just very refreshing to feel this enthusiasm, and next time I’ll make sure to register for Max’s great workshops at the earliest opportunity. So please keep up the good work, Max!

Thursday, June 9, 2011

Circuits of profits – a satellite event of NetSci 2011

Maven7 managed to create an open atmosphere, attracting a good mixture of business managers, business consultants, and scientists in the well-equipped building of the Central European University. The density of ties and suits was much higher than at a normal physics or computer science conference – which also guaranteed a better coffee and cake selection: the caterer was from the famous Gundel restaurant.

The keynote speech was given by Albert-László Barabási, who summarized the aspects of the evolution of network analysis that might be most influential for the economy: he took us on a journey through time, beaming us through the preferential attachment model and its close cousin, the fitness model. He pointed out a clear connection to markets: it is not always the first mover in a market that gets big; if you have a genuinely new function to provide, you can still win the market even if you come late. It is known that the fitness model can lead to a Bose–Einstein condensate if one of the nodes has an extremely high fitness, i.e., even a latecomer can grab a big share of all subsequent customers: in the evolution of such a network, a constant percentage of all edges attaches to this one node, while in the plain preferential attachment model the maximal degree has a shrinking relative share of all edges. Barabási argued that the operating system market might essentially be of this fitness type, since Microsoft has had a market share of 85% for the last two decades.

I’m not sure whether this model holds, since this network is fundamentally different from the preferential attachment model: most nodes in the operating system network are normal users, not companies producing and selling operating systems. Thus, a new node is much more likely to be a user, and she can only choose from those producers that are already in the market. It would certainly be interesting to test whether the model applies to this slightly different situation. Barabási also briefly mentioned the link community algorithm by Sune Lehmann, which, in my personal view, is one of the best ideas about clustering in networks in years. The link points to the enormous supplemental material presented along with the article.
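For the curious, here is a minimal simulation of the fitness (Bianconi–Barabási) mechanism he described: new nodes attach with probability proportional to degree times fitness. All parameters are made up for illustration:

```python
import random

# Bianconi-Barabási fitness model: one node arrives late but with an
# exceptionally high fitness and still grabs a large, roughly constant
# share of all edges -- the Bose-Einstein-condensate regime.
random.seed(42)

fitness = [1.0, 1.0]  # two initial nodes ...
degree = [1, 1]       # ... connected by one edge

for t in range(2, 10_000):
    # node 50 arrives late but with exceptional fitness
    fitness.append(50.0 if t == 50 else random.random())
    # attach the newcomer with probability ~ degree * fitness
    weights = [d * f for d, f in zip(degree, fitness[:-1])]
    target = random.choices(range(len(weights)), weights=weights)[0]
    degree[target] += 1
    degree.append(1)

share = degree[50] / sum(degree)
print(f"late high-fitness node holds {share:.1%} of all edge endpoints")
```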


The next invited speaker was Hamid Benbrahim, who made a very interesting point: he introduced the “sigma–delta disconnect”, namely that we either know a lot about aggregates (sums, represented by the sigma) or about the differences between things (represented by the delta). In essence, he argued that there is a gap in our understanding of the macro and the micro level of our complex financial markets. He also pointed out that, thanks to new technology, markets are now much more synchronized than they were 10–20 years ago, because any small arbitrage opportunity between markets is quickly identified and can be harnessed within milliseconds – virtually at the speed of light. He also showed a case in which two software agents created a small crash in the price of a stock because the first one wanted to sell a huge chunk of it. To avoid depressing the price, the agent is of course smart enough to offer it in small portions – however, the second agent is programmed to detect behavior like this; it guessed that a large chunk was going to be sold and bet against the price. This led to the predicted decrease of the price, until the process was finally stopped. After a five-second break, other agents realized that the stock was undervalued and started buying it, and after around an hour the price was back to normal. This shows how the modern trading system can fall into positive feedback loops that push the system out of its equilibrium position.
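Purely to illustrate the shape of that feedback loop – this is an entirely made-up toy, not the actual agents or any real market data – a few lines reproduce the qualitative pattern: steady slicing of a big order, a pattern-detecting agent amplifying the drop, and value buyers pulling the price back:

```python
# Toy positive-feedback loop between trading agents (all numbers invented).
price = 100.0
history = []

for step in range(120):
    price -= 0.05  # seller slices a big order into small, steady sales
    if len(history) >= 5 and all(h > price for h in history[-5:]):
        price -= 0.40  # second agent detects the pattern, bets against it
    if price < 85.0:
        price += 1.0   # value-seeking agents buy the dip
    history.append(price)

print(f"minimum price: {min(history):.2f}, final price: {history[-1]:.2f}")
```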


Alberto Calero made the interesting remark that Gartner's report on the top ten technology trends for 2011 contained three key technologies related to network analysis: social communication and collaboration, social analytics, and next-generation analytics. He also shared his experience of how to convince fellow CEOs that network analysis can help in business decisions: he did not really disclose the details, but it became clear that network analysis was a prominent part of advertising the new services made available by mobile phones in the ’90s, such as SMS. He also reported on a new campaign in which customers are rewarded in groups, and he emphasized that attracting groups of people to a mobile phone provider might become much more important than attracting single customers. This was of course also a big topic in the later talks, which reported on different churn models and strategies to prevent churn.


Finally, Valdis Krebs spoke on how to communicate network analysis results. He reported on his experience with various case studies and emphasized that we need new measures and new ways to report them. One of his customers once showed him a simple chart with a curve: the curve goes up – everything is fine. The curve stalls – watch out! The curve goes down – not good at all. The curve goes up again – you’re back in business. The customer asked whether it was possible to turn network-analytic results into something as simple as that. A first step in this direction seems to be to replace all occurrences of ‘social network analysis’ with ‘organization network analysis’. Valdis’ answer to that request is, for example, not to show networks, but to show a Venn diagram representation of an embedded and clustered network instead, since that is more digestible. He also emphasized developing a standardized procedure, using only a few standardized programs, and allowing for the transfer of best practices. In general, his advice is: make it simple.

This last point was basically the outcome of many of the later talks as well: do not make a research project out of every case study, but communicate your results in a simple and short manner. In the round table discussion (make that ‘round chair discussion’), the invited speakers agreed that we need to follow a schematized, standardized way of producing network analysis reports on economic networks – be they communication, trust, or inspiration networks. Maven7 is helping with that by promoting their first software tool, aimed at consultants who want to add network analysis to their repertoire. A second main point was that maybe we focus too much on single nodes and their position in a network. Helping a company should not boil down to “try to connect these three people more”, but rather to creating an atmosphere in which positive network structures can emerge.