Running .NET on Heroku


Since joining Heroku I’ve been wanting to get .NET running on the platform and I’m happy to report that we now have a reasonably workable Mono buildpack. My goal was to be able to take an ASP.NET MVC solution created with Visual Studio on Windows, create a Heroku app, run git push heroku master and have Heroku build, deploy and run the app using Mono and the XSP web server.

The result is based heavily on previous work by my colleague Brandur.

Getting started

To use the .NET buildpack, create an ASP.NET MVC 4 web site and enable NuGet package restore. There are then a few tweaks required to make the solution palatable to Mono and xbuild (struck-out issues have since been fixed in the buildpack and are no longer necessary):

Hopefully, we can get these obstacles eliminated through improvements to either Mono, NuGet or the buildpack.

Now, deploy to Heroku:

    $ heroku create
    $ heroku config:add BUILDPACK_URL=https://github.com/friism/heroku-buildpack-mono/
    $ git push heroku master

I’ve created a few samples that are known to work. TestingMono is an extremely simple MVC site with a background worker that logs a message once every second. To run the background worker, add a Procfile that defines the command worker: mono WorkerTest.exe and scale it to 1 with heroku scale worker=1. The other sample is forked from an AppHarbor sample and demonstrates simple use of a Heroku PostgreSQL database. Note that the connection string has to be read from the environment, not from Web.config as is usual in .NET. You can find the sample running here.
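Here’s a minimal sketch of what reading the connection string from the environment might look like. The DATABASE_URL format is what Heroku provides for Postgres; the Npgsql-style connection string and the parsing are just one way to go about it:

	// sketch: build a connection string from the DATABASE_URL environment
	// variable that Heroku sets, e.g. postgres://user:password@host:5432/dbname
	var databaseUrl = Environment.GetEnvironmentVariable("DATABASE_URL");
	var uri = new Uri(databaseUrl);
	var userInfo = uri.UserInfo.Split(':');
	var connectionString = string.Format(
		"Server={0};Port={1};Database={2};User Id={3};Password={4};",
		uri.Host, uri.Port, uri.AbsolutePath.TrimStart('/'),
		userInfo[0], userInfo[1]);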

Overview

Here’s what works:

  • Running ASP.NET MVC 4 on top of Mono 3.0.11 and XSP 3.0.11
  • NuGet package restore so you don’t have to include library dependencies in your repo
  • Caching of build output and incremental builds, including caching of already-downloaded NuGet packages
  • Running executables in background worker dynos

Here’s what needs love:

  • Insertion of config into appSettings in Web.config
  • Make more of the default Visual Studio templates work out of the box
  • Look for XSP replacement (likely nginx)

Also see the TODO in the README. Contributions are very welcome. I hope to cover how buildpack dependencies (Mono and XSP in this case) are generated in a future blog post.

And yes, I’m working on getting Visual Basic running.

Compressed string storage with NHibernate


This blog post demonstrates how to use an IUserType to make NHibernate compress strings before storing them. It also shows how to use an AttributeConvention to configure the relevant type mapping.

By compressing strings before storing them you can save storage space and potentially speed up your app because fewer bits are moved on and off physical storage. In this example, compression is done using the extremely fast LZ4 algorithm so as to not slow data storage and retrieval.

The downside to compressing strings stored in the database is that running ad-hoc SQL queries (such as mystring like '%foo%') is not possible.

Background

I was building an app that was downloading and storing lots of HTML and for convenience I was storing the HTML in a SQL Server database. SQL Server has no good way to compress stored data so the database files grew very quickly. This bugged me because most of the content would compress well. I was using Entity Framework and started casting around for ways to hook into how EF serializes data or for a way to create a custom string type that could handle the compression. Even with the EF6 pre-releases, I couldn’t find any such hooks.

NHibernate IUserType

So I migrated to NHibernate which lets you define custom datatypes and control how they’re stored in the database by implementing the IUserType interface. The best tutorial I’ve found for implementing IUserType is this one by Jacob Andersen. You can check out my full implementation of a compressed string IUserType on GitHub. The two most interesting methods are NullSafeGet() and NullSafeSet():

	public object NullSafeGet(IDataReader rs, string[] names, object owner)
	{
		// the column holds LZ4-compressed bytes; decompress back to a UTF-8 string
		var value = rs[names[0]] as byte[];
		if (value != null)
		{
			var deCompressor = LZ4DecompressorFactory.CreateNew();
			return Encoding.UTF8.GetString(deCompressor.Decompress(value));
		}

		return null;
	}

	public void NullSafeSet(IDbCommand cmd, object value, int index)
	{
		var parameter = (DbParameter)cmd.Parameters[index];

		if (value == null)
		{
			parameter.Value = DBNull.Value;
			return;
		}

		// compress the UTF-8 bytes of the string before storing
		var compressor = LZ4CompressorFactory.CreateNew();
		parameter.Value = compressor.Compress(Encoding.UTF8.GetBytes(value as string));
	}

The actual compression is done by LZ4Sharp which is a .NET implementation of the LZ4 compression algorithm. LZ4 is notable, not for compressing data a lot, but for compressing and uncompressing data extremely quickly. A single modern CPU core can LZ4-compress at up to 300 MB/s and uncompress much faster. This should minimize the overhead of compressing and uncompressing data as it enters and leaves the database.
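To get a feel for the LZ4Sharp API, here’s a roundtrip sketch using the same factory methods as the IUserType above:

	// compress and decompress a string with LZ4Sharp
	var compressor = LZ4CompressorFactory.CreateNew();
	var decompressor = LZ4DecompressorFactory.CreateNew();

	var original = Encoding.UTF8.GetBytes("lots of repetitive HTML markup");
	var compressed = compressor.Compress(original);
	var restored = Encoding.UTF8.GetString(decompressor.Decompress(compressed));
	// restored is now "lots of repetitive HTML markup" again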

For SqlTypes we use BinarySqlType(int.MaxValue):

	public SqlType[] SqlTypes
	{
		get { return new[] { new BinarySqlType(int.MaxValue) }; }
	}

This causes the type to be mapped to a varbinary(max) column in the database.

Mapping

To facilitate mapping, we’ll use an Attribute:

	[AttributeUsage(AttributeTargets.Property)]
	public class CompressedAttribute : Attribute
	{
	}

And an AttributeConvention for FluentNHibernate to use:

	public class CompressedAttributeConvention : AttributePropertyConvention
	{
		protected override void Apply(CompressedAttribute attribute, IPropertyInstance instance)
		{
			if (instance.Property.PropertyType != typeof(string))
			{
				throw new ArgumentException();
			}

			instance.CustomType(typeof(CompressedString));
		}
	}

Here’s how to use the convention with AutoMap:

	var autoMap = AutoMap.AssemblyOf<Entity>()
		.Where(x => typeof(Entity).IsAssignableFrom(x))
		.Conventions.Add(new CompressedAttributeConvention());

The full SessionFactory is on GitHub.

With this, we get nice, clean entity classes with strings that are automatically compressed when stored:

	public class Document : Entity
	{
		[Compressed]
		public virtual string Text { get; set; }
	}
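Saving and loading then looks like any other NHibernate code. Here’s a sketch (sessionFactory and the HTML string are stand-ins):

	// Text is compressed on save and decompressed on load without the
	// calling code knowing
	using (var session = sessionFactory.OpenSession())
	using (var transaction = session.BeginTransaction())
	{
		session.Save(new Document { Text = someLargeHtmlString });
		transaction.Commit();
	}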

Limitations

As mentioned in the introduction, you can’t do ad-hoc SQL queries because compressed strings are stored in the database as binary blobs. Querying with NHibernate is also somewhat limited. Doing document.Text == "foo" actually works because NHibernate runs "foo" through the compression. Queries that involve Contains() will (silently) not work, unfortunately. This is because NHibernate translates them to like queries, which won’t work with the compressed binary blobs. I haven’t looked into hooking into the query engine to fix this.
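To illustrate, here’s a sketch using the NHibernate.Linq extensions (session is an open ISession):

	// works: the "foo" constant is run through the compression before comparison
	var exactMatches = session.Query<Document>()
		.Where(d => d.Text == "foo")
		.ToList();

	// silently returns nothing: translated to a like query against the
	// compressed binary blob
	var substringMatches = session.Query<Document>()
		.Where(d => d.Text.Contains("foo"))
		.ToList();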

Danish state budget data


A couple of weeks ago, Peter Brodersen asked me whether I had made a tree-map visualization of the 2013 Danish state budget. Here it is. It’s on Many Eyes and requires Java (sorry). You can zoom in on individual spending areas by right-clicking on them:

MANY EYES SERVICE NO LONGER EXISTS

About the data

I started scraping and analyzing budget data at Ekstra Bladet in 2010. The goal was to find ways to help people understand how the Danish state uses its money and to let everyone rearrange and balance out the DKK 15 billion long-term deficit that was frequently cited in the run-up to the 2011 parliamentary election. We didn’t get around to this, unfortunately.

The Danish state burns through a lot of money, which is inherently interesting. The budget published online is also very detailed, which is great. Showing off the magnitude and detail in an interesting way turns out to be difficult though, and the best I’ve come up with is the Many Eyes tree-map.

To see if anyone can do a better job, I’m making all the underlying data available in a Google Fusion Table. The data is hierarchical with six levels of detail (this is also why the zoomable tree-map works fairly well). Here’s an example hierarchy, starting from the ministry using money (the Ministry of Employment), down to what the money was used for (salaries and benefits):

Beskæftigelsesministeriet
    Arbejdsmiljø
        Arbejdsmarkedets parters arbejdsmiljøindsats
            Videncenter
                Indtægtsdækket virksomhed
                    Lønninger / personaleomkostninger.

In the Fusion table data there’s a line with an amount for each level. That means that the same money shows up six times, once for each level in the hierarchy. To generate the tree-map, one would start with lines at line-level 5 (the most detailed) and use the ParentBudgetLine column to find the parent lines in the hierarchy. The C# code that accomplishes this is found here.
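For illustration, here’s a sketch of how such a walk up the hierarchy might look. The BudgetLine class and its LineLevel and Name properties are stand-ins for the Fusion table columns, not the actual names used in the real code:

	// start at the most detailed lines and walk up via ParentBudgetLine
	var byLinecode = lines.ToDictionary(l => l.Linecode);
	foreach (var leaf in lines.Where(l => l.LineLevel == 5))
	{
		var path = new List<string>();
		var current = leaf;
		while (current != null)
		{
			path.Insert(0, current.Name);
			current = current.ParentBudgetLine == null
				? null
				: byLinecode[current.ParentBudgetLine];
		}
		Console.WriteLine(string.Join(" > ", path));
	}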

The Fusion table contains data for budgets from 2003 to 2013. The “Year” column is the budget year that this line belongs to. “Linecode” is the code used in the budget. “CurrentYearBudget” is the budgeted amount for the year that this particular budget was published (ie. the projected spend in 2013 for the 2013 state budget). Year[1-3]Budget are the projected spends for the coming three years (ie. 2014-2016 for the 2013 budget). PreviousYear[1-2]Budget are the spends actually incurred for the previous two years (ie. 2011 and 2012 for the 2013 budget).

We have data for multiple years and comparing projected numbers in previous years with actual numbers in later years might yield interesting examples of departments going over budget and other irregularities.

Since we have data for multiple years, we can also visualize changes in spending for individual ministries over time. This turns out to be slightly less interesting than one might suspect because changing governments have a tendency to rename, create or close down ministries fairly often. Here’s a time-graph example:

MANY EYES SERVICE NO LONGER EXISTS

The source code that parses the budget and outputs it in various ways can be found on GitHub. The code was written on Ekstra Bladet’s dime.

Dedication: This blog post is dedicated to Aaron Swartz. Aaron committed suicide sometime around January 11th, 2013. He had many cares and labors, and one of them was making data more generally available to the public.

Tax records for Danish companies


This week, the Danish tax-authorities published an interface that lets you browse information on how much tax companies registered in Denmark are paying. I’ve written a scraper that has fetched all the records. I’ve published all 243,711 records as a Google Fusion Table that will let you explore and download the data. If you use this data for analysis or reporting, please credit Michael Friis, http://friism.com/. The scraper source code is also available if you’re interested.

UPDATE 1/9-12: Niels Teglsbo has exported the data from Google Fusion tables and created a convenient Excel Spreadsheet for download.

The bigger picture

Tax records for individuals (and companies presumably) used to be public in Denmark and still are in Norway and Sweden. If you’re in Denmark, you can probably head down to your local municipality, demand the old tax book and look up how much tax your grandpa paid in 1920. The municipality of Esbjerg publishes old records online in searchable form. Here’s a record of Carpenter N. Møller paying kr. 6.00 in taxes in 1892.

The Danish business lobby complained loudly when the move to publish current tax records was announced. I agree that the release of this information by a center-left government is an example of political demagoguery and that’s yucky, but apart from that, I don’t think there are any good reasons why this information should not be public. It’s also worth noting that publicly listed companies are already required to publish financial statements and non-public ones are required to submit yearly financials to the government which then helpfully resells them to anyone interested.

It’s good that this information is now completely public: Limited liability companies and the privileges and protections offered by these are an awesome invention. In return for those privileges, it’s fair for society to demand information about how a company is being run to see how those privileges are being put to use.

The authorities announced their intention to publish tax records in the summer of 2012 and it has apparently taken them 6 months to build a very limited interface on top of their database. The interface lets you look up individual companies by id (“CVR nummer”) or name and inspect their records. You have to know the name or id of any company that you’re interested in because there’s no way to browse or explore the data. Answering a simple question such as “Which company paid the most taxes in 2011?” is impossible using the interface.

Having said that, I think it’s great whenever governments release data and I commend the Danish tax authorities for making this data available. And even with very limited interfaces like this, it’s generally possible to scrape all data and analyze it in greater detail and that is what I’ve done.

So what’s in there

The tax data-set contains information on 243,711 companies. Note that this data does not contain the names and ids of all companies operating in Denmark in 2011. Some types of corporations (I/S corporations and sole proprietorships for example) have their profits taxed as personal income for the individuals that own them. That means they won’t show up in the data.

UPDATE 12/30-12: Magnus Bjerg pointed out that some companies are duplicated in the data. This seems to be the case at least for all (roughly 48) companies that pay tariffs for extraction of oil and gas. Here are some examples: Shell 1 and Shell 2 and Maersk 1 and Maersk 2. The numbers for these companies look very similar but are not exactly the same. The duplicated companies with different identifiers are likely due to Skat messing up CVR ids and SE ids. Additional details on SE ids can be found here. My guess is that Skat pulled standard taxes and fossil fuel taxes from two different registries and forgot to merge and check for duplicates.

Here are the Danish companies that reported the greatest profits in 2011. These companies also paid the most taxes:
  1. SHELL OLIE- OG GASUDVINDING DANMARK B.V. (HOLLAND), DANSK  FILIAL
  2. A/S Dansk Shell/Eksportvirksomhed
  3. A.P. MØLLER – MÆRSK A/S
  4. A.P.Møller – Mærsk A/S/ Oil & Gas Activity
  5. Novo A/S
Here are the companies that booked the greatest losses:
  1. FLSMIDTH & CO. A/S – lost kr. 1,537,929,000.00
  2. Sund og Bælt Holding A/S – lost kr. 1,443,935,000.00
  3. DONG ENERGY A/S – lost kr. 1,354,480,560.00
  4. TAKEDA A/S – lost kr. 786,286,000.00
  5. PFA HOLDING A/S – lost kr. 703,882,104.00
Here are companies that are reporting a lot of profit but paying few or no taxes:
  1. DONG ENERGY A/S – kr. 3,148,994,114.00 profit, kr. 0 tax
  2. TAKEDA A/S – kr. 745,424,000.00 profit, kr. 0 tax
  3. Rockwool International A/S – kr. 284,696,514.00 profit, kr. 0 tax
  4. COWI HOLDING A/S – kr. 177,272,657.00 profit, kr. 2,399,803.00 tax
  5. DANAHER TAX ADMINISTRATION ApS. – kr. 155,222,377.00 profit, kr. 0 tax

Benford’s law

Benford’s law states that numbers in many real-world sources of data are much more likely to start with the digit 1 (30% of numbers) than with the digit 9 (less than 5% of numbers). Here’s the frequency distribution of first-digits of the numbers for profits, losses and taxes as reported by Danish companies plotted against the frequencies predicted by Benford:

[Chart: first-digit frequencies of reported profits, losses and taxes vs. Benford’s law]

The digit distributions perfectly match those predicted by Benford’s law. That’s great news: If Danish companies were systematically doctoring their tax returns and coming up with fake profit numbers, then those numbers would likely be more uniformly distributed and wouldn’t match Benford’s predictions. This is because crooked accountants trying to come up with random-looking numbers will tend to choose numbers starting with digits like 9 too often and numbers starting with the digit 1 too rarely.
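For the curious, here’s roughly how such a check can be done. This is a sketch; the amounts array is a placeholder for the full set of profit, loss and tax figures:

	// compare observed first-digit frequencies with Benford's
	// prediction log10(1 + 1/d)
	var amounts = new[] { 1537929000m, 745424000m, 2399803m };
	var firstDigits = amounts
		.Select(a => a.ToString(CultureInfo.InvariantCulture)
			.First(c => c >= '1' && c <= '9'))
		.ToList();

	for (var d = 1; d <= 9; d++)
	{
		var observed = (double)firstDigits.Count(c => c == (char)('0' + d))
			/ firstDigits.Count;
		var predicted = Math.Log10(1.0 + 1.0 / d);
		Console.WriteLine("{0}: observed {1:P1}, Benford {2:P1}",
			d, observed, predicted);
	}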

UPDATE 12/30-12: It’s important to stress that the fact that the tax numbers conform to Benford’s law does not imply that companies always pay the taxes they are due. It does suggest, however, that Danish companies–as a rule–do not put made-up numbers on their tax returns.

Technical details

To scrape the tax website I found two ways to access tax information for a company:
  1. Access an individual company using the x query parameter for the CVR identifier: http://skat.dk/SKAT.aspx?oId=skattelister&x=29604274
  2. Spoof the POST request generated by the UpdatePanel that gets updated when you hit the “søg” (search) button

The former is the simplest approach, but the latter is preferable for a scraper because much less HTML is transferred from the server when updating the panel compared to requesting the page anew for each company.

To get details on a company, one has to know its identifier. Unfortunately there’s no authoritative list of CVR identifiers, although the government has promised to publish such a list in 2013. The contents of the entire Danish CVR register were leaked in 2011, so one could presumably harvest identifiers from that data. The most fool-proof method, though, is to just brute-force through all possible identifiers. CVR identifiers consist of 7 digits with an 8th checksum-digit. The process of computing the checksum is documented publicly. Here’s my implementation of the checksum computation. Please let me know if you think it’s wrong:

	private static int[] digitWeights = { 2, 7, 6, 5, 4, 3, 2 };

	public static int ToCvr(int serial)
	{
		var digits = serial.ToString().Select(x => int.Parse(x.ToString()));
		// weighted sum of the seven serial digits, modulus 11
		var sum = digits.Select((x, y) => x * digitWeights[y]).Sum();
		var modulo = sum % 11;
		// a remainder of 1 would give check digit 10, which is invalid
		if (modulo == 1)
		{
			return -1;
		}
		// a remainder of 0 gives check digit 0
		if (modulo == 0)
		{
			modulo = 11;
		}
		var checkDigit = 11 - modulo;
		return serial * 10 + checkDigit;
	}

My guess is that the lowest serial (without the checksum) is 1,000,000 because that’s the lowest serial that will yield an 8-digit identifier. The largest serial is likely 9,999,999. I could be wrong though, so if you have any insights please let me know. Roughly one in eleven serials is discarded because the checksum would be 10, which is invalid. That leaves about 8 million identifiers to be tried. It’s wasteful to have to submit 8 million requests to get records for a couple of hundred thousand companies, but one can hope that 8 million requests will get the government’s attention and that they’ll start publishing data more efficiently.
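Putting it together, the brute-force enumeration is just a loop (sketch):

	// enumerate all plausible serials; ToCvr returns -1 for the roughly
	// one-in-eleven serials whose check digit would be the invalid 10
	for (var serial = 1000000; serial <= 9999999; serial++)
	{
		var cvr = ToCvr(serial);
		if (cvr == -1)
		{
			continue;
		}
		// request http://skat.dk/SKAT.aspx?oId=skattelister&x={cvr}
		// (or spoof the UpdatePanel POST) and parse the response
	}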

Screen scraping with WatiN


This post describes how to use WatiN to screen scrape web sites that don’t want to be scraped. WatiN is generally used to instrument browsers to perform integration testing of web applications, but it works great for scraping too.

Screen scraping websites can range in difficulty from very easy to extremely hard. When encountering hard-to-scrape sites, the typical cause of difficulty is fumbling incompetence on the part of the people that built the site to be scraped. Every once in a while, however, you’ll encounter a site openly displaying data to the casual browser, but with measures in place to prevent automatic scraping of that data.

The Danish Patent and Trademark Office is one such site. The people there maintain a searchable database that lets you search and peruse Danish and international patents. Unfortunately, computers are not allowed. If one tries to issue an HTTP POST to the resource that generally performs searches and shows patents, an error is returned. If one emulates visiting the site with a real browser by providing a browser-looking User-Agent setting, collecting cookies etc. (for example by using a tool like SimpleBrowser), the site sends a made-up 999 HTTP response code and the message “No Hacking”.

Faced with such an obstruction, there are two avenues of attack:

  1. Break out Wireshark or Fiddler and spend a lot of time figuring out what it takes to fabricate requests that fool the site into thinking they originate from a normal browser and not from your bot
  2. Instrument an actual browser so that the site will have no way (other than timing analysis and IP address request rate limiting) of knowing whether requests are from a bot or from a normal client

The second option turns out to be really easy because people have spent lots of time building tools for automatically testing web applications using full browsers, tools like WatiN. For example, successfully scraping the Danish Patent Authorities site using WatiN is as simple as this:

private static void GetPatentsInYear(int year)
{
	using (var browser = new IE("http://onlineweb.dkpto.dk/pvsonline/Patent"))
	{
		// go to the search form
		browser.Button(Find.ByName("menu")).ClickNoWait();

		// fill out search form and submit
		browser.CheckBox(Find.ByName("brugsmodel")).Click();
		browser.SelectList(Find.ByName("datotype")).Select("Patent/reg. dato");
		browser.TextField(Find.ByName("dato")).Value = string.Format("{0}*", year);
		browser.Button(Find.By("type", "submit")).ClickNoWait();
		browser.WaitForComplete();

		// go to first patent found in search result and save it
		browser.Buttons.Filter(Find.ByValue("Vis")).First().Click();
		GetPatentFromPage(browser, year);

		// hit the 'next' button until it's no longer there
		while (GetNextPatentButton(browser).Exists)
		{
			GetNextPatentButton(browser).Click();
			GetPatentFromPage(browser, year);
		}
	}
}

private static Button GetNextPatentButton(IE browser)
{
	return browser.Button(button =>
		button.Value == "Næste" && button.ClassName == "knapanden");
}
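GetPatentFromPage isn’t shown above. As a hypothetical stand-in (the real implementation is in the GitHub repo), it might just persist the page HTML for later parsing:

	// hypothetical: dump the raw page HTML to disk
	private static void GetPatentFromPage(IE browser, int year)
	{
		var fileName = string.Format("patent-{0}-{1:N}.html", year, Guid.NewGuid());
		File.WriteAllText(fileName, browser.Html);
	}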

Note that in this example, we’re using Internet Explorer because it’s the easiest to set up and use (WatiN also works with Firefox, but only older versions). There’s definitely room for improvement; in particular, it’d be interesting to explore parallelizing the scraper to download patents faster. The – still incomplete – project source code is available on GitHub. I’ll do a post shortly on what interesting data can be extracted from Danish patents.

Raw updated data on Danish business leader groups


Last summer, I published data on the members of Danish business leader groups, obtained with code written while I was still at Ekstra Bladet. I’ve cleaned up the code and removed the parts that fetched celebrities from various other obscure sources. You can fork the project on GitHub.

The code is fairly straightforward. The scraper itself is less than 150 lines of code. The scraper is configured to run in a background worker on AppHarbor and will conduct a scrape once a month (I don’t know how often the VL-people update their website, but monthly updates seem sufficient to keep track of comings and goings). The resulting data can be fetched using a simple JSON API. You can find a list of scraped member-batches here (there’s just one at the time of writing). Hitting http://vlgroups.apphb.com/Member will always net you the latest batch.
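Fetching the latest batch from your own code amounts to something like this sketch (the Member class is whatever shape you deserialize into):

	// pull the latest member batch from the JSON API
	using (var client = new WebClient())
	{
		var json = client.DownloadString("http://vlgroups.apphb.com/Member");
		// deserialize with your JSON library of choice, e.g. Json.NET:
		// var members = JsonConvert.DeserializeObject<List<Member>>(json);
	}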

I was motivated to revisit the code after this week’s dethroning of Anders Eldrup from his position as CEO of Dong Energy. Anders Eldrup sits in VL-gruppe 1, the most prestigious one. Let’s see if he’s still there next time the scraper looks. 14 other Dong Energy executives are members of other groups, although interestingly, Jakob Baruël Poulsen (Eldrup’s handsomely rewarded sidekick) is nowhere to be found. I think data like this is an important piece of the puzzle in figuring out what relations exist between business leaders in Denmark, and the Anders Eldrup debacle demonstrates why keeping track is important.

Nordic Newshacker


The excellent people at the Danish newspaper Information are hosting a competition to promote data journalism. It’s called “Nordisk Nyhedshacker 2012”. Data journalism was what I spent some of my time at Ekstra Bladet doing, and the organizers have been kind enough to put me on the jury. The winner will get a scholarship to go work at The Guardian for a month, sponsored by Google. Frankly, I’d prefer working at Information, but I guess The Guardian will do. If you’re a journalist that can hack or if you’re a hacker interested in using your craft to make people more informed about the world we live in, you should use this opportunity to come up with something interesting and be recognized for it.

Hopefully, you already have awesome ideas for what to build. Should you need some inspiration, here are a few interesting pieces of data you might want to consider (projects using this data will not be judged differently than others).

  • Examine the US Embassy Cables released by Wikileaks. I’ve tried to filter out the ones related to Denmark.
  • Examine the power relationships of members of Danish business leader groups. I have extracted the membership info from their web site. It’d be extra interesting if you combine this information with data about who sits on the boards of big Danish companies, perhaps to make the beginnings of something like LittleSis so that we can keep track of what favours those in power are doing each other.
  • Do something interesting with the CVR database of Danish companies that was leaked on The Pirate Bay last year.
  • Ekstra Bladet has been kind enough to let me open source the code for the award-winning Krimikort (Crime Map) I built while working there. It’s not quite ready to be released yet, but we’re making the current data available now. There are 62,753 nuggets of geo-located and categorised crime ready for you to look at. You can download a rar file (50 MB) here. To use the data, you have to get a free copy of SQL Server Express and mount the database (Google will tell you how).

I’m afraid I won’t be able be participate in many of the activities preceding the actual competition but I can’t wait to see what people come up with!

US Embassy Cables Related to Denmark


As you may know, Wikileaks has released the full, un-redacted database of US Embassy cables. A torrent file useful for downloading all the data is available from Wikileaks, at the bottom of this page. It’s a PostgreSQL data dump. Danish journalists seem to be completely occupied producing vacuous election coverage, so to help out, I’ve filtered out the Denmark-related cables and am making them available as Google Spreadsheets/Fusion Tables.

The first set (link) consists of 146 cables from the US Embassy in Copenhagen, with all the “UNCLASSIFIED” ones filtered out (since they are typically trivial, if entertaining in their triviality). Here’s the query:

copy (
	select * 
	from cable 
	where origin = 'Embassy Copenhagen' 
		and classification not like '%UNCLASSIFIED%'
	order by date desc)
to 'C:/data/cph_embassy_confidential.csv' with csv header

The second set (link), 1,438 rows in all, contains cables that mention either “Denmark” or “Danish”, are from embassies other than the one in Copenhagen and are not “UNCLASSIFIED”. Query:


copy (
	select * 
	from cable 
	where origin != 'Embassy Copenhagen' 
		and classification not like '%UNCLASSIFIED%'
 		and (
 			content like '%Danish%' or
 			content like '%Denmark%'
 		)
	order by date desc
)
to 'C:/data/not_cph_embassy_confidential.csv' 
	with csv header 
	force quote content
	escape '"'

Members of Danish VL Groups


Denmark has a semi-formalised system of VL-groups. “VL” is short for “Virksomhedsleder” which translates to “business leader”. The groups select their own members, and the whole thing is organised by the Danish Society for Business Leadership. The groups are not composed only of business people — top civil servants and politicians are also members. The groups meet up regularly to talk about whatever people from those walks of life talk about when they get together.

Before doing what I currently do, I worked for Ekstra Bladet, a Danish tabloid. Other than giving Danes their daily dose of sordidness, Ekstra Bladet spends a lot of time holding Denmark’s high’n-mighty to account. To that end, I worked on building a database of influential people and celebrities so that we could automatically track when their names crop up in court documents and other official filings (scared yet, are we?). The VL-group members obviously belong in this database. Fortuitously, group membership is published online and is easily scraped.

In case you are interested, I’ve created a Google Docs Spreadsheet with the composition of the groups as of August 2011.  I’ve included only groups in Denmark proper — there are also overseas groups for Danish expatriates and groups operating in the Danish North Atlantic colonies. The spreadsheet (3320 members in all) is embedded at the bottom of this post.

Now, with this list in hand, any well-trained Ekstra Bladet employee will be brainstorming what sort of other outrage can be manufactured from the group membership data. How about looking at the gender distribution of the members? (At this point I’d like to add a disclaimer: I personally don’t care whether the VL-groups are composed primarily of men, women or transgendered garden gnomes so I dedicate the following to Trine Maria Kristensen. Also, an Ekstra Bladet journalist wrote this story up some months after I left, but I wanted to make the underlying data available).

To determine the gender of each group member, I used the Department of Family Affairs’ lists of approved boys’ and girls’ given names (yes, the Socialist People’s Kingdom of Denmark gives parents lists of pre-approved names to choose from when naming their children). Some of the names are ambiguous (eg. Kim and Bo are permitted for both boys and girls). For these names, the gender determination chooses what I deem to be the most common gender for that name in Denmark.
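In code, the lookup amounts to something like this sketch. The name sets would be loaded from the department’s lists, and the overrides for ambiguous names shown here are illustrative:

	// guess gender from the approved given-name lists; ambiguous names
	// fall back to a hand-picked "most common in Denmark" override
	static readonly Dictionary<string, string> ambiguousOverrides =
		new Dictionary<string, string> { { "Kim", "M" }, { "Bo", "M" } };

	static string GuessGender(string fullName,
		HashSet<string> maleNames, HashSet<string> femaleNames)
	{
		var firstName = fullName.Split(' ').First();
		string gender;
		if (ambiguousOverrides.TryGetValue(firstName, out gender))
		{
			return gender;
		}
		if (maleNames.Contains(firstName)) return "M";
		if (femaleNames.Contains(firstName)) return "F";
		return "Unknown";
	}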

Overall, there are 505 females out of 3320 group members (15.2%). 8 of the 95 groups have no women at all (groups 25, 28, 52, 61, 63, 69, 104 and 115). 12 groups include a single woman, while 6 have two. There is also a single all-female troupe, VL Group 107.

Please take advantage of the data below to come up with other interesting analysis of the group compositions.

Non-trivial Facebook FQL example


This post will demonstrate a few non-trivial FQL calls from Javascript, including batching interdependent queries in one request. The example queries all participants of a public Facebook event and gets their names and any public status updates they’ve posted recently. It then goes on to find all friend-relations between the event participants and graphs those with an InfoVis Hypertree. I haven’t spent time on browser-compatibility in result-rendering (sorry!), but the actual queries work fine across browsers. You can try out the example here. The network-graph more-or-less only works in Google Chrome.

The demo was created for a session I did with Filip Wahlberg at the New Media Days conference. The session was called “Hack it, Mash it” and involved us showing off some of the stuff we do at ekstrabladet.dk and then demonstrating what sort of info can be pulled from Facebook. Amanda Cox was on the next morning and pretty much obliterated us with all the great interactive visualizations the New York Times builds, but that was all right.

Anyway, on to the code. Here are the three queries:

var eventquery = FB.Data.query(
	'select uid from event_member ' +
	'where rsvp_status in ("attending", "unsure") ' +
		'and eid = 113151515408991 '
);

var userquery = FB.Data.query(
	'select uid, name from user ' +
	'where uid in ' +
		'(select uid from {0})', eventquery
);

var streamquery = FB.Data.query(
	'select source_id, message from stream ' +
	'where ' +
	'updated_time > "2010-11-04" and ' +
	'source_id in ' +
		'(select uid from {0}) ' +
	'limit 1000 '
	, eventquery
);

FB.Data.waitOn([eventquery, userquery, streamquery],
	function () {
		// do something interesting with the data
	}
);

Once the function passed to waitOn executes, all the queries have executed and results are available. The neat thing is that FB.Data bundles the queries so that, even though the last two queries depend on the result of the first one to execute, the browser only does one request. Facebook limits the number of results returned from queries on the stream table (which stores status updates and similar). Passing a clause on ‘updated_time’ seems to arbitrarily increase this number.

So now that we have the uids of all the attendees, how do we get the friend-relations between those Facebook users? Generally, Facebook won’t give you the friends of a random user without your app first getting permission from said user. Facebook will, though, tell you whether any two users are friends, and this is done by querying the friend table. So I wrote this little query which handily gets all the relations in a set of uids. Assume you’ve stored all the uids in an array:

var uidstring = uids.join(",");
var friendquery = FB.Data.query(
	'select uid1, uid2 ' +
	'from friend ' +
	'where uid1 in ({0}) and uid2 in ({0})'
	, uidstring
);

FB.Data.waitOn([friendquery], function () {
	// do something with the relations, like draw a graph
});

Neat huh? The full script can be found here: http://friism.com/wp-content/uploads/2010/11/nmdscript.js
