The gist I'm getting is that the methodology/formula used by Mann would produce the 'hockey stick effect' (whatever that is) no matter what data it was applied to. This, as far as I can tell, seems to be acknowledged by everyone.
I'm not sure this is the case. The 'randomised data' referred to was indeed 'noise' but with characteristics which mimic real climate data... see e.g.
http://deepclimate.org/2010/11/16/replication-and-due-diligence-wegman-style/
Obviously the author knows which side of the debate he's on :-) but it's a pretty detailed critique of these claims. Read all of it, and the comments too...
"Finally, I’ll return to the central claim of Wegman et al – that McIntyre and McKitrick had shown that Michael Mann’s “short-centred” principal component analysis would mine “hockey sticks”, even from low-order, low-correlation “red noise” proxies . But both the source code and the hard-wired “hockey stick” figures clearly confirm what physicist David Ritson pointed out more than four years ago, namely that McIntyre and McKitrick’s “compelling” result was in fact based on a highly questionable procedure that generated null proxies with very high auto-correlation and persistence."
In other words, the 'random noise' wasn't the innocuous low-correlation red noise it was presented as: it was (deliberately?) generated with very high auto-correlation and persistence, which is exactly the sort of input that tends to come out as a 'hockey stick' shape when analysed the way Mann analysed the real data.
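To make the statistical point concrete, here's a rough sketch of the effect being argued over (my own, in Python/numpy; it is not Mann's code nor McIntyre & McKitrick's, and the series lengths, the AR(1) coefficient phi and the 100-year 'calibration' window are purely illustrative numbers). It generates AR(1) 'red noise' pseudo-proxies, runs PCA with either conventional full centring or MBH-style 'short centring' on the last part of the record, and measures how 'hockey-stick-like' the leading PC comes out.

import numpy as np

rng = np.random.default_rng(0)

def ar1_noise(n_series, n_years, phi):
    # n_series independent AR(1) 'red noise' pseudo-proxies
    x = np.zeros((n_series, n_years))
    eps = rng.standard_normal((n_series, n_years))
    for t in range(1, n_years):
        x[:, t] = phi * x[:, t - 1] + eps[:, t]
    return x

def leading_pc(proxies, center_window=None):
    # Centre each proxy on its full-record mean (conventional PCA), or on
    # the mean of only the last `center_window` years ('short centring').
    if center_window is None:
        centred = proxies - proxies.mean(axis=1, keepdims=True)
    else:
        centred = proxies - proxies[:, -center_window:].mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[0]          # leading principal component as a time series

def hockey_stick_index(pc, blade):
    # How far the last `blade` years ('the blade') sit from the rest
    # ('the shaft'), in standard-deviation units; abs() handles sign flips.
    shaft, tip = pc[:-blade], pc[-blade:]
    return abs(tip.mean() - shaft.mean()) / shaft.std()

n_trials, n_series, n_years, window = 200, 50, 600, 100
for phi in (0.2, 0.9):                               # weak vs strong persistence
    for label, cw in (("full centring", None), ("short centring", window)):
        hsi = [hockey_stick_index(leading_pc(ar1_noise(n_series, n_years, phi), cw), window)
               for _ in range(n_trials)]
        print(f"phi={phi}  {label:15s}  mean hockey-stick index = {np.mean(hsi):.2f}")

The thing to look at is how the mean index changes between full and short centring, and between phi=0.2 and phi=0.9: short centring rewards series whose late-period mean drifts away from their long-term mean, and strongly persistent noise drifts far more often. That's why the dispute above is really about how persistent the null proxies were made to be, not just about whether short-centred PCA is biased in the abstract.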