
Friday 29 December 2006

Virtual Single Use Credit Cards

My bank used to offer plebs like me a very nifty "virtual" single use credit card via an application\service called O-Card from Orbiscom. It had stagnated a bit over the years and I must say I was getting a bit concerned that it hadn't been updated since 2002 or so, but I was still really disappointed when they told me in July that they were discontinuing it and that from now on Verified by Visa or somesuch would keep my credit card safe. Frankly I was not impressed - Verified by Visa is all fine and dandy but the model still requires me to send my actual credit card details over the net, and that is precisely the part that the O-Card sorted out. I could happily shop on www.wearaetotallydodgy.com and know that I had full control of the risk I was getting into.

The O-Card worked as follows:
  1. You get to a checkout on an online site that's asking for credit card details.
  2. Fire up the O-Card application and log in.
  3. Select the credit limit for the one time Credit Card.
  4. It gives you a credit card number with the same cardholder name and billing address as your real card, but with the following differences:
    1. It can only be used once. As soon as the vendor clears the transaction it can no longer be used for anything.
    2. It has a low credit limit - provided you chose to do this of course.
    3. The card number and CVV2 number are different from those on your real card.
    4. The issue date is the current month and the expiry date is next month.
  5. I give these details to the online vendor and my order clears.
  6. If they are evil and choose to try to reuse the number, or are unlucky and get hacked by some zero day sploit, or are stupid\inept and just let my details get stolen later, I don't care. In all cases the card number is useless.
  7. All I have to worry about is whether I get my stuff and my real credit card remains safe.
In general it took less time to get a card number from it than it took for me to get a credit card out of my wallet. Really sweet, and I'm sure you can tell that I was rightly pissed off when they chose to kill it off rather than beef up the security or do whatever deal it was that Orbiscom wanted in order to keep it alive.

In their defence the O-Card application model probably had some serious security problems, but since there hadn't been a single update to the client app since 2002 (and maybe even 2001) I don't think anyone was really making any effort to make the client any better. Suggesting that we all just trust "Verified by Visa" is certainly a lot easier for them though, and I suspect their risk assessment process simply told them to dump the service since it wasn't very popular. Its low popularity had a lot to do with the fact that their marketing of it was abysmal, but what do I know about marketing, eh?

There is a very costly alternative available in the form of 3V Vouchers, but their charges and terms of use make moneylending look like a socially responsible business. OK, that's unfair, but I find the Euro 5 - 7.50 fixed fee per transaction detailed in their terms and conditions outrageous given that these are really targeted at folks who can't afford a real credit card, and they are totally risk-free pre-paid vouchers as far as the issuing card company is concerned. Compared to the zero cost per transaction of the O-Card it really doesn't seem right to me, but I suppose they have to make a shilling after all. Frankly I suspect that the demise of the O-Card and the rise of these vouchers are related, but I might just be getting too paranoid.

All is not lost, however, because it seems that Paypal are launching something similar. This blog post from Techimo points to this Paypal info page that describes a new Paypal service\utility that is not hugely dissimilar to the O-Card. I'm quite keen to see it come out of restricted beta and check how well they have implemented it. It's the first positive sign I've seen that one of the large operators in the online payments game is making a serious effort to give end users a more concrete way of managing the risk they are prepared to take on when paying for things online. As for me, I'm just looking forward to being able to shop with confidence at www.wearaetotallydodgy.com again. Happy days.

Thursday 28 December 2006

Parallelism Schmarallelism

The blog entry from Robert O'Callaghan that I linked to earlier via GR reminded me that I had a rant I wanted to explore a bit, so bear with me. Before I start let me be clear that I'm actually a major fan of the multi-core CPU trend at the moment, and I genuinely realize that multi-core systems and highly parallelized architectures (a la the PS3) are going to be a very important part of all things processing related in the future, and that we are going to need some major software rewrites to make efficient use of them. However a very interesting article that I came across at work brought back some of the stuff I learned at college all those decades ago, and I thought a similar riff of my own here wouldn't be a bad idea. At least it might stop me trying to expound on this to folks in the pub.

Robert's article accurately points out that most current codebases perform very badly on multi-processor systems, and he deduces that since we are moving to a mega-multi-core world a major die-off of this bad old code will happen, leading to an exciting new generation of software that only really shines when running on multi-core systems. I think he's mostly right in the article, but we need to think about what sort of multi-core systems we are likely to see in the medium to long term before undertaking some of the really major rewrites. We also need to start focusing on more than just the processors - the sequential nature of the communications protocols we use for almost everything is a major performance bottleneck, but that's a rant for another day. Right now I just want to explore the potential performance envelope that this multi-core trend will enable for us and when we can expect it to end.

As Robert pointed out, most computing tasks that we have today don't fit the parallel paradigm too well; in fact some are so poorly suited to parallel architectures that they run slower on multi-core CPUs than on older single-core CPUs at the same clock speed. My hope, though, is that Robert's vision will come about sooner rather than later and we will get to a point where ~95% of the workload that we want to do can be distributed [across multiple cores]. I'm not hugely confident that this will happen; certain things - signaling, thread\process coordination, general system housekeeping overhead, user interface feedback, authentication\authorization sequencing, challenge response handshaking\key exchange and some time-sensitive code, to name a few - will forever remain almost entirely sequential. However, I'm not really an expert so I'll err on the side of extreme optimism here and use that 5%\95% ratio in my arguments.

Now we must hark back to 1967, when the great granddaddy of parallel computing architectures, Gene Amdahl, sat back and thought long and hard about this. He realized that the little bitty fraction of code that has to forever remain sequential would ultimately prevent parallelism from scaling indefinitely. Specifically he pointed out that there is a hard limit to the increased performance that can be realized by adding more computing cores, and that it depends solely on the relative fraction of your code that has to remain sequential. In short, where F is the fraction that can never be made parallel, that limit is 1/F. More generally, where N is the number of processors\cores available, the maximum performance improvement attainable when using N processors is:
\frac{1}{F + (1-F)/N}
So in my assumed near-nirvana state of 95% parallelizable we end up with a hard limit of 20x performance gain even if we were to add a million cores. The drop-off in marginal performance kicks in quickly - by the time N reaches 32 or so the additional benefit of each extra CPU drops below 15% of a single CPU on its own, and past 100 cores it's only a few percent. Even assuming that we have highly efficient idle power management capabilities, that is going to be a bugger to justify just from an electrical power perspective. My guess is that we'll stop building general-purpose symmetric multi-cores long before that - probably stopping at no more than 32, although 16 might well be the sweet spot. Despite the fact that Intel have demo'ed an 80-core CPU and have pledged to have them commercially available by 2010, I don't really think it will happen, or if it does then not all 80 cores will be equal. I wouldn't be an expert now mind you, so this is just my opinion remember.
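
To put some rough numbers on that, here's what the formula above gives for my assumed F = 0.05 at a few core counts (my own back-of-the-envelope figures, rounded):

\frac{1}{0.05 + 0.95/4} \approx 3.5 \quad (N = 4)
\frac{1}{0.05 + 0.95/8} \approx 5.9 \quad (N = 8)
\frac{1}{0.05 + 0.95/16} \approx 9.1 \quad (N = 16)
\frac{1}{0.05 + 0.95/32} \approx 12.5 \quad (N = 32)
\frac{1}{0.05 + 0.95/100} \approx 16.8 \quad (N = 100)
\frac{1}{0.05} = 20 \quad (N \to \infty)

By 32 cores you've already banked over 60% of the theoretical maximum gain, which is why I can't see the purely symmetric approach going much further than that.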

Amdahl's law above quite specifically deals with symmetric systems and doesn't deal with the asymmetric demands of a modern "system". Modern OSs and computing usage can benefit hugely from architectures that can provide important (for a given value of important) processes with dedicated processing hardware, since many functions are effectively independent of each other. A multicore system could productively dedicate cores to individual tasks - one for the user interface alone, one for malware scanning, one for keeping track of crypto keys\authentication\authorization states, one for housekeeping and optimizing storage performance on the fly, and so on. So there are benefits to be gained from additional cores that are different to those that Amdahl's law governs, but I still can't see them raising the bar above 32 cores in total, especially since most of the really useful things also require multiplication of other resources (memory, network bandwidth etc).

Now note that we have quad-core x86 CPUs available today in the consumer market. At a guess that will double to 8 cores in 2007 and double again to 16 in 2009, so we'll probably be banging up against 32 cores for consumer hardware sometime in 2010, assuming that the performance drop-off or the idle electrical power consumption at 16 cores doesn't put the brakes on sooner. To keep pace with Moore's law we would need to hit 32 cores at a minimum by 2010.

The general-purpose CPU market has had to freeze clock speeds in the 3-4GHz range since 2002 because no one has managed to build chips that go faster efficiently, and no one has any proven solution to that barrier today. So we are now looking at a situation where the current dominant architecture has no demonstrated way to increase desktop CPU performance significantly after 2010. There's nothing new there - the CPU business has never been able to say exactly how it was going to build something 5x more powerful four years into the future - but I have a strong suspicion that these two problems (the clock speed wall and the parallel scaling limit) are much harder than the ones faced in the past.

There are already some highly parallelizable tasks that end users want to use their systems for (3D graphics, ripping\transcoding of video and audio, image processing), and the ideal solution for these is to use dedicated (and highly parallel) hardware for those very specific tasks with a general-purpose CPU managing the show. Sounds a lot like the Cell CPU, doesn't it?

I think we are going to see a major shake-up in architectures over the next two to three years as the symmetric multi-core performance wall becomes undeniable. By the end of 2008 an asymmetric multi-core client system architecture will emerge that will clearly define the shape of client desktop computing for the next decade, and it won't be simply a massive replication of x86 cores. The Cell could well turn out to have been well ahead of its time, and what I'd love to see is an x86\GPU "hybrid" - say with a 16-core general-purpose primary unit, a high-speed ultra-high-bandwidth memory\IO controller, and many simple cores (tens, maybe hundreds) for highly compact integer and floating point matrix\vector computational tasks. We'll see.

Sunday 24 December 2006

Saturday 23 December 2006

Auto-discovery of Browser Search Plugins

The search box at the top of the page links into a Google Customised Search Engine that uses my Google Reader feed list's OPML as its definition, so while searching it isn't quite as scary as actually searching my brain, it will almost certainly find anything topical that I'm half likely to be interested in. The point of all this, though, was to have some raw material to play around with browser search plugins. The Amazon A9 OpenSearch standard\microformat defines a really neat little XML structure that should allow you to define a customised search engine plugin for all supported browsers. If you're using either FireFox 2 or IE 7 you should notice the following effects:

With FireFox 2 the search icon to the left of the default search box (circled in red below) should now be highlighted, indicating that the browser has discovered a new search plugin. If you click the drop-down to the right of the icon you will be presented with some options that now include adding "Search Joe Mansfield's Brain", which sets my CSE as the default and replaces the default search icon with my mini-aegishjalmr icon to give you a visual clue about the new setting. I honestly wouldn't expect anyone to want to do this for anything other than academic curiosity.

[Screenshot: the FireFox 2 search box with the discovery icon highlighted and "Search Joe Mansfield's Brain" in the drop-down]

On IE7 the effect is slightly different. Once again the changed UI element is circled in red. Again this can be expanded and the additional Search engine can be added.

[Screenshot: the IE7 search box drop-down showing the option to add the new search provider]

All of this would be great if folks stuck to the spec, but for some reason nothing ever seems to work out quite that simply. I've managed to hack together a combined XML file that works for both browsers using the base A9 OpenSearch spec and one component (moz:SearchForm) defined by Mozilla's variant.

I found a couple of things that were useful along the way:
SearchPlugins.net provides a neat online utility for generating the whole thing. Sweet.
IE7's default "Find More Providers" option within the Search Dialog drop down links to the Microsoft "Add Search Providers to Internet Explorer 7" page which includes a nice simple online generator. It's not perfect but it does the job for IE7 and having it directly findable with just 2 clicks from within the browser is excellent.
However to get it to work (for me) for both browsers and have an Icon included for FF2 I had to do the following manually once I had the base XML file as produced by SearchPlugins.net:
  1. Add in xmlns:moz="http://www.mozilla.org/2006/browser/search/" to the OpenSearch Description namespace tag.
  2. Despite trying to get an image included by the generator I had to add it manually, pointing an Image tag at raw data in the form of a 16x16 GIF with all non-alphanumerics replaced with URL-escaped hex (i.e. the Image tag content starts off with data:image/gif,GIF89a%10%00%... )
  3. Add in a moz:SearchForm tag pointing to the root of the CSE.
  4. The SearchPlugins.net XML did not define the encoding in the XML declaration - IE7 didn't seem to like that, so I put encoding="UTF-8" back in there. Seemed to work.
You can find the whole thing here. No doubt I'll have to add some more to get it to work with Opera whenever I get around to testing that. Given that the final XML is only 11 lines long it seems to me that there's some more work to be done in getting this thing really "standardized", but maybe that's just me, eh?
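
For illustration, the combined file ends up looking something like this - note that the Description text and the Google CSE URLs below are placeholders rather than my exact values, and the Image data is truncated:

<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
                       xmlns:moz="http://www.mozilla.org/2006/browser/search/">
  <ShortName>Search Joe Mansfield's Brain</ShortName>
  <Description>Searches the sites in my Google Reader feed list</Description>
  <InputEncoding>UTF-8</InputEncoding>
  <Image height="16" width="16">data:image/gif,GIF89a%10%00%...</Image>
  <Url type="text/html" template="http://www.google.com/cse?cx=YOUR_CSE_ID&amp;q={searchTerms}"/>
  <moz:SearchForm>http://www.google.com/cse?cx=YOUR_CSE_ID</moz:SearchForm>
</OpenSearchDescription>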

Once you have your XML you need to host it somewhere, and since Blogger doesn't seem to give me the option I threw mine up on one of my Google Page Creator pages. I then stuffed a reference to it into the Blogger header code using a link rel tag - it took me a while to figure out how to quote raw html without borking this WYSIWYG posting thingy, but the general shape of it is shown below. For those interested, the place to insert the tag is around line 7 or so in the raw HTML, just below the head tag, and the Mozilla docs clearly describe how to format it.
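
It looks roughly like this - the href is just a placeholder here, so point it at wherever your own XML file ends up being hosted:

<link rel="search" type="application/opensearchdescription+xml" title="Search Joe Mansfield's Brain" href="http://YOUR-PAGE.googlepages.com/opensearch.xml" />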

We are all just fleeting impressions in the snow.

I came across this earlier and linked to it via GR but I have to add that I think this is a really succinct summary of Life, The Universe and Everything:

"See, there was a bunny hopping through the forest, then a bird came down and killed his ass."

Follow the link, the picture is amazing.

Friday 22 December 2006

Google Reader Shared Stuff

I've got a Google Reader shared links list available here for those of you interested in the sort of things that make me go hmmh.

For a laugh you can try out the Search Joe's Brain function above - I've had loads of fun with Google's Customized Search Engine, you should give it a try.