
Friday 31 August 2007

Friday Links

The Sony Rolly. I'm at a loss as to what market this is supposed to appeal to, but I suppose the fact that I'm blogging about it says it must be sad geeks with no sense of art or rhythm.

Casio go nuts with their new prototype consumer super-zoom camera. 60fps 6-megapixel shooting? The highest display resolution I can find at the moment only hits 5 megapixels (QSXGA, 2560x2048). This is triple the pixel count of so-called "Full HD" at more than twice the frame rate. I seriously hope it comes with a FireWire port to plug it into the multi-terabytes of external storage it's going to need.
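
For a rough sense of the storage problem, here's a quick back-of-the-envelope sum in PowerShell (assuming uncompressed 24-bit frames, which is the worst case):

6e6 * 3 * 60                   # ~1.08 GB per second of shooting
6e6 * 3 * 60 * 3600 / 1e12     # ~3.9 TB per hour

Even heavy compression only buys you an order of magnitude or so, so the multi-terabyte estimate isn't much of an exaggeration.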

Rumour has it that European cellular providers are going to turn on GSM EDGE data services to facilitate the roll-out of the iPhone. I won't call bullshit on it just yet, because some of the cellular providers I've dealt with are easily stupid enough to consider doing this, but even I'd be surprised if it happens. Anyone got any idea if the iPhone is still selling? I got my hands on an HTC Touch during the week and I've got to say that it is a surprisingly tiny and lovely piece of kit, but frankly I still like my phones to have a keypad.

There will be an update on the Meteor Spotting project over the weekend - this week hasn't been very amenable to giving me the time to work on it unfortunately.

Sunday 26 August 2007

More Meteor Spotting

So I left this at a point last week where I was able to get a compact representation of the changes between two images and I had a simple script that attempted to identify which sets of images were more interesting by looking for long(ish) lines on the image.

As it turns out, the script I posted was pretty poor at doing this and only yielded good results if you had pretty clean data. Worse still, it had some major flaws, and a number of head-scratching tests led me to rewrite it quite a bit. First I added code that identified the blocks of changed sectors that the code was grouping together into units, and then I used that to debug the issues with the script. By yesterday I had code that could produce images like this one without making mistakes - the script posted last week has some major bugs that cause it to ignore large sections of an image. The revised script is listed below.



As you can see here, the corrected script now identifies adjacent blocks correctly and colour-codes them so I can quickly see what it is really doing. Small groups are ignored (where the distance covered by the group of adjacent sectors is less than 10), blue means that the mean square of the distance of the points from the longest axis of the group is less than 10, green means that it is between 10 and 50, and red means that it is greater than 50. In general this means that blue-marked sectors are properly interesting lines, but you can see that it's not always true.

This version is a bit better at identifying the stuff I've told it to look for, but that stuff isn't actually good enough. After more testing against my sample data I've found that while I can get to a point where I identify 80% of the interesting stuff with about a 50% false positive rate on relatively clean data, I get about 5x that rate of false positives when I feed in very noisy data (shots of fast-moving cloud cover).

A bit more research over the past few days pointed me to line-finding algorithms based on Hough Transforms. This is a very cool way of switching the image mapping domain into a polar form (sort of) that allows you to identify the "line like" features in an image. The basic idea is simple enough - every point votes for all the (angle, distance) parameter pairs of lines that could pass through it, and real lines show up as peaks where lots of votes coincide - and one very kind soul has provided the source code for a Java demo here. I've spent most of today building the code to do this efficiently and I've finally gotten to a point where I think the transform is correct, but I need to complete the mapping of the detected lines back onto the source images before I can be sure it really is working.
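
Since I'm going to have to convert all of this to PowerShell eventually anyway, here's a minimal sketch of the accumulator idea in that language. Take it as an illustration rather than my working code - the $points list of "x,y" sector pairs and the 200-sector bound on rho are made-up placeholders:

$ThetaSteps=180
$MaxRho=200
$Accumulator = new-object 'int[,]' $ThetaSteps,(2*$MaxRho)
foreach ($point in $points) {
$x,$y = $point.split(",")
$x=[double]$x; $y=[double]$y
for ($t=0; $t -lt $ThetaSteps; $t++) {
# rho is the perpendicular distance from the origin of a candidate line at angle theta
$theta = $t * [math]::PI / $ThetaSteps
$rho = [int][math]::Round($x*[math]::Cos($theta) + $y*[math]::Sin($theta))
$Accumulator[$t,($rho+$MaxRho)]++
}
}
# Cells in $Accumulator with high vote counts correspond to strong lines at (theta,rho)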

Tune in next week for the next exciting episode.. :)

Updated Line Finder..


$datafile=shift();
$highlightfile=shift();
$sectorfile=shift();
SetupData ($datafile);

foreach $item (@data) {
@items=split(/[^\d]/,$item);
$itemcount++;
$mapdata{$items[0]}{$items[1]}=$itemcount;
$mapitems[$itemcount]=[$items[0],$items[1],$items[2]]; # x, y and z (brightness) for this sector
$livemap[$itemcount]=1; # 1 means this sector is not yet part of a line

}
for ($indexitem=1;$indexitem<=$itemcount;$indexitem++) {
if ($livemap[$indexitem]) {
$mstring=liner("",$indexitem);
$nscore = Analyze($mstring);
$score=$nscore if ($nscore>$score);
}
}
print "$score";
SaveSectorFile();

sub liner { # Depth First Recursive Scan for Adjacent sectors
my $lstring=shift();
my $position=shift();
$lstring .= "$position ";
$livemap[$position]=0;
for (my $traverse=0;$traverse<=7;$traverse++) {
my $new_x=$mapitems[$position][0]+$traces[$traverse][0];
my $new_y=$mapitems[$position][1]+$traces[$traverse][1];
if (exists($mapdata{$new_x}{$new_y})) {
my $new_position=$mapdata{$new_x}{$new_y};
my $bstring="";
if ($livemap[$new_position]) {
$lstring .= liner("",$new_position);
}
}
}
return($lstring);
}

sub Analyze { # Test a list of sectors and score for "line-ness". Bigger scores are more line-like.
my $mstring=shift();
my @points=split(/\s/,$mstring);
my ($leng,$depth,$dist,$score,$min,$max)=(0,0,0,0,$points[0],$points[0]);
$minp=int(($mapitems[$min][0])**2+($mapitems[$min][1])**2);
$maxp=int(($mapitems[$max][0])**2+($mapitems[$max][1])**2);
foreach my $pixel (@points) {
my $tx=$mapitems[$pixel][0];
my $ty=$mapitems[$pixel][1];
my $dist2=int((($tx)**2+($ty)**2));
if ($dist2<$minp) {
$minp=$dist2;
$min=$pixel;
}
if ($dist2>$maxp) {
$maxp=$dist2;
$max=$pixel;
}
$depth++;
}
($x0,$y0)=($mapitems[$min][0],$mapitems[$min][1]);
($x1,$y1)=($mapitems[$max][0],$mapitems[$max][1]);
if ( ($x1 != $x0) or ($y1 != $y0) ) { # skip degenerate single-point groups
foreach my $pixel (@points) {
$x=$mapitems[$pixel][0];
$y=$mapitems[$pixel][1];
$dist +=((($y0-$y1)*$x+($x1-$x0)*$y+($x0*$y1-$x1*$y0))/sqrt( ($x1-$x0)**2+($y1-$y0)**2))**2;
}
$leng=sqrt(($x1-$x0)**2+($y1-$y0)**2);
}
if ($leng>10) {
$dist2=int($dist/$leng);
$dist=int($dist);
$leng=int($leng);
$colour=$red;
$colour=$blue if ($dist2<10);
$colour=$green if (($dist2>=10) and ($dist2<50));
highlight (\$image,$mstring,$colour,$dist2,$sectsize);
if ($dist2<10) {
if ($dist2 > 0) {
$score=$leng/$dist2;
} else {
$score=$leng*2;
}
}
print "$score $dist2 $leng $depth ($x0,$y0) ($x1,$y1)\n" if ($comments);
}
return($score);
}

sub highlight { # colour in the sectors of the provided list on the sector image
my ($im,$points,$colour,$trigger,$diff)=@_;
my $image=${$im};
my @points=split(/\s/,$points);
foreach my $point (@points) {
$image->rectangle($diff*$mapitems[$point][0],$diff*$mapitems[$point][1],
$diff*$mapitems[$point][0]+$diff, $diff*$mapitems[$point][1]+$diff,
$colour);
}
}

sub Init { # Set up some global values
use GD;
use GD::Image;
GD::Image->trueColor(1);
$ChartImage=$sectorfile;
$image = GD::Image->newFromJpeg($ChartImage);
($limx,$limy)= $image->getBounds();
$sectsize=5 unless ($sectsize); # default sector size when not set by RunTest
$limx=$limx/$sectsize;
$limy=$limy/$sectsize;
$image->interlaced('true');
$image->setThickness(1);
$red = $image->colorAllocate(255,0,0);
$blue = $image->colorAllocate(0,0,255);
$green = $image->colorAllocate(0,255,0);
@traces=([-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1]);
$itemcount=0;
$score=0;
}

sub RunTest { # Create test data
$sectsize=5;
$highlightfile="Sectors.png";
$sectorfile="sectors.jpg";
$dataout=`motiontrack.exe -s 1 --sectorsize=$sectsize picture.jpg lastpicture.jpg sectors.jpg 2>dump.txt`;
@dataitems=split(/\n/,$dataout);
$data=$dataitems[1];
$comments=1;
}

sub SaveSectorFile { # Save the marked up sector file
open (IMG, ">$highlightfile");
binmode(IMG);
print IMG $image->png;
close (IMG);
}

sub SetupData { # Read in data if provided in a file otherwise run the test code.
my $datafile=shift();
if ( -e $datafile) {
open (INFILE, "$datafile") or die ("Unable to open $datafile");
$data="";
while (<INFILE>) {
chomp;
$data .= $_;
}
} else {
RunTest();
}
$data .=" ";
if ($data !~ /^(\d+\,\d+\:\d+\s+)+$/) {
print "Data Format Error\n";
exit;
}
@data=split(/\s/,$data);
Init();
}

Friday 24 August 2007

Innumeracy

I've been very interested in the recent flurry of new camera models that have emerged from the PR departments of Canon, Nikon, Olympus, Pentax and others over the past two weeks and I may well post some thoughts of my own on them at some point but I have been very irritated by one particular error that has been repeated by a slew of camera and gadget blogs when reporting on the new Nikon D3.

You can find plenty of examples (140k+) with this Google query

Taking one example - Camera Labs - they state:
We’ve seen the 3in 922k pixel monitor and it simply looks superb with around four times the detail of 230k pixel models used by Canon and most of the competition. Indeed rather than quote the total number of screen pixels, you can express them as a proper monitor resolution of 640x480 pixels – that’s right, full VGA resolution and it looks great.
WoW! Real VGA! It must be great eh!!!

Well, VGA is 640x480 pixels, but 640x480 is a lot less than 922k. That would be about 307k pixels in fact, as 0.5 seconds with any calculator will tell you even if you can't do it in your head. The 922k pixel display on the D3 and D300 has a resolution that is almost certainly 1280x720 (921.6k pixels). If you want to give it a label then that would be called HD 720* if it was on a TV, which is a much more desirable label than VGA in any case.

Now don't get me wrong, it is a fantastic screen and the resolution on it is incredible. If this is true then it is a genuine HD wide-screen display and has (for example) 6 times the number of pixels of the over-hyped (and non wide-screen) iPhone's 480x320 display. For those interested in such things this is very close to 500dpi, which is incredible as 300dpi is generally taken as the holy grail for display resolutions.
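
If you want to check the sums yourself, PowerShell makes a handy calculator (the dpi figure assumes the quoted 3in is the diagonal):

640*480                                   # 307200 - VGA really is ~307k pixels
1280*720                                  # 921600 - this matches the 922k figure
[math]::Sqrt(1280*1280 + 720*720) / 3     # ~490 pixels per inch on a 3in diagonal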

So if the 922k pixel figure is true then this is an incredible development in terms of display technologies, and that is something that the camera/gadget blogosphere should have been making noise about, but instead their inability to cope with numbers has let them down.

*The brain donors who flog TVs in the high street in Ireland would probably tell you that was actually "HD Ready", and sure why would you ever want any more than that anyway. That's a rant for another day.

Wednesday 22 August 2007

Checking Service Permissions with PowerShell

This interesting little security article, which shows some poor understanding of the Windows security model on the part of someone in Cisco, got me thinking about how one would go about taking a look at the permissions associated with each service's exe. There's the simple but boring approach of stepping through every service and manually checking the files, but I wanted to see if PowerShell could do the trick.

There is a potentially useful built in cmdlet called Get-Service that should help us dig into this but a quick scan of the output from:

Get-Service | get-member

shows that a number of important Service properties are missing, not least of which is the Service’s startup “Path to Executable”. That is a bit of an oversight but it’s not a huge problem as this is one of very many cases where the Get-WMIObject cmdlet comes to the rescue. A simple WMI query later and we have an object collection that includes all the details we need on our services.

Once we have the collection we need to do some additional work. Let’s give it a roll.

$services = get-wmiobject -query 'select * from win32_service'
foreach ($service in $services) {
$Service.Pathname
}

The Service.Pathname property contains the startup command line for the service. This is what we need but it is (unfortunately) quite messy. It sometimes includes switches and parameters and sometimes (but not always) it's escaped within quotes in order to handle paths with spaces. To figure out which it is we first test whether the full Pathname value is a valid file using Test-Path. Like many PowerShell cmdlets this one complains quite loudly when it's unhappy with its parameters by throwing an error exception and printing the error out in nasty red text to get your attention. You can mitigate this a bit by enclosing the path to be tested in quotes, however in this case our best approach is to temporarily suppress all errors using the common cmdlet parameter -ErrorAction (or its alias "-ea") with a value of "silentlycontinue". If the test fails to find a file then we need to do some parsing of the string to find the actual path to the service executable. Inspecting some sample values shows that we have the following possibilities:
  • If there are matching double quotes then return what’s in between the First pair of matching double quotes
  • Otherwise if there is a space anywhere in the string then return everything to the left of the first space
  • Otherwise return the entire string.
And then test the resulting path again (using Test-Path $path -ea silentlycontinue) to be certain. You can jump through some hoops to do this if you want, but I prefer regular expressions.

if ($Service.Pathname -match "(\""([^\""]+)\"")|((^[^\s]+)\s)|(^[^\s]+$)") {
$path = $matches[0] -replace """",""
}

PowerShell has a reasonably powerful cmdlet called Get-ACL which returns a Windows security descriptor object for an object (a file usually, but it will work for inspecting a registry key too). Inspecting a sample file shows that the object returned by Get-ACL provides us with a lot of properties and methods. The ACL object's "Access" property is the most useful one as it is a collection of objects that correspond to the entries in the service executable file's access ACL. Each of these objects has the following properties that are of interest.
  • IdentityReference (The name of the object that this Access Control applies to – this will generally be a user account or group name)
  • AccessControlType (Allow or Deny)
  • FileSystemRights (Read, Execute, Change, Full Control etc)

In general there’s not going to be a problem with Administrator accounts, the System or Network accounts having high privileges levels on services so we can safely ignore any entries for the following:
  • NT AUTHORITY\SYSTEM
  • NT AUTHORITY\NETWORK
  • BUILTIN\Administrators
  • BUILTIN\Power Users

A quick check also shows that many other users can safely have Read, Execute and Synchronize rights without any problem. We only want to see entries where users other than those listed above have Modify\Change or Full Control Permissions.

Putting it all together we now have.

$services = get-wmiobject -query 'select * from win32_service'
foreach ($service in $services) {
$path=$Service.Pathname
if (-not( test-path $path -ea silentlycontinue)) {
if ($Service.Pathname -match "(\""([^\""]+)\"")|((^[^\s]+)\s)|(^[^\s]+$)") {
$path = $matches[0] –replace """",""
}
}
if (test-path "$path") {
$ServiceName = $service.Displayname
$secure=get-acl $path
foreach ($item in $secure.Access) {
if ( ($item.IdentityReference -match "NT AUTHORITY\\SYSTEM" ) -or
($item.IdentityReference -match "NT AUTHORITY\\NETWORK" ) -or
($item.IdentityReference -match "BUILTIN\\Administrators") -or
($item.IdentityReference -match "BUILTIN\\Power Users" ) ) {
} else {
if ($item.FileSystemRights.tostring() -match "Modify|Full|Change") {
Write "$ServiceName : Potentially Elevated Service Permission(s)"
Write (" "+$item.IdentityReference.value + " : "+$item.AccessControlType.tostring() + " : "+$item.FileSystemRights.tostring())
}
}
}
} else {
Write ("Service Path Not Found: "+$service.Displayname)
Write (" "+$Path)
}
}


This produces output like this:

Ati HotKey Poller : Potentially Elevated Service Permission(s)
JTMANSFI-MOBL\Administrator : Allow : FullControl
Service Path Not Found: SMS Agent Assistant
C:\WINDOWS\system32\INTELMAA\ccmhlp32.exe
Service Path Not Found: DCOM Server Process Launcher
C:\WINDOWS\system32\svchost
Intel(R) PROSet/Wireless Event Log : Potentially Elevated Service Permission(s)
GER\jtmansfi : Allow : FullControl
….

This shows that we still have some issues. The command line for calling DCOM doesn’t specify the full name for SVCHOST.EXE above so we generate a not found error even though we could find it if we added some more smarts to the code that parses the path. The error displayed for the SMS agent is valid as that file did not exist on the system being tested.
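
The extra smarts wouldn't need to be anything fancy. Something along these lines (a sketch only - I haven't tested it against all the odd command line formats out there) would catch the svchost case by retrying with ".exe" appended:

if (-not (test-path $path -ea silentlycontinue)) {
# Command lines like C:\WINDOWS\system32\svchost omit the extension
if (test-path "$path.exe" -ea silentlycontinue) {
$path = "$path.exe"
}
}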

Apart from those minor quibbles the script does now pretty much do what I intended. We can quickly see if there are any glaring problems like that referenced in the original article (where NT AUTHORITY\INTERACTIVE users had modify rights to the Cisco VPN Service exe). In my case (thankfully) there are none.

As a footnote it is worth pointing out that I deliberately excluded Power Users from my scan as the default permissions for a Power User on an XP system allow them to carry out a number of privilege escalation attacks that can bring their account up to Administrator level, or to avail of System level access if they want to. Mark Russinovich posted a very thorough blog article on this last year.

Tuesday 21 August 2007

Meteor Spotting

So, getting back to the task of automating taking pictures of the night sky and picking out the interesting stuff from the noise. I got a very useful data set from Doug Ellison over at UMSF of 300 or so 30-second exposure pictures that he took of a reasonably clear English sky on the morning of the 13th of August, when the Perseids were pretty much at their max. He got about 4 or 5 meteors, three planes and two or three satellites, plus he got lots of good shots of the sort of stuff we want the motion detector to ignore, specifically clouds and tree branches moving in the wind.

So as I was saying last time I established very quickly that ImageMagick is not fast enough for this sort of work. Maybe it would be OK with a bit of work but, thankfully, once I figured out that I couldn't use it I went in search of some better open source motion detection software and found MotionTrack. This does all the heavy lifting of flattening the files to grayscale, calculating the differences and then doing a sector based blur+edge detection filter to identify points of interest. It then conveniently outputs the result as both a tagged image and a list of sector coordinates.

For example:
Image #1 - No meteors (or plane or anything of interest)


Image #2 Meteor in the Top right (just above the tree)


Image #3. Motiontrack Sector Image


Motiontrack's defaults reduce the image resolution by half and then divide the result up into 5x5 pixel sectors. The image above marks any sector where the average difference in the corresponding 10x10 pixel block of the original pair of images has changed by more than a given sensitivity level. The image is handy for quick visual checking but the important output data comes in the form of the list of sectors that is printed to the console - this looks something like:
107,0:58 114,0:18 115,0:19 226,0:26 256,0:32 273,0:23...and so on.

The real trick now is figuring out a way to pick that line out of the scattered background noise and also to be able to tell the difference between that sort of difference plot and this one:


Here the wind has caused lots of movement of the branches which is a type of motion we don't really care about.

So what we want to do is search through this list looking for blocks of adjacent sectors and then to characterize them so we can tell the difference between long(ish) straight(ish) blocks and more random blocks. For the moment we'll search the list and find the block that has the greatest distance between the start sector and the end sector and we will output the total number of sectors in that block and that distance between the extreme end points. Those two numbers give us a way to crudely establish if the largest notable moving feature in the image is a line or not.

The approach I'm taking for this is to create a sparsely populated data structure to represent the sectors using a combination of two arrays and a hash (an associative array, for those of you unfamiliar with Perl).
  • The first array is just a linear copy of the sector output from Motiontrack. Each element of this array is a 3 element array containing the x, y and z (brightness) values.
  • The second array is used to keep track of where we've been and has a single entry that corresponds to each element in the first array. Initially we set this to a value that we will take to mean "Hasn't been included in a measurement yet".
  • The hash is 2 dimensional and uses x and y coordinates of the sector output. The value of each element is the index number for the corresponding element in the arrays. We will use this to locate valid sectors as we search for blocks\lines of adjacent moving sectors.
We could use a fully populated data structure to do the search but it seemed like a waste to me so I ploughed on with writing a quick version of this in Perl. I'm posting the current working version below for anyone who really wants to dig into it. It's a fairly simple depth first recursive scan.

Pushing the set of 300 images through this process, we find that if we set the sensitivity high enough we can detect most of the interesting images (~13 out of 16) but we also trigger false positives on an additional 20 or so images that are just moving branches. If we set the sensitivity a little lower we drop the false positive rate to about 10 but only hit about 50% of the interesting images. The solution to this problem is going to be a much more robust line finding algorithm so I'll be digging into that over the next few days and then trying to figure out how to convert the code below into PowerShell.

The Perl Adjacent-Blocks code.

# Find-Blocks - Recursive Line finder for sparse data
$datafile=shift();
open (INFILE, "$datafile") or die ("Unable to open $datafile");
$data="";
while (<INFILE>) {
chomp;
$data .= $_;
}
$data .=" ";
if ($data !~ /^(\d+\,\d+\:\d+\s+)+$/) {
print "Data Format Error\n";
exit;
}
@data=split(/\s/,$data);
$itemcount=0;
foreach $item (@data) {
@items=split(/[^\d]/,$item);
$itemcount++;
$mapdata{$items[0]}{$items[1]}=$itemcount;
$mapitems[$itemcount]=[$items[0],$items[1],$items[2]];
$livemap[$itemcount]=1;
}

@traces=([-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1]);

$maxlen=0;
$maxdepth=0;
$deepest=0;
$ppos=0;
for ($indexitem=1;$indexitem<=$itemcount;$indexitem++) {
if ($livemap[$indexitem]) {
$livemap[$indexitem]=0;
findblocks($indexitem,$indexitem,1);
}
}
print "$deepest $maxdepth $maxlen $ppos ".$mapitems[$ppos][0]." ".$mapitems[$ppos][1];

sub findblocks { # Recursive adjacent item finder
my $lstring=shift();
my $position=shift();
my $depth=shift();
my $cur_x=$mapitems[$position][0];
my $cur_y=$mapitems[$position][1];
my ($new_x,$new_y,$new_position);
$new_position=0;
my $trigger=1; # Flag for triggering output
# check each of the 8 possible adjacent sectors
for (my $traverse=0;$traverse<=7;$traverse++) {
# find the new sector using the offsets in the traces array
$new_x=$cur_x+$traces[$traverse][0];
$new_y=$cur_y+$traces[$traverse][1];
if (exists($mapdata{$new_x}{$new_y})) {
$new_position=$mapdata{$new_x}{$new_y};
if ($livemap[$new_position]) {
$trigger=0;
$livemap[$new_position]=0;
$lstring .= " $new_position";
$depth++;
findblocks($lstring,$new_position,$depth);
}
}
}
if ($trigger) { # If this is true then we are at a deepest point.
# Otherwise we're still scanning
my @points=split(/\s/,$lstring);
my $fx=$mapitems[$points[0]][0];
my $fy=$mapitems[$points[0]][1];
my $dist=0;
foreach my $pixel (@points) {
my $tx=$mapitems[$pixel][0];
my $ty=$mapitems[$pixel][1];
my $dist2=int((($tx-$fx)**2+($ty-$fy)**2)**(0.5));
if ($dist2>$dist) {
$dist=$dist2;
}
}
if ($depth > $deepest) {
$deepest=$depth;
}
if ($dist > $maxlen) {
$ppos=$position;
$maxlen=$dist;
$maxdepth=$depth;
}
}
}



Google Backtracks on Video

Apparently the powers that be in Google have heard the uproar that ensued following their fairly inept handling of the cancellation of their video download to own service and have backtracked somewhat. They're still canceling it and anyone who bought stuff to own for good will still eventually lose it but they are now giving everyone who fell for the "Download To Own" DRM fiction a full refund.

Frankly I'm very unimpressed. If that blog post is the whole truth then they didn't realise what the reaction would be to what they were planning to do and that, frankly, means they were being stupid.

Given that they made a stupid mistake it's nice to see them making some effort to make up for it but I reckon they seriously missed out on a fantastic opportunity to carve out some good legal precedent.

Oh well.

Sunday 19 August 2007

Camera Automation with PowerShell Part III


So far we've figured out how to do some basic automation and image comparison. Now it's time to see if we can make this do anything useful.

Our first serious attempt is going to add some basic error trapping to the various parts. For starters we have to add some traditional property checking to the start-up code to make sure that:

  • The PC can create a functioning WIA object
  • We have at least one WIA compatible device connected
  • We can connect to the device
  • It supports the WIA Command wiaCommandTakePicture ({AF933CAC-ACAD-11D2-A093-00C04F72DC3C})

The object setup and enumeration routines don't generally throw any error exceptions but we do have to make sure that we actually have a valid device and we might as well just set things up so that all but the nastiest failures end up with the user getting some sort of sensible message at least. To do this the best approach is to set PowerShell's global error handling behavior to "SilentlyContinue" and then to check for all errors immediately after making any call that might produce a fatal result. This global behavior is controlled by the Global Automatic Variable called $ErrorActionPreference which we will set to "SilentlyContinue". We could go to some trouble here enumerating through the stack of messages in the $error error handler object however for our purposes here we're simply going to use the "$?" variable that always returns the state (Successful [True] or Failed [False]) of the last command. This turns the initialization block of code into:

$ErrorActionPreference="silentlycontinue"
$WIAManager = new-object -comobject WIA.DeviceManager
if (!$?) {
write "Unable to Create a WIA Object"
Exit
}
$DeviceList = $WIAManager.DeviceInfos
if ($DeviceList.Count -gt 0) {
$Device=$DeviceList.item($DeviceList.Count)
} else {
write "No Device Connected"
Exit
}
$ConnectedDevice = $Device.connect()
if (!$?) {
write "Unable to Connect to Device"
Exit
}
$Commands = $ConnectedDevice.Commands
$TakeShot="Not Found"
foreach ($item in $Commands) {
if ($item.name -match "take") {
$TakeShot=$item.CommandID
}
}
if ($TakeShot -eq "Not Found") {
write "Attached Camera does not provide the ""Take Picture WIA Command"""
Exit
}

Moving on to the core working loop there are a couple of errors that arise quite often once you begin to test it.
  • Unexpected files. Since the basic function of the script means that we are creating and renaming image files all the time it is no surprise that we often find situations where files exist that we didn't expect and likewise files don't exist when we really wished they did.
  • Unexpected Camera failures. Depending on your camera you may see this a lot. My experience has been that really simple cheaper (fully automatic) cameras tend to plough along regardless of conditions but more powerful cameras tend to get easily upset and refuse to take pictures unless you have everything perfect for them. We need to code for the latter behavior as I want to use a DSLR for this, and it's highly likely that it will be temperamental.

This transforms the core loop into:

$Pdir="C:\temp\capture\test"
new-item $Pdir -itemtype dir
$ICompare="c:\Program Files\ImageMagick-6.3.5-Q16\compare.exe"
$SaveCount=0
$Sensitivity=1000
for ($stage=0;$stage -lt 10;$stage++) {

if (test-path "$Pdir\temp.jpg") {
del "$Pdir\temp.jpg"
}
rename-item -force "$Pdir\lastpicture.jpg" temp.jpg
rename-item -force "$Pdir\picture.jpg" "lastpicture.jpg"
if (-not (test-path "$Pdir\lastpicture.jpg")) {
rename-item -force "$Pdir\temp.jpg" "lastpicture.jpg"
} else {
del "$Pdir\temp.jpg"
}
Write "Taking Picture # $stage. "
$Pcount=$ConnectedDevice.items.Count
$TakeanImage=$ConnectedDevice.ExecuteCommand($TakeShot)
if ($ConnectedDevice.items.Count -gt $Pcount) {
# Camera has actually taken a picture and has it stored so attach an object to it
$Picture=$ConnectedDevice.items.item($ConnectedDevice.items.Count)
# Set up a storage variable to be certain we have retrieved it
$Imagefile=""
$ImageFile=$Picture.Transfer()
if ($ImageFile.FileExtension -eq "jpg") {
# If the object now contains a property called FileExtension that has a value jpg we have the pic
# Make sure we have a good filename to save it to
if (test-path "$Pdir\picture.jpg") {
del "$Pdir\picture.jpg"
}
$ImageFile.SaveFile("$Pdir\picture.jpg")
# Call ImageMagick to compare it
$compare=& $ICompare -metric MEPP $Pdir\picture.jpg $Pdir\lastpicture.jpg $Pdir\sectors.jpg 2>&1
if ($compare -match "(\d+)(\.|\s)") { $result=$matches[1] }
# If the Compare result is higher than some value then save a copy of the file
if ($result -gt $Sensitivity) {
copy-item "$Pdir\picture.jpg" "$Pdir\$SaveCount-$result-picture.jpg"
Write "Saving interesting picture: $Pdir\$SaveCount-$result-picture.jpg"
$SaveCount++
}

# We have an image on the camera that we need to tidy up
$ConnectedDevice.items.Remove($ConnectedDevice.items.Count)
}
}
}

This now gives us a reasonably reliable way of taking a bunch of snaps of a (slowly) changing target and seeing what the results look like. The results of that exercise were pretty good for static indoor scenes but they aren't actually all that good for the sort of thing that I want to do - compare long exposure pictures of the night sky. The sky rotates quite a bit in 30 seconds - 1/8th of a degree to be exact - and that converts into up to 7 or 8 pixels near the horizon on a 10 megapixel image for a camera with a fairly average wide angle lens (18mm on a Nikon D40x). The net result is that the ImageMagick compare metrics see the meteor as a single line covering up to 20 degrees or so of the picture (say affecting 1000-2000 pixels) but there are also many hundreds of star tracks each moving 7-8 pixels each. The end result is too complex for ImageMagick to figure out.
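
You can sanity-check that drift figure easily enough. Assuming the D40x's roughly 23.6mm wide sensor and 3872 pixel wide images (both from the spec sheet, so treat as approximate):

$fov = 2 * [math]::Atan(23.6/(2*18)) * 180/[math]::PI   # ~66 degree horizontal field of view
3872 / $fov * 0.125                                     # ~7 pixels of drift per 30 second exposure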

To make matters worse, ImageMagick is a complete resource hog when dealing with DSLR size images. Comparing two 1-2 megapixel images may chew up 10-20Meg of memory and use 100% CPU to yield an answer in five to ten seconds. Bump the resolution up to 10 megapixels and the amount of memory jumps to 250Meg or more, using 100% CPU for up to a minute on my laptop to yield an answer. Playing about with preparatory transforms that reduce the image resolution and change the colour space to monochrome helps the performance issues at the cost of some complexity, but it is a lot of work and in the end none of the blurring and edge detection filter options I tried gave me a reliable number that could be used to detect meteors.
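
For reference, the sort of preparatory transform I mean looks like this - shrink both images and flatten them to grayscale before comparing the small versions (a sketch using the same ImageMagick install path as above; the 25% figure is just an example):

$IConvert="c:\Program Files\ImageMagick-6.3.5-Q16\convert.exe"
& $IConvert "$Pdir\picture.jpg" -resize "25%" -colorspace Gray "$Pdir\picturethumb.gif"
& $IConvert "$Pdir\lastpicture.jpg" -resize "25%" -colorspace Gray "$Pdir\lastthumb.gif"
$compare=& $ICompare -metric MEPP "$Pdir\picturethumb.gif" "$Pdir\lastthumb.gif" null: 2>&1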

Tomorrow we'll look at the test images and see what we can do about the problem.

Blogger Doesn't Do Code Redux

Apparently using the PRE tag should work. Let's see.

$Username = "USERID"
$DS_Search = new-object System.DirectoryServices.DirectorySearcher
$DS_Search.Filter="(sAMAccountName=$Username)"
$User=$DS_Search.FindOne()
foreach ($item in $User.Properties["memberof"]) {
if ($item -match "Domain Administrators") {
"$Username is a Domain Administrator"
}
}

Well what do you know - it works. Time to fix all the old posts now...

Tuesday 14 August 2007

Blogger doesn't do Code

Or I can't get Blogger to do Code. Either way my last two posts look like someone mistook what I wrote for a salad and tossed it, repeatedly.

For anyone who is actually interested in what they should look like here are links to better presented versions.

Camera Automation with PowerShell Part II

Camera Automation with PowerShell Part II

We left off yesterday with most of the basic parts for a first run test of our Camera Automation script. I realise now that Blogger has really torn into my code snippets and removed all indentation, which makes some of them look pretty bad. I'm trying to figure it out but my first serious attempt (editing the raw HTML and blocking the snippets out with < code >...< /code > tags) didn't work. Suggestions appreciated, as today's post is going to have the same problem for now.

Anyway let's stick the bits we had figured out together and see what we come up with.


First off we need to initialize the camera, locate the command for taking a picture, and set up a working directory and some variables.

$WIAManager = new-object -comobject WIA.DeviceManager
$DeviceList = $WIAManager.DeviceInfos
foreach ($item in $DeviceList) {
$Device=$item
}
$ConnectedDevice = $Device.connect()
$Commands = $ConnectedDevice.Commands
foreach ($item in $Commands) {
if ($item.name -match "take") {
$TakeShot=$item.CommandID
}
}
$Sensitivity=1000
$Pdir="C:\temp\capture\test"
New-Item $Pdir -itemtype dir -ea silentlycontinue
$ICompare="c:\Program Files\ImageMagick-6.3.5-Q16\compare.exe"

The only thing new here is the "New-Item -itemtype dir" command which creates a new sub-directory. The -ea parameter sets the Error Action for the command so that we don't display an error if the directory already exists.

The initial approach we're going to take is to set up a basic loop within which we will:
  • Keep a copy of the last picture we took so we can compare it to the new one
  • Take a new picture and pull it back from the camera
  • Compare the new image to the old one
So a quick and dirty version looks like this:

$SaveCount=0;
for ($stage=0;$stage -lt 10;$stage++) {
# Keep a copy of the last picture
rename-item -force "$Pdir\picture.jpg" "lastpicture.jpg" -ea silentlycontinue
# Take a picture, find it, copy it from the camera and then save it
$f=$ConnectedDevice.ExecuteCommand($TakeShot)
# Select the last item in the item(image) collection on the camera
$f=$ConnectedDevice.items.item($ConnectedDevice.items.Count)
$fg=$f.Transfer()
$fg.SaveFile("$Pdir\picture.jpg")
# Compare the two pictures
$compare=& $ICompare -metric MEPP "$Pdir\lastpicture.jpg" "$Pdir\picture.jpg" null: 2>&1
# Pull the exact value we need from the output
if ($compare -match "(\d+)(\.|\s)") { $result=$matches[1] }
# If the Compare result is higher than some value then save a copy of the file
if ($result -gt $Sensitivity) {
copy-item "$Pdir\picture.jpg" "$Pdir\$SaveCount-$result-picture.jpg"
Write "Saving interesting picture: $Pdir\$SaveCount-$result-picture.jpg"
$SaveCount++
}
# Tidy up the image store on the camera
$ConnectedDevice.items.Remove($ConnectedDevice.items.Count)
}

We now have a basic script that we can use to test the principle of the thing. In testing it works some\most of the time but there are lots of things that can go wrong and we've put in absolutely no error handling.

For the next stage we'll improve the image comparison baseline by taking an average of a number of images and add in better error handling both up front (to handle cases where there is no camera or the camera connected isn't suitable) and during shooting so that we don't get caught out when the camera can't take a shot for one reason or another.

And hopefully some better code sample layout.

* Edited to fix the code snippets

Monday 13 August 2007

Camera Automation with PowerShell

Following a conversation with some friends I had an idea on Friday last that it would be quite cool to be able to link up a camera to a PC, figure out how to get the PC to drive the camera and then add some image analysis on top so that I could leave it to watch the sky for Perseids so I could possibly take some interesting pictures while not losing out on any beauty sleep. This was just the sort of mini-project I've been looking for as a way to really get a handle on what PowerShell is good at and to see if I can actually build something useful with it. For those of you not familiar with it, PowerShell is a new command shell that Microsoft have developed for Windows platforms. You can find some pretty decent introductory documentation from Microsoft Switzerland here and a quick overview with some handy links from MS Channel 9 here.

What I want to do here.

This should be fairly straightforward but the only way to be sure is to go ahead and build the thing.

  1. Find some way to control a camera and get pictures taken and delivered to the PC
  2. Figure out some way to examine and compare those pictures so we can decide if we want to do something with a specific picture
  3. Write some scaffolding code around both of these to make it all happen in the correct sequence.


Part 1. Getting Started - Connecting to APIs and Service Interfaces using PowerShell

We'll go off on a small tangent for a bit first to see how PowerShell can be used to plug into and talk to the various APIs and services that the Windows platform provides. Generally you have to build an application in order to do this sort of stuff - PowerShell makes it pretty simple and quick, and more importantly makes the whole exercise interactive, so investigating the capabilities of a service or API becomes much more intuitive. Anyway, enough waffle - time for some code.

For example if the machine we're using is part of an Active Directory Domain then we can plug into the Active Directory and start poking about really easily:

$Username = "USERID"
$DS_Search = new-object System.DirectoryServices.DirectorySearcher
$DS_Search.Filter="(sAMAccountName=$Username)"
$User=$DS_Search.FindOne()
foreach ($item in $User.Properties["memberof"]) {
if ($item -match "Domain Administrators") {
"$Username is a Domain Administrator"
}
}

Here we create a new object of the System.DirectoryServices class (which provides an interface to the Active Directory's Global Catalog), create a filter that finds a specific user object and then enumerate the groups the user belongs to to see if they are a member of a specific group. The point of this is to demonstrate the way that PowerShell can directly instantiate .NET framework objects. Once we have created an object we can then explore its properties interactively using PowerShell's tab completion or by piping the object into the ultra-useful Get-Member cmdlet (aliased by default as gm to speed this sort of thing up). To play with this a bit, edit the above with any valid UserID from your domain and then paste it into a PowerShell command line. Hit enter at the end to make sure the loop completes. Unless you are a Domain Admin it will appear to do nothing, but if you then type


$User | gm
$User.Properties

You can start to explore the user object interactively. You can use the enumeration concept demonstrated for the "memberof" collection above to dig into the more complex structured properties.

You can also attach to specific .NET assemblies directly rather than instantiating via a namespace. The syntax here is slightly less obvious but the result is identical - you have an object and an interactive shell that allows you to inspect and interact with its properties and methods.



[System.Reflection.Assembly]::LoadFrom("c:\temp\OpenNETCF.Desktop.Communication.dll") | Out-Null
$rapi = New-Object OpenNETCF.Desktop.Communication.RAPI
$ActiveSyncVer=$rapi.ActiveSync.Version.ToString()


This (obviously) won't do anything unless you have ActiveSync installed. Anyway, here we directly load an assembly via its DLL and then make use of the namespace that presents, in this case the very useful OpenNETCF desktop interface that provides a .NET wrapper for the RAPI (ActiveSync) functions for controlling Windows Mobile PDAs and SmartPhones. Once again you can explore the capabilities here by simply piping the newly instantiated object into Get-Member/gm and then working with the displayed methods and properties interactively. You will need to use the Connect() method to bind the initial object to any PDA/Phone you physically connect before being able to interact fully with it.

Finally, we can also instantiate legacy objects from the OLE/COM+ namespace that native Windows applications have used for years. This is what we need for our Camera Automation effort since we need to use the Windows Image Acquisition (WIA) service for this and it is exposed via a COM+ interface. The following code snippet creates a WIA Management object, enumerates through all connected devices and returns the last object found.

$WIADeviceManager = new-object -comobject WIA.DeviceManager
foreach ($Device in $WIADeviceManager.DeviceInfos) {
$Camera=$Device
}
$CameraControl=$Camera.connect()


The instantiation is very similar to that for native .NET objects but we have to give the "-comobject" hint as a parameter in order to find names (ProgIDs) from within the COM+ name-space. Once again we can enumerate the properties and methods of the object interactively. It is very instructive to drill into this interactively yourself with a camera attached so you can really explore the object structures but here's a basic sequence of commands that steps through creating the object, setting some important things up, taking the picture and then getting it back onto the PC as a file.

$WIAManager = new-object -comobject WIA.DeviceManager
$DeviceList = $WIAManager.DeviceInfos
foreach ($item in $DeviceList) {
$Device=$item
}
$ConnectedDevice = $Device.connect()
$Commands = $ConnectedDevice.Commands
foreach ($item in $Commands) {
if ($item.name -match "take") { $TakeShot=$item.CommandID }
}
$ConnectedDevice.ExecuteCommand($TakeShot)
$Pictures = $ConnectedDevice.Items
foreach ($item in $Pictures) {
$Picture = $item
}
$PictureFile = $Picture.Transfer()
$PictureFile.SaveFile("filename.jpg")


If you play around with the object yourself you will find a significant amount of other data depending on the make of camera attached - including lots of properties of the camera, its state (focus/exposure/shutter speed/white balance etc) and similar data about any images that are on the device (format, dimensions, compression ratio, colour depth etc). In the real script we will want to add a lot of error trapping as we're driving a real world object here and they tend to fail to do what you want a lot. For the moment, though, what we have here is the basic set of capabilities that we need in order to be able to tell the camera to take a picture, retrieve it from the camera and then save it on the PC.

Part 2. Manipulating Images and PowerShell Console Scripting

Now that we have a way to take and retrieve pictures from the camera, the next step is much more specialized, but the principle is frequently required for many scripting tasks. Ideally I'd like to have a native API/Service to call on here but there isn't one that does what we need (or at least I'm not aware of one) so we're going to have to fall back to the general purpose strategy of driving an external application and then pulling out some data from that task when it's completed. This is a useful enough exercise in itself in any case, and if my own mistakes in figuring it out for this task are anything to go by, it's something that everyone should be forced to do very early on in their PowerShell learning curve.

What we want to be able to do here is compare sequential pictures with each other and save ones that seem to have something significantly different about them. Eventually we'll want to get smarter and figure out a way to establish deviations from a moving average but this simple comparison will do to get us started. There is a very useful cross-platform utility/API called ImageMagick that does this sort of thing; it provides a set of tools for manipulating image files and was expressly developed to make it possible to write scripts and applications that could automate image manipulation tasks like format conversion, re-sizing, changing colour palettes etc. The capability that we need is made available to command line scripts via the ImageMagick Compare command. This function is pretty powerful and has quite a few options, but for now we're going to limit ourselves to doing a fairly basic comparison between images based on a metric called Mean Error Per Pixel (MEPP) which averages out changes over the whole image. The command syntax we'll be using is:
[ImageMagick-Dir]\compare -metric MEPP filename.jpg previous.jpg null:

This prints out result values to the console screen that look like the following:

0 (0,0) (for identical pictures)
591.746 (0.000264707, 0.282353) (for very similar pictures)
34978.3 (0.361637, 1) (for completely different pictures)


I will be returning to this later when I've had time to establish what metric and value gives the best result but for now we'll assume that an MEPP value of > 1000 indicates that something has noticeably changed between our two test images.

Calling external executables from within PowerShell is quite easy to do using the "&" or ". " operators. I say this despite the fact that I spent a soul destroying couple of hours over the past weekend trying to find them. There is a much more comprehensively documented way to spawn off any application in a separate process (using [System.Diagnostics.Process]::Start()) but then you have to build some fairly awkward scaffolding around that in order to watch for its termination, and then go through a few more hoops to capture the console output stream that we are looking for. Rather than go to that sort of trouble, the "&" / ". " operator syntax works in a fashion that will be much more familiar to traditional command line junkies. There is one quirk, as the following will show, but it works pretty much as any Perl / Windows CMD.EXE shell script writer would expect. I'm disgusted with myself that it took me so long to figure this out BTW, which is why I'm harping on so much - the lesson here is that if you are the type who never bothers to RTFM you are going to be totally FUBAR when the stuff you are looking for is punctuation - the chances of finding information on the ". " command via a search engine or any application help interface are not good, and "&" isn't much better.

So now we have a kind of bulky compound command line that we want to execute. From a CMD.EXE shell on my machine it looks like:

"c:Program FilesImageMagick-6.3.5-Q16compare.exe" -metric MEPP filename.jpg previous.jpg null:

The two operators we're dealing with expect their first parameter to be an executable, cmdlet, function or script so we should pass it exactly as it is seen above including the quotes around the executable. We also want to capture the output text into a variable for later so we attempt something like this:

$Compare=& "c:Program FilesImageMagick-6.3.5-Q16compare.exe" -metric MEPP filename.jpg previous.jpg null:

This (unexpectedly) does not capture the output into the variable. Instead we still see the output (something like 591.746 (0.000264707, 0.282353)) on the console when we test this interactively. Suspecting that COMPARE.EXE might be sending this output to STDERR we try the classic shell STDERR to STDOUT redirection syntax of 2>&1 at the end of the command line.

$Compare=& "c:Program FilesImageMagick-6.3.5-Q16compare.exe" -metric MEPP filename.jpg previous.jpg null: 2>&1

This works but when we echo out the contents of the $Compare variable to the screen we see a bit more than the simple text value that we would see under similar circumstances in a Perl Script for example.

PS > $Compare
compare.exe : 591.746 (0.000264707, 0.282353)
At line:1 char:13
+ $Compare = & <<<< "c:\Program Files\ImageMagick-6.3.5-Q16\compare.exe" -metric MEPP picturethumb.gif lastthumb.gif null: 2>&1

The extra data appears because $Compare is actually an object not a simple string and the PowerShell console is dumping a lot of the properties out at once. Once we get over that minor confusion we can use Get-Member to see what sort of object it is and this indicates that what we actually want at this stage is the return value from $Compare.ToString(). To be even more specific we want to extract out the first few digits that were being sent to the console (591 in the above example). I'm a fiend for misusing Regular Expressions but this is definitely a good place to use a simple one so we're going to fetch the value of our compare operation using the following code.

if ($Compare.ToString() -match "(\d+)(\.|\s)") { $CompareMetric=$matches[1] }

PowerShell encapsulates its regexes in an object syntax inherited from .NET but the core regex strings (the important part) are very close to Perl 5 syntax, which is a Very Good Thing. If you don't think Regular Expressions are worth the effort then I strongly recommend that you drop everything you're doing and head off to buy Jeffrey E.F. Friedl's "Mastering Regular Expressions".

Anyway this all now means that we have a straightforward means of getting an indication of the degree of change between two images.

Part 3. (To Follow) Putting it all together and taking some Pictures.
Coming tomorrow: Gluing it all together and testing various image comparison techniques for sensitivity and performance.

* Edited to fix the code snippet formatting.

Saturday 11 August 2007

Google Kills Google Video

Yesterday's news that Google were terminating their Google Video service with extreme prejudice is very interesting. The fascinating part about this is that they have decided not only to stop selling DRM locked content but they are also shutting down the key management service* that enables current owners of Google DRM "protected" material to unlock the content. As of August 15th 2007 all Google DRM products will simply cease to work. Not very nice if you had spent any significant amount of cash on their so-called "Download To Own" content. In return they are giving affected users a $5 refund voucher on Google Checkout valid for 60 days. How cheap and nasty is that?

When Google initially introduced DRM protected content on Google video many people were skeptical about their motives. It was certainly a move that seemed at odds with the company's "Don't be Evil" motto. In the intervening period they never gave an impression that they really cared much for their own premium service and the purchase of YouTube was a clear sign of the direction they really wanted to go in terms of online video.

This latest move also comes at an odd time. There's a lot of focus on the negative side of DRM and former DRM devotees are breaking ranks (somewhat). Universal have just announced a 6 month trial period where they will sell DRM free premium content and Apple are apparently doing pretty well with their non-DRM protected iTunes tracks. Interestingly all of these efforts are for audio only but Google have now drawn significant attention to online video DRM.

So what is Google up to? Why are they not fully refunding their customers? They can't have sold all that much content so a full refund wouldn't cost much and would have headed off all of the criticism that has exploded over this. They must have been aware that this would generate howls of protest from the peanut gallery. These are very smart people and while they might sometimes launch products that are a bit over ambitious or simply pointless they are not going to actively do something that pisses people off without having a very good reason for doing it.

Call me nuts but I suspect that there is method to their madness. I think that they want to be sued over this and that what they are looking for is a way to get a legal precedent set whereby distributors of DRM protected content _must_ provide consumers with guaranteed access for life to content they purchase in good faith. Google's loss in this case would be trivial in financial terms but the serious impact of this would be felt by the large scale players in the online music and video distribution businesses - specifically Apple (for AACS DRM), Microsoft (WM* DRM) and Real (Helix). If I'm right then in the long term this could become a hugely important milestone in the story of the use of DRM in online content distribution.

* Ed Felten's Freedom To Tinker has a very good overview of how Google Video's DRM system worked, which explains how, once the service is shut down, all access to previously purchased content can be disabled more or less instantly.

Thursday 9 August 2007

Merial

I'd never heard of Merial until all the hoo-ha over Foot and Mouth in the UK, but it appears I'm a long term (and very satisfied) customer since I've been putting their Frontline anti-flea goop on my cats for years. Funny how you notice these things.


Tuesday 7 August 2007

More on Aspergers

Daithi maintains that the Baron-Cohen Asperger's test is rigged, and I thought he might be onto something as I also seemed to score outrageously high. Well, I know I'm not the most socially adept bloke on the planet, but I wouldn't have pegged myself as dysfunctional, at least not unless I'd been drinking excessively the night before.

However I do have some particular foibles that some would find a bit odd and could be used to back up an Aspergers (or related disorder) diagnosis if someone was attempting to get me tagged as being in some way defective.

I'll start with two :
I always wear my socks inside out as the seams drive me nuts.
I almost always shop for jeans in the same shop. I've done this for nearly 25 years and continued to do so even though I was living on a different continent for 10 years of that time.

How mad is that?

Friday 3 August 2007

The Dangers of Public Networks

So some of the jokers at Blackhat gave a very effective demo of why it is just plain stoopid to use public WiFi networks without turning the paranoia level up to max before you start. For their demo all they did was set up a willing victim and then do some basic network sniffing in order to feed a parser that extracted cookies from the captured data. That allowed them to snarf the target's Google account cookie and then use that to log in to the victim's Gmail account.

It surprises me (a lot) that this is being presented as something new, but hats off to Rob Graham and Errata Security for not only getting a gig at BlackHat 2007 for something that isn't really a hack, let alone something new, but also for getting the 'net media to distribute the news far and wide as an expose on how to "hack Web 2.0".

It's a welcome reminder though that you can't trust important data on any network and you should (at the very least) encrypt the part of any session that handles credential exchanges.

Aspergers

Pie Palace - Aspergers Test (AQ Score by Simon Baron-Cohen)

Daithi scored 29 - I tried hard to score low and came in at 37. Explains a lot that.