Author Archives: Ruprict

About Ruprict

I am a nerd that is a Nerd Wannabe. I have more kids than should be allowed by law, a lovely wife, and a different sense of humor than most. I work in the field of GIS, where I am still trying to find myself on the map.

Apple’s Fruitless Security

For 20+ years I have relied on Jeff for friendship and technical advice. He is a solid nerd, rooted in things Oracle/Linux, with more knowledge of hardware and operating systems than any ten people I know. In other words, he is no moron. Recently, his iTunes account was hacked and upwards of $300 worth of apps and movies were purchased. Apple's response? Paraphrasing: "You need to cancel your credit card and take up disputing the charges with them. As for Apple, we'll be keeping that money. Oh, and once you get a new card, please remember to add it back to your disabled iTunes account." In other words, you just got ramrodded while we watched, did nothing, and profited by it.

Now, I am an Apple fan. I have an iPhone; I have a G5 (PowerPC-based, old school baby) that I use every day. I love Mac OS X, finding it superior to the Windows operating systems (although Windows 7 is pretty damn good), and have been on the cusp of buying a MacBook for a few months now. I bought my wife an iMac, and she loves it. In other words, I am not a Windows nerd bashing Apple. I was once a blind fanboy, encouraging everyone I knew to buy a Mac and get an iPhone. I would passionately debate why Apple products were superior to all comers, sometimes without the benefit of rational thought.

I am an Apple user, and this may be the last straw.

Once the honeymoon of my relationship with Apple products faded into history, I started noticing what Apple gives me as a proponent of their products. They don't trust me to change a battery or add storage. They force me to use a single application (iTunes) to activate and update my phone. Their products are outrageously more expensive than the competition's. A few times a year they hold a media circus to unveil crazy-expensive new hardware while their king talks down to me like I am expected to embrace whatever floats to the top of his mock turtleneck, even when it's underwhelming (copy/paste). Apple walls off their systems, keeping a who's who list of frameworks and products that are allowed inside the velvet ropes (e.g., the striking omission of Flash on the iPhone). They allow me to pay $99 for the right to develop for their mobile platform, but only if I use a language whose base feature set would have been laughed out of most late-1970s development shops. Oh, and I can pay another $99/year for a closed-off online e-mail/contacts/photo/file offering whose initial shininess fades rapidly under the light of actual use.

All this, and now an approach to online fraud protection that only an evil dictator could appreciate. Apple's software was hacked, my friend was affected, and they basically asked him to suck on it and come back for more. I have heard many a user/nerd pontificate on why Apple's user base pays a premium to be treated like dirt. I have wondered aloud why the governments of the UK and the US will drop the "Monopoly" moniker on Microsoft, but allow Apple to dominate and control the mobile market without a peep. You have to hand it to Apple: they have created the perfect spot for themselves.

I want to have faith in the masses. I want to hope for the day that users revolt and demand that Apple stop gouging our wallets and closing off their systems. I just don't see it coming. When I talk to other die-hard Apple users, they say that Apple should be allowed to control what is allowed on their devices and operating systems. These are the same people who would have held sit-ins to force Microsoft to allow more than one browser in Windows. The double standards are obvious and ubiquitous.

I've been told that I don't have to buy Apple products. I don't have to subject myself to the whims of black turtlenecks. This is true. My hope is now shifting to Microsoft and Google, two other behemoths that want my money. Here's hoping that they realize that my business is their privilege, that my information is worth protecting, and that my choice is still mine.


Unit Testing Objects Dependent on ArcGIS Server Javascript API

Recently, I created a custom Dojo dijit that contains an esri.Map from the ArcGIS Server Javascript API. The dijit, right now, creates the map on the fly, so it calls new esri.Map(…) and map.addLayer(new esri.layers.ArcGISTiledMapServiceLayer(url)), for example. This can cause heartburn when trying to unit test the operations that my custom dijit performs. I am going to run through the current (somewhat hackish) way I am unit testing it without having to create a full-blown HTML representation of the dijit.
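
For reference, the map wiring inside the dijit looks roughly like the sketch below. This is a hedged reconstruction, not the actual dijit source; the dijit name matches the one used in the spec later in this post, but the base classes, node id, and service URL are placeholders.

// Rough sketch of the dijit creating its own map -- the base classes, node id,
// and service URL are placeholders, not the real values.
dojo.declare("esi.dijits.VersionDiff", [dijit._Widget, dijit._Templated], {
  postCreate: function () {
    this.inherited(arguments);
    // The dijit news up the map and its layer directly; this hard dependency on
    // the esri namespace is what makes the unit tests awkward.
    this.childMap = new esri.Map(this.id + "_map");
    this.childMap.addLayer(new esri.layers.ArcGISTiledMapServiceLayer(
      "http://myserver/ArcGIS/rest/services/MyService/MapServer")); // placeholder URL
  }
});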

I’ll be using JSpec for this example, so you may want to swing over to that site and brush up on the syntax, which is pretty easy to grok, especially if you’ve done any BDD/spec type unit testing before.

The contrived, but relatively common, scenario for this post is:

  1. My custom dijit makes a call to the server to get information about some feature.
  2. The service returns JSON with the extent of the object in question.
  3. I want my map control to go to that extent.

Following the Arrange-Act-Assert pattern for our unit tests, the vast majority of the work here will be in the Arrange part.  I don’t want to pull in the entire AGS Javascript API for my unit tests.  It’s big and heavy and I am not testing it so I don’t want it.    Also, I don’t want to invoke the call to the server in this test.  Again, it’s not really what I am testing, and it slows the tests down.  I want to test that, if my operation gets the right JSON, it sets the extent on the map properly.

The Whole Test

Here is the whole JSpec file:

describe 'VersionDiff'

 before_each
   vd = new esi.dijits.VersionDiff();
 end
 describe 'updateImages()'
   it 'should change the extent of the child map'
     //Arrange
     esri={
       Map:function(){
         this.setExtent=function(obj){
           this.extent=obj;
         };
       },
       geometry:{
         Extent:function(xmin,ymin,xmax,ymax){
           var obj={};
           obj.xmin=xmin;
           obj.ymin=ymin;
           obj.xmax=xmax;
           obj.ymax=ymax;
           return obj;
         }
       }
     };
     vd.childMap = new esri.Map();
     var text = fixture("getFeatureVersionGraphics.txt")
     text = dojo.fromJson(text)
     //Act
     vd.updateMapToFeatureExtent(text)
     //Assert
     vd.childMap.extent.xmin.should.be 7660976.8567093275
     vd.childMap.extent.xmax.should.be 7661002.6869258471
     vd.childMap.extent.ymin.should.be 704520.0600393787
     vd.childMap.extent.ymax.should.be 704553.8080708608
   end
 end
end

//Arrange

The Arrange portion of the test stubs out the methods that will be called in the esri namespace. Since the goal of the test is to make sure my dijit changes the extent of the map, all I need to do is stub out the setExtent method on the Map. setExtent takes an Extent object as an argument, so I create that in my local, tiny esri namespace. Now I can set the property on my dijit using my stubbed-out map. Thanks to closures (ahem, global variables), the esri namespace I just created will be available inside my function under test. Closures are sexy, and I only know enough about them to be dangerous. Yay! I don't have to suck in all the API code for this little test. That fixture function is provided by JSpec, and basically pulls in a text file that has the JSON I want to use for my test. I created the fixture by saving the output of a call to my service, so now I don't have to invoke the service inside the unit test.
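
For the curious, here is roughly what the interesting part of the fixture looks like. The real getFeatureVersionGraphics.txt has more in it, and the exact shape of the JSON is an assumption on my part, but the extent values are the ones the assertions in the spec above check for.

{
  "extent": {
    "xmin": 7660976.8567093275,
    "ymin": 704520.0600393787,
    "xmax": 7661002.6869258471,
    "ymax": 704553.8080708608
  }
}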

//Act

This is the easy part.   Call the function under test, passing in our fixture.

//Assert

How do I know the extent was changed? When I created my tiny esri namespace, I made my esri.geometry.Extent() function return an object that has the same xmin/ymin/xmax/ymax properties as a real esri.geometry.Extent object. The setExtent() function on the stub map stores this object in an extent property. All I have to do is make sure the extent values match what was in my fixture.

I didn’t include the source to the operation being tested, because I don’t think it adds much to the point.  Suffice it to say that it calls setExtent() on the childMap property.
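
If it helps, the operation boils down to something like the sketch below. This is not the real source, and it assumes the parsed JSON carries an extent object shaped like the fixture sketch above; the only point is that setExtent() gets called with an esri.geometry.Extent built from that JSON.

// Hedged sketch of the method under test (part of the esi.dijits.VersionDiff
// declaration sketched earlier) -- it assumes the parsed JSON has an "extent"
// property, as in the fixture sketch.
updateMapToFeatureExtent: function (json) {
  var e = json.extent;
  this.childMap.setExtent(
    new esri.geometry.Extent(e.xmin, e.ymin, e.xmax, e.ymax));
}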

So What?

I realize this may not be the greatest or cleanest approach, but it is serving my needs nicely.  I am sure in my next refactor of the unit tests that I’ll find a new approach that makes me hate myself for this blog post.  As always, leave a comment if you have any insight or opinion.  Oh, and regardless of this test, you should really look into JSpec for javascript unit testing.  What I show here is barely the tip of what it offers.


Robotlegs and Cairngorm 3: Initial Impressions

Smackdown? (Well, not really...)

I have had Robotlegs on my radar screen for months now but just didn't have the time/brains to really check it out. I cut my Flex framework teeth on Cairngorm 2, which seems to be the framework non grata among the current well-known Flex frameworks (Swiz, PureMVC, Robotlegs, etc.). However, at Joel Hooks' suggestion, I took a look at Cairngorm 3 (henceforth referred to as CG3), enough to do a presentation on it for a recent nerd conference. The presentation includes a demo app that I wrote using Cairngorm 3, which (currently) uses Parsley as its IoC container. That's right, Cairngorm 3 presumes an IoC container but does not specify one. This means that you can adapt the CG3 guidelines to your IoC of choice. This is only one of the many ways CG3 is different from CG2, with others being:

  • No more locators, model or otherwise.  Your model can be injected by the IoC or handled via events.  For you Singleton haters, this is a biggie.
  • A Presentation Model approach is prescribed, meaning your MXML components are ignorant of everything except an injected presentation model. The view's events (clicks, drags, etc.) call functions on the presentation model inline. The presentation model then raises the business events. This allows simple unit testing of the view logic in the presentation models.
  • The Command Pattern is still in use in CG3, but commands are not mapped the same way.  For CG3, your commands are mapped to events by your IoC.  Parsley has a command pattern approach in the latest version that actually came from the CG3 beta.  This approach uses metadata (like [Command] and [CommandResult]) to tell Parsley which event to map.  Again, this results in highly testable command logic.
  • CG3 includes a few very nice peripheral libraries to handle common needs. Things like popups, navigation, validation, and the observer pattern are all covered by libraries that are not dependent on CG3 or anything else, really. Even if you don't intend to use CG3, it may be worth your while just to check out these SWCs.

All in all, CG3 is a breath of fresh air for someone who has been using CG2.  Cairngorm 2 is fine, and it’s certainly better than not using any framework, but the singletons always made me uneasy and, in some of our lazier moments, we ended up with hacks to get the job done.  I feel that CG3 really supports a test-driven approach and Parsley is very nice and well-documented.   It’s worth saying that I only know enough about Parsley to get it to work in the CG3-prescribed manner, and there seems to be much more to the framework.

Once I had a basic handle on CG3, Joel said he was interested in how CG3 could work with Robotlegs (henceforth referred to as RL).  Also, after my presentation, a couple of folks wandered up and mentioned RL as well.  So, when I got home, I ported the demo app to RL (it’s a branch of the github repo I link to above) so I could finally check it off my Nerd Bucket List.

First of all, Robotlegs is, like CG3, prescriptive in nature. It is there to help you wire up your dependencies and maintain loose coupling to create highly testable apps (right, all buzzwords out of the way in one sentence; excellent). Like CG3, it presumes an IoC but does not specify a particular one. The "default" implementation uses SwiftSuspenders, but it allows anyone to use another IoC if they feel the need. I have heard rumblings of a Parsley implementation of RL, which I'd like to see. Also, the default implementation is more MVCS-ish than the default CG3 implementation. What the hell does that mean? Well, MVC is a broad brush and can be applied to many architectures. In my opinion, the CG3-Parsley approach uses the Presentation Model as the view, the Parsley IoC/messaging framework as the controller, and injected value objects as the model. The RL approach uses view mediators and views for the view, which reverses the dependency from CG3. The RL controller is very similar, but commands are mapped explicitly in the context to their events, rather than via metadata. The model is also value objects, but it's handled differently. In CG3, the model is injected into commands and presentation models, then bound to the view. So, if a command changes a model object, it's reflected in the presentation model, and the view is bound to that attribute on the presentation model. In RL, the command raises an event to signify model changes, passing the changes as the event payload. The view mediator listens for the event and handles the event payload to update the view, again through data binding. (NOTE: You can handle the model this way in Parsley as well, using [MessageHandler] metadata tags, FYI.)

It’s worth mentioning that when I did the RL port, I added a twist by using the very impressive as3-signals library from Robert Penner.  Signals is an alternative to event handling in Flex, and I really like it.  Check it out.  Anyway, RL and Signals play very well together, but it means I wasn’t necessarily comparing apples-to-apples.  Signals is not a requirement of using RL, at all, but the Twittersphere was raving about it and I can see why.  The biggest con to using Signals with CG3 might be some of the peripheral CG3 libraries.  For example, I think you’d end up writing more code to adapt things like the Navigation Library to Signals.  The Navigation Library uses NavigationEvent to navigate between views, which would need to be adapted to Signals.  Of course, I am of the opinion that, if you are going to use something like Signals, you should use it for ALL events and not mix the two eventing approaches.  This is a philosophical issue that hasn’t had the chance (in my case) to be tested by pragmatism.

So, which framework do I like better? After a single demo application, it's hard to make a firm choice. I really like the CG3 approach, and it's only in beta. However, I also really like the Signals and RL integration, which I think makes eventing in Flex much easier to code and test. I am not that big a fan of the SwiftSuspenders IoC context, as there doesn't seem to be any way to change parameters at runtime, which is something I use IoC to handle. An example is a service URL that differs between test and production; I'd like to be able to change that in an XML file without having to rebuild my SWF. I asked about this on the Robotlegs forum and was told that it's a roll-your-own answer. On the other hand, Parsley offers the ability to create the context in MXML, ActionScript, XML, or any mixture of the three. I like that. I think the winner could be a Parsley-RL-Signals implementation pulling in the peripheral CG3 libraries, which mixes everything I like into one delicious approach. MMMMMM.

Anyone have questions about the two frameworks that I didn’t cover?  Hit the comments.  Also, if anything I have said/presumed/implied here is flat out wrong, please correct me.  The last thing I want to do is lead people (including myself) down the wrong path.


2010 ESRI Dev Summit Wrap Up

Back in Charlotte after another lively ESRI Developers Summit. I went back and read my impressions from last year and have to say that they were hit and miss. You could replace last year's mentions of 9.4 with mentions of 10 (the new and improved 9.4) and they would at least partially apply. New stuff this year to get your inner (and outer) GIS nerd in a frenzy:

  • Editing from the web. The new FeatureLayer in the REST API (and, thus, the various web APIs) is the big deal. Simple editing of GIS data from the web. In my oft-hyperbolic opinion, this is a game changer.
  • Attachments support in the REST API. I have mixed feelings about this, as it seems that ESRI might be trying to make the geodatabase the everything-base, but I guess attachments are just another kind of data. I can see cases where we'd use this, but I plan to be very careful…
  • Scriptable REST admin (my sessions were almost all either REST or Flex or both), which could be very useful.
  • REST-enabled Server Object Extensions (SOE) look very promising as well.
  • The Flex API has AMF support at 10.  Truthfully, I’ve not done much with AMF in Flex, but I understand it’s superdy-duperdy fast.  That’s on the immediate todo list.
  • Also on the Flex side, although not ESRI specific, is the release of Flex 4. I went to a couple of sessions where they demoed Flash Catalyst, Flash Builder, and the new workflow. I finally understand what Catalyst is, which is a good thing. Depending on its cost, we may or may not use it.
  • Various small bits, like point clustering being supported by the GraphicsLayer, complete with a cool “flare-out” symbol.

This year was the second in which users were allowed to present. I presented on Cairngorm 3 and best practices in Flex. I thought the presentation went as well as I could have hoped. The slide deck and code are available, and you can find all that information here. The app I used basically allows the user to draw a polygon around NFL stadiums in the US and then click on the selected stadiums to see a pop-up with an aerial view of that stadium. Cairngorm 3 and Parsley made it very easy to create the app, and the amount of code I had to write is shockingly small.

I will say I was surprised at the ratio of Flex to Silverlight developers this year. Last year, I wrote about Silverlight being the Queen of the Ball, with most developers I knew going to all the SL sessions. The buzz was much bigger about SL then, which made for a 180-degree turnaround this year. All of the Flex sessions seemed to be packed, and the buzz was Flex-heavy. I didn't actually go to any SL sessions, but I heard more than one developer say that the sessions seemed less full than last year. Maybe it's an alternating-year thing or something. Or maybe the release of Flex 4 on the Monday of the summit had something to do with it. If you are a Silverlight developer, please bear in mind that I don't really care whether Flex or SL has more "buzz" or attendees; I just find the dynamic between the two camps and their respective APIs mildly interesting.

For more info, you can go to ESRI's Dev Summit site and watch the plenary videos as well as all the tech sessions. The user sessions aren't posted yet, but they're coming. If I think about it, I'll post a link to mine when it comes online. Oh, and if you check out the #devsummit tag on Twitter, you'll see a mountain of info and links for your perusing pleasure.

It was a great conference, as always.  Already looking forward to next year.


2010 ESRI Developers Summit

So, I'm off to the ESRI Dev Summit next week to meet and learn from a legend (the official unit of measure of geonerds) of geonerds. I will be giving a user presentation on using Cairngorm 3 to create testable applications with ArcGIS Server. The presentation is all but done, and I'll have links to the slides as well as the source for the demo app once the conference is over. I am very interested in some of the other user presentations, which run the gamut of what can be done with ArcGIS and a bit of nerd elbow grease. I'll definitely be attending the Ruby/Rails-based user presentations, as well as some of the other Flex- and Javascript-based presos. Just like last year, I'll likely not go to any Silverlight presentations, simply because we are not currently using Silverlight.

If you’re going to be in Palm Springs this year and want to have a pint or ten, hit me on twitter (@ruprictGeek).  For what I do, the ESRI Developers Summit is far-and-away the most relevant and important conference, so the more geonerds I can meet, the better.

Hope to see you there!


ArcGIS Javascript API Tasks and the Back Button

Today I was working on an issue with a SharePoint (I know, I know) web part that we created to host a map. The web part uses the ESRI ArcGIS Javascript API to dynamically load some graphics and layers when the page is loaded. The structure of the SharePoint pages is hierarchical, so the user would select a property, then a site on that property, then an agreement on that site. Each of these pages contained a map that zoomed to the relevant area and loaded the relevant graphics/layers. This hierarchy leads the users to hit the browser Back button quite often. Go to a property, then a site, back to the property, to another site, then an agreement, back to the site… you get the idea.

Unfortunately, when the user hit the Back button in that piece of crap Internet Explorer, the queries would not fire. In all other browsers, the queries work fine when the Back button is clicked, but not in IE. As you probably already know, if an enterprise is using SharePoint, then they are using IE as their standard browser. This issue was causing a ton of grief.

I tried many different approaches, from forcing HTTP headers (Pragma: no-cache or Expires: -1) to trying to reload the page if the history.next object was defined. What finally worked was slightly changing the URL that the task used so that IE wouldn't use its client-side cache. I generated a random number in javascript and appended it to the URL. The code is below.


var find = new ESRI.ArcGIS.VE.Query();
find.Where = "ATTRIBUTE='" + this.value.toUpperCase() + "'";
find.OutFields = ["ATTRIBUTE,OTHER_ATTRIBUTE"];

var findTask = new ESRI.ArcGIS.VE.QueryTask();
//Have to append a random number to get this to work when back button is pressed
findTask.Url = th.Url + "/3?_esi=" + Math.floor(Math.random() * 111);
findTask.Execute(find, dojo.hitch(this, this.handleResults));

The "_esi=" is just a key I put in so I'd know it was mine, and it is likely unnecessary. This may not be the best solution (I bet Vish will skewer it), but it works. Anyway, this took a few hours away from my life, so I thought I'd post it here. Hope it helps someone else.
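
If you end up doing this for more than one task, a small helper keeps the URL mangling in one place. This is just a sketch of the same trick, not part of any API; the "_esi" key remains as arbitrary here as it is above.

//Sketch of a reusable cache-buster for IE -- same trick as above, just wrapped up.
//The "_esi" key is arbitrary; it only exists so I can spot my own parameter.
function bustCache(url) {
  var sep = url.indexOf("?") === -1 ? "?" : "&";
  return url + sep + "_esi=" + Math.floor(Math.random() * 111);
}

findTask.Url = bustCache(th.Url + "/3");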


Continuous Integration with Flex, Hudson, and ArcGIS Server-Part V

(Part 1, Part 2, Part 3, Part 4)

In what I hope is the last post in this particular series, we will get Hudson up, building our project and running our tests. Let’s get to it.

Get Hudson

Download Hudson from here. This will download hudson.war, which is a Java web archive. Oh, and I presume that you have a Java SDK installed somewhere. If you don't, go to java.com and get one.

Start Hudson

Hudson is super easy to get up and running. I suggest you copy the hudson.war file to c:\hudson and open a command prompt to that directory. Type:

java -jar hudson.war

This will start the Hudson service on port 8080. If you have anything already running on this port (I did), you need to shut it down or pass the --httpPort= option to the command with the port you wish to use.

Talk to Hudson

Open up a browser to http://localhost:8080 and you’ll see this:
Hudson Dashboard

We need to create a job for our Flex project. Click New Job, put in a job name (I am using "AGSFlexBuild"), select "Build a free-style software project", and click OK. This will bring up the main configuration of the build. The first item we will set up is the Subversion repository. The high-level process that Hudson will follow is:

  1. Poll Subversion for changes
  2. When a change is detected, run an SVN update to get the latest version
  3. Build the project
  4. On a successful build, run the tests

So, the first thing we should do is set up our Subversion repository information.

Hudson SVN

Put your repository URL in as shown in the above image. In the “Build Triggers” section select “Poll SCM” and put “* * * * *” in the “Schedule” text box. This will cause Hudson to poll SVN every minute.

Now we have to add the build step. Since we have already set up our ant build script, this part is easy. Click “Add Build Step” and select “Invoke Ant”, then fill out the text box as shown here:
Hudson Build Step

At this point if we ran the build from Hudson, we’d get an error complaining about ${deploy_dir}. The reason for this is we don’t have a local.properties file on the server (remember when we did that?). There are several possible solutions to this issue.

  • Create a ci.properties file, check it in to SVN, and pass a parameter to the build step in Hudson.
  • Create an ant task that copies the local.properties.template file to local.properties, then add a build step in Hudson that runs before the build-flex-project step.
  • Default the properties in the build.xml file.

The last option is the ickiest, but the first two are about the same, in my opinion. I took the second option, creating an ant task and adding it to my Hudson configuration.
Hudson Build Step 2
I leave the creation of the ant task as an exercise for the reader.

The final build step to be configured will run the tests. Add another "Invoke Ant" build step that invokes the "test" ant task. At the bottom of the page, click the "Save" button. This will take you to the AGSFlexBuild project page.

Run the build

Click “Build Now” on the left hand side of the page and the build will be scheduled. The build should run successfully. Click on the hyperlink with the build date and time, which will take you to the details of the build. If you click on “Console Output”, the entire build log is displayed. When things go awry in the build, you’ll likely go here first.

A continuous integration process is incomplete without some kind of notification when the build fails. There are a couple of options for notification: e-mail, CCTray, or Twitter, just to name a few. E-mail is available in the base Hudson package (see Manage Hudson, Configure System, and put in your SMTP information). CCTray allegedly works, but I don't use it and can't find it. Twitter is available as a Hudson plugin.

Speaking of plugins, there are TONS of them: Twitter, Git, Windows Authentication, etc. You can find them in the Hudson configuration pages of the site. I recommend you look through them and find what you need. Finally, it's super easy to get Hudson running as a Windows service, as there is an "Install as Windows Service" option in the Manage Hudson configuration area.

It’s difficult to find a stopping place when discussing Hudson, so if you have questions, send me an e-mail or leave a comment. The code for this series can be found on github (you could fork it and test using Hudson with git, just for fun…. ;)) I hope someone found this series useful.

