IEEE Globecom 2012: Management & Security Tech for Cloud Computing

I presented my paper, Risk Propagation of Security SLAs in the Cloud (M. Hale & R. Gamble), at the IEEE Globecom workshop on Management and Security Technologies for Cloud Computing. This work represents an initial step toward embedding risk into SLAs for organizational awareness and acceptance. More specifically, the paper establishes an algorithmic process for handling dynamic risk evaluations in the cloud. This novel risk evaluation and renegotiation algorithm handles cases where service providers alter their terms of service or can no longer meet their SLA-bound security parameters. In such events, the algorithm searches for alternative services, selects the lowest-risk, most compatible replacement, and then recalculates and propagates the updated risk valuations to all upstream services, back to the original requester.
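The selection-and-propagation step can be sketched roughly as follows. This is only an illustrative sketch of the idea, not the paper's exact algorithm: the names (selectReplacement, propagateRisk), the single-number risk scores, and the additive risk aggregation are all assumptions made for the example.

```typescript
// Illustrative sketch of SLA renegotiation: when a provider can no longer
// meet its SLA-bound security terms, pick the lowest-risk compatible
// replacement, then recompute the risk seen by each upstream service.

interface ServiceOffer {
  name: string;
  risk: number;        // risk valuation of this service's SLA terms (assumed scalar)
  compatible: boolean; // whether the offer meets the request's functional needs
}

// Choose the compatible alternative with the lowest risk, or null if none exists.
function selectReplacement(alternatives: ServiceOffer[]): ServiceOffer | null {
  const candidates = alternatives.filter(o => o.compatible);
  if (candidates.length === 0) return null;
  return candidates.reduce((best, o) => (o.risk < best.risk ? o : best));
}

// Propagate the replacement's risk back along the service chain to the
// original requester. Aggregate risk is modeled here (an assumption) as
// each hop's own risk plus the risk of everything downstream of it.
function propagateRisk(chainRisks: number[], replacementRisk: number): number[] {
  const updated: number[] = [];
  let downstream = replacementRisk;
  for (let i = chainRisks.length - 1; i >= 0; i--) {
    downstream = chainRisks[i] + downstream;
    updated[i] = downstream;
  }
  return updated;
}
```

The key property the sketch shows is that a single provider failure triggers one replacement search but a whole chain of risk updates, which is why the paper treats propagation as part of the algorithm rather than an afterthought.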

It sparked a series of discussions with other researchers in the field and was a great experience overall.


IEEE Services 2012: 8th World Congress on Services

Dr. Gamble presented three papers at the IEEE Int’l Conference on Web Services (ICWS), IEEE CLOUD 2012, and IEEE SERVICES 2012, all co-located in Honolulu, HI, where the new IEEE Cloud Initiative was launched. The three papers from SEAT are all part of research in making web services security-aware and building a calculus to verify their security compliance.

Specifically, my paper, SecAgreement: Advancing Security Risk Calculations in Cloud Services (M. Hale & R. Gamble), was presented in the Security and Privacy Engineering track at IEEE SERVICES 2012. This work focused on the question “How can cloud service providers’ SLAs be augmented to meet the security needs of organizational consumers?” Our approach extends WS-Agreement for SLA creation, negotiation, and formation to allow security risk to be understood as part of service level objectives and service description terms. The result, SecAgreement, embeds security requirements and expectations directly into SLAs. We presented a matchmaking algorithm capable of matching SLA requests against the SecAgreement-based SLA offers provided by cloud services, choosing the least-risk cloud service to fulfill the request.
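The matchmaking idea can be sketched in a few lines. To be clear, this is a toy illustration, not the paper's calculus: the names (matchOffer, SlaOffer), the flat term sets, and the scalar risk score are assumptions made for the example.

```typescript
// Illustrative sketch of SLA matchmaking: keep only offers whose security
// terms cover every term the request requires, then pick the lowest-risk
// offer among the viable ones.

interface SlaOffer {
  provider: string;
  terms: Set<string>; // security terms the provider commits to in its SLA
  risk: number;       // risk score derived from its service description terms
}

function matchOffer(requiredTerms: string[], offers: SlaOffer[]): SlaOffer | null {
  // An offer is viable only if it covers every required security term.
  const viable = offers.filter(o => requiredTerms.every(t => o.terms.has(t)));
  if (viable.length === 0) return null;
  // Among viable offers, take the one with the lowest risk score.
  return viable.reduce((best, o) => (o.risk < best.risk ? o : best));
}
```

The real contribution of the paper is how those risk scores are derived from the SLA terms themselves; the filter-then-minimize structure above is just the skeleton the scores plug into.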

The conferences were a great medium for idea exchange and collaboration, and we are looking forward to the next phase of research, which will incorporate the feedback we received from the presentations! We’ll be posting some information on each paper and conference shortly.

Been busy this summer

I realize I’ve been neglecting my blog for the past few months, but I have my reasons. Since I last posted, I have gotten married, gone on a honeymoon, and been busy with a full work and class load.

On the research front, Dr. Gamble and I have been working on a semantic model for denoting security requirements embedded in government standards documents. While we’ve been formalizing these security requirements for a while now, we are on the verge of a journal-quality paper. I can’t spill the beans quite yet, but I can say it should have a fairly large impact in the world of government non-compliance analysis.

On the home front, Erin and I have finally settled in. Since June, we’ve organized and arranged the house to fit all of her stuff and have been working both outside and in to improve its look and feel. Outside, we’ve torn up the grass and the root- and stump-filled flower beds. In their place we’ve put terracotta stones, a sprinkler system, fresh compost and dirt, weed fabric, and mulch. It really is looking quite nice, although we have one more section to finish before winter.

The weather is nice and the entropy of my life’s random variables seems relatively low right about now – which, after recent times, is quite nice. Good to be back!

ICSE 2011 Wrap up and future research work

After the long (16 hr) flight home from ICSE 2011, I see a full schedule of work, wedding planning, and house work. ICSE 2011 was a great first conference to attend. I gave two presentations, one at TEFSE and one at CSEE&T; the former covered our traceability paper and the latter our work on performance metrics. The experience at ICSE was about much more than presentations: I had the pleasure of hanging out with other great minds and networking with researchers from around the world.

While sitting in the Hilton Hawaiian bar and grill reading a book, I had the unexpected pleasure of an excellent conversation with some guys from NASA. I was reading the Foundation series by Isaac Asimov and enjoying a Malibu and pineapple when a group of three guys sat down nearby. They started talking about some obviously CS-related topics, like the latest on Oracle and Sun, and one of them noticed my book. He said, “That’s not John Grisham; are you here for ICSE?” Thus began a conversation with, as I found out over the course of the talk, someone who writes algorithms for the DSN (Deep Space Network), i.e., the network responsible for handling communication data from Voyager, Cassini, the Mars rovers, and more. Given that I’m intensely interested in these things, it was my pleasure to learn some of the nitty-gritty insider perspective.

This type of random encounter was, in my view, typical of my experience at ICSE. Everyone was friendly, and almost all had a passionate interest in their work. It was intellectually refreshing to take part in, and now I’m motivated to move forward with the next phase of the security research. Dr. Gamble and I have a renewed sense of direction and some good grounding, which I think will culminate in the next round of papers/reports for the AFOSR. This new initiative will be my work for the next two weeks, in addition to my last-minute wedding planning duties.

That’s it for now, stay tuned!

TEFSE’11 at the International Conference on Software Engineering

Today is the second day of the conference for me. Yesterday I participated in the workshop on secure systems. Now I’m switching gears into full SEREBRO mode. We kick off a three-presentation streak today with my paper “Analyzing the Role of Tags as Lightweight Traceability Links,” continue tomorrow with our paper on performance indicators at CSEE&T, and finish with a demonstration of SEREBRO to the entire conference on Friday.

Hawaii is beautiful outside, but I’m here 8-5 until at least Wed. I’ll post some lessons learned as I get some time when I get back. As for now, the keynote has begun – so I should go. Aloha!

Collision detection in Adobe Actionscript 3 (on a large scale)

It’s been a few weeks since I last worked on my VisGA, but I finally got some time this weekend. I was able to derive a relatively efficient collision detection scheme for path finding. Computational efficiency is a must with VisGA, as CPU time and memory allocation are the two largest concerns for wide distribution of a Flex/AS3 web app. The issue with ActionScript is not that it lacks methods for collision detection. Rather, collision detection on many objects (10-100), each of which must be tested against every non-deterministically generated path between point A and point B, can incur serious computational overhead with Adobe’s native collision testing methods.

For standard graphics objects, ActionScript provides two primary means of testing collisions.

First is .hitTestPoint, which lets you test a given graphics object, testSprite, against a point, testPoint, as in:

testSprite.hitTestPoint(testPoint.x, testPoint.y, true); //true enables pixel-accurate shape testing

My application needs to determine whether or not a line being drawn between two points (divided into N Manhattan segments) collides with an obstacle while trying to reach the destination point using non-deterministically selected, heuristic-driven path finding. Each segment must be tested as it is non-deterministically built to ensure that it does not collide with any previously placed obstacles. Using hitTestPoint would require P*M collision tests, where P is the number of pixels between the “from” and “to” points on the segment and M is the number of obstacles to compare against. For anything but trivial cases this balloons very quickly, even for a single segment, if P or M is large. Considering there are N segments and L lines, where L is the number of total start-end point pairs in the population, this is a combinatorially large problem, just for collision testing.

A much better option is .hitTestObject, which lets you compare a given sprite, mySprite (the segment in question), to another graphics object, testSprite, as in:

mySprite.hitTestObject(testSprite); //true if the two objects' bounding boxes intersect

Using hitTestObject, I would need to do, at minimum, N*L hit tests, and then further determine where exactly each collision is using boundary comparisons, so the total cost would be roughly 2*N*L tests. While this is better, it is still not ideal; in principle I would only need L computations with something like a hitTestALL (which unfortunately doesn’t exist and is very non-trivial to implement).

I tinkered with ways to combine obstacle sprites into a sort of master sprite, but this causes the master sprite’s boundary box to be the smallest box that can surround all of the component sprites, rather than an irregular boundary that includes only the component boundary box pixels. There is a workaround, defining a custom boundary box, but that introduces other, more difficult problems, such as determining where to route to when there is a collision (rectangular graphics objects provide a number of helpful reference points).
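The merged-sprite problem is easy to see with a small sketch. The Rect shape and unionBounds function below are my own illustrative names; the point is only that the union of axis-aligned bounding boxes is itself the smallest enclosing axis-aligned box, so disjoint obstacles get "filled in" with empty space that produces false collisions.

```typescript
// Illustrative sketch: merging sprites yields the smallest axis-aligned
// rectangle enclosing all component bounding boxes, not an irregular outline.

interface Rect { x: number; y: number; width: number; height: number; }

function unionBounds(rects: Rect[]): Rect {
  const minX = Math.min(...rects.map(r => r.x));
  const minY = Math.min(...rects.map(r => r.y));
  const maxX = Math.max(...rects.map(r => r.x + r.width));
  const maxY = Math.max(...rects.map(r => r.y + r.height));
  return { x: minX, y: minY, width: maxX - minX, height: maxY - minY };
}
```

For example, two 10x10 obstacles at (0,0) and (90,90) merge into a 100x100 bounding box, of which 98% is empty space that would nonetheless register as a hit against the master sprite's boundary.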

My ultimate thesis is that (a) Adobe should provide better collision detection for large numbers of items, and (b) the best way I’ve found to implement multiple-object collision testing and pathfinding is a careful interplay of: hitTestObject, iterated over all objects to be tested; boundary checking driven by heuristics (such as knowing where you are coming from and testing only those sides of objects that could lie along such a vector); and optimized pathfinding heuristics that reduce the “internal loops” that can occur with non-deterministic pathfinding techniques.

Below is the core collision testing method. It takes a “from” and “to” point defining a test line (drawn with the graphics API) and checks it against an array of obstacle items, returning all obstacles that collide with the line. Subsequent analysis of exactly where each collision occurs, and the logic for routing around it, is performed by the pathfinding algorithm:

protected function obstacle_collision(from:Point, to:Point):Array{
 //checks for collision of the line formed between the two points "from" and "to" with all obstacles

 var collision_array:Array = new Array();
 var test_line_sprite:Sprite = new Sprite();

 //draw the candidate segment into a sprite so it can be hit tested;
 //the sprite is assumed to share the obstacles' coordinate space
 test_line_sprite.graphics.lineStyle(1);
 test_line_sprite.graphics.moveTo(from.x, from.y);
 test_line_sprite.graphics.lineTo(to.x, to.y);

 //iterate through all obstacles and collect any that collide with the segment
 for(var i:Number = 0; i < obstacles.length; i++){
  if(test_line_sprite.hitTestObject(obstacles[i])){
   collision_array.push(obstacles[i]);
  }
 }
 return collision_array;
}

Hopefully this has clarified some nuanced issues regarding large-scale collision testing. In my research I couldn’t find a simpler, more efficient way to do this. If you come across something better, let me know; otherwise, feel free to reuse anything here for your purposes.


P.S. you can see the pathfinding in action in my latest version of VisGA available at:

Working on VisGA and next round of security research this week

On VisGA:

It has taken me a bit to get started again on non-house-related projects post-spring break. Last week I started working on my VisGA again. I’m currently working on an efficient and complete algorithm for collision detection and re-routing of “pipes” that cross through obstacles. So far I’ve come up with a good algorithm for handling disjoint obstacles, but it needs to be extended to route around merged obstacles (i.e., non-uniform obstacles composed of multiple squares). This is my main programming task for the next few days. Look for an updated VisGA version available via my utulsa web space in the next week or so.

Research Tasks:

On the other hand, I have a pile of papers and documents to look through to glean insights about the next step for the Security Calculus work. I’m currently looking at the formal methods, compliance, and security assurance literature. The past few days I’ve been delving into Common Criteria part 3 (CCpart3). This section of the CC seems much more distilled and evaluation-directed for compliance purposes than parts 1 and 2, which I looked at before for the SESS’11 paper.

As for the present, it’s back to reading…