Monday 3 August 2015

SOA/MSA design - Isolating contract dependencies

The contract is a crucial part of any service's anatomy. In a SOA/MSA approach, contracts expose the service behaviour that consumers will rely on. This behaviour is usually exposed as a series of interfaces and input/output structures and implemented by internal components, which are hidden from external consumers. The structure just described may look like this:


In this example, the modules are split apart, there is only one dependency flow (internal modules don't know about external ones), each module has its own set of dependencies, and consumers rely on abstractions (the contract). So far, so good.
When consuming services through REST or SOAP, consumers need to serialise and deserialise the structures the service exposes. That means they need to hold a local copy of these structures to get the job done. Now let's say you want to avoid replicating these structures across all consumers inside your company. Since the services are built following the structure illustrated before, the contract module could be split out and shared with the consumers. In this scenario you may get rid of the structure replication issue, but you will run into another one. When shipping the contract module as is, consumers will also depend on the same libraries the contract depends on, otherwise they won't be able to use it. What happens when the service moves to a library-A version that isn't compatible with the previous ones? A scenario like this:



If consumers don't upgrade to the same library-A version as the service, they will break. The more consumers you have on this model, the more synchronisation between them is needed to deploy. The more services you have, the worse it gets. The agility expected from this architectural approach becomes harder to achieve.
One possible solution is to accept that consumers will hold local copies and just deal with it. Accepting some extra work on their side is still better than the coupling scenario described before.
You can also design your service keeping the contract module with as few dependencies as possible. It may look like this:



This design guideline increases the service's flexibility when sharing the contract as a library. Consumers would be affected only by contract behaviour changes, and even then there are techniques to mitigate these effects. It is also an example of how succeeding with SOA/MSA approaches also depends on good design and architecture choices.
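
As an illustration, a contract module designed in this spirit could expose nothing but plain interfaces and data structures, with no library types leaking through. The names below are hypothetical and depend only on the standard library:

// Hypothetical contract module: only plain interfaces and data structures,
// so consumers don't inherit library-A (or any other internal) dependencies.
case class OrderRequest(customerId: String, items: Seq[String])
case class OrderConfirmation(orderId: String, accepted: Boolean)

trait OrderService {
  def placeOrder(request: OrderRequest): OrderConfirmation
}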









Sunday 28 June 2015

AWS Lambda - Automating function deploys

In this post, I talked about the ideas behind the AWS Lambda computation service and how it works. The example presented shows how a function can be deployed and used. Even with a working example, there is an issue with the way I'm using it: all the steps in the deploy process are manual, which goes against agility. Manual deploys like that are error prone, and the more complex the application gets, the more expensive it becomes to maintain. The side effects of keeping manual deploy steps are endless, so there should be a way to automate them and make AWS Lambda really as cost effective as it promises to be.
Kappa seems to fill this gap. It is a command line tool that greatly simplifies the process of deploying lambdas to the cloud. All the steps described in the post mentioned above can be automated. Now we're talking!

Setup

Before starting, make sure you have Python (2.7.x) and pip available on the command line.
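
A quick way to check, assuming both are already on your PATH:

python --version   # should print something like Python 2.7.x
pip --version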

Installing kappa: 

I strongly advise building it from source, since there are important bugs that seem to have been fixed only recently:

git clone https://github.com/garnaat/kappa.git
cd kappa
pip install -r requirements.txt
python setup.py install

Installing awscli:


sudo pip install awscli


Configuration:

The first thing to do is create the kappa configuration file (config.yml). This is where I tell it how to deploy my lambda function.
---
profile: my-default-profile
region: us-west-2
iam:
  policy:
    name: AWSLambdaExecuteRole
  role:
    name: lambda_s3_exec_role
lambda:
  name: myLambdaFuncId
  zipfile_name: integrator.zip
  description: Something that helps describe your lambda function
  path: src/
  handler: Integrator.handler
  runtime: nodejs
  memory_size: 128
  timeout: 3
  mode: event
  test_data: input.json
  event_sources:
    -
      arn: arn:aws:s3:::[set your bucket name]
      events:
        - s3:ObjectCreated:*


Let's see what is going on:

Line 2: the profile kappa will use to authenticate itself on Amazon and create the function on my behalf. We'll see it later in the awscli configuration;
Line 4: the IAM policy and role assigned to this lambda function. If they aren't there yet, kappa will create them for me;
Lines 9 - 18: the function's runtime configuration;
Line 19: the file containing an example request used to test the function. It is useful when we want to make sure everything works fine after the deploy is over (a sample is sketched below);
Line 20: where events come from. In this case, any change in the given bucket will trigger a call to my function.
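
For reference, a minimal sketch of what input.json could contain, mimicking the S3 ObjectCreated event the function expects (a real event carries many more fields; the bucket and key values here are placeholders):

{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "[set your bucket name]" },
        "object": { "key": "some-test-file.json" }
      }
    }
  ]
}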

Now it's time to configure the aws-cli. The only configuration needed is the security profile, which kappa will use as stated before:

Create the following file in case it isn't already there (~/.aws/credentials) and put in the following content:
[my-default-profile]
aws_access_key_id=[YOUR KEY ID ]
aws_secret_access_key=[YOUR ACCESS KEY ]


With that set, it's time to deploy the function using kappa tasks:
kappa config.yml create
kappa config.yml add_event_source
kappa config.yml invoke
kappa config.yml status


That should be enough to see the function deployed in the AWS console. The previous commands, in order:

  • create the function on Amazon;
  • make it listen to changes on the given bucket;
  • test the deployed function using fake data (simulating an event);
  • check the status of the deployed function on Amazon.


Since kappa lets me automate all the deploy tasks, I'm able to create a smarter deploy process. I worked on an example of how it could be done here. I may have forgotten to mention some detail about getting it to work, so in that case leave me a message and I'll be glad to help.
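
Since the deploy is now just a sequence of commands, a minimal script can chain the kappa tasks and stop on the first failure. This is only a sketch; the file name and error handling are my own choices, not something kappa provides:

#!/bin/sh
# deploy.sh - chains the kappa tasks shown above and stops on the first failing step.
set -e

kappa config.yml create            # create the function on Amazon
kappa config.yml add_event_source  # wire the S3 bucket events to the function
kappa config.yml invoke            # smoke test using input.json
kappa config.yml status            # show the deployed function status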



Sunday 7 June 2015

AWS Lambda - Computation as a Service

Less than one year ago, Amazon launched a new computation service, AWS Lambda. It promises to simplify the process of building applications by hosting and running code for you. All the infrastructure, along with scalability and failover aspects, becomes Amazon's concern. It also integrates pretty well with other Amazon services like SQS, SNS, DynamoDB, S3, etc. The code hosted there can even be called externally by other applications using the aws-sdk.

Here, I'll show how to use this service by doing something very simple. The idea is to implement some code that listens to an event (PUT) on a given S3 bucket, applies some processing to the file content, and sends it to an SQS queue.

This service restricts the language and platform the code is implemented in. A NodeJS module needs to be exported so it can be called after being deployed into the Amazon infrastructure. So, if you are not familiar with Javascript and NodeJS, I would advise you to step back and look at some documentation first.

var AWS = require('aws-sdk');
var parser = require('xml2js').parseString;
var async = require('async');

var s3 = new AWS.S3();
var sqs = new AWS.SQS();

exports.handler =  function(event, context) {
 var bucketName = event.Records[0].s3.bucket.name;
 var fileName = event.Records[0].s3.object.key;

 async.waterfall([
  function download(next) {
   s3.getObject({Bucket: bucketName,  Key: fileName}, function (err, data) {
    next(err, data);
   })
  },
  function parseXml(response, next) {
   parser(response.Body.toString(), function(err, result) {
    next(err, result);
   })
  },
  function sendMessage(result, next) {
   var message = {
    MessageBody: JSON.stringify(result),
    QueueUrl: "[YOUR QUEUE URL: i.e: https://....]"
   };
   
   sqs.sendMessage(message, function(err, data) {
      if(err) {
         context.fail("Error: " + err);
       } else {
         context.succeed("Message sent succefully: " + data.MessageId);
       }
       context.done();
   });
  }
 ], function(err) {
  if (err) {
   context.fail("Error: " + err);
   throw err;
  }
 });

}

Let's see what is happening here:

  • Lines 1 to 3: importing the modules needed by the implementation. All these modules need to be packed when deploying the application, except for the aws-sdk, which is available by default at runtime.
  • Lines 9 and 10: getting information from the event. When listening to an event from an S3 bucket, what you receive is the event metadata. So, if you want to do something with the object that was uploaded, you need to extract the bucket and key from the metadata, fetch the uploaded object and then do something with its content.
  • Line 12: the code from this point on is a series of callbacks that depend on each other's results. To avoid the callback-hell scenario, I used an external lib (async) that makes the dependencies between these functions a bit clearer to read.
To make sure everything is ok before deploying it, go to the console and run "npm install". It should resolve all the code dependencies and put them into the node_modules directory.
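
For reference, a minimal package.json along these lines would be enough for "npm install" to fetch the two external modules the code uses; the name, description and loose version ranges below are illustrative guesses, not taken from the original project:

{
  "name": "integrator",
  "version": "1.0.0",
  "description": "S3 to SQS integration function",
  "dependencies": {
    "xml2js": "*",
    "async": "*"
  }
}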

Now it's time to set it up on the Amazon infrastructure. The AWS Lambda service lets you upload the code and its dependencies inside a zip file. When using this option, be careful when creating the zip: the Javascript file that contains the code shown before needs to be at the root ("/") of the zip file, otherwise it won't work. Worse than that, when running the code, the error message shown on the console won't point in this direction.
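
For instance, assuming the handler code lives in a file called Integrator.js (the name used in the automation post above; adjust to your own file), the zip could be built from the project root like this:

# Run from the project root so Integrator.js sits at "/" inside the zip.
zip -r integrator.zip Integrator.js node_modules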

Once you have your code properly packed, go to the AWS console, access the "Lambda" option and ask to create a new function. The presented screen should look like this:



There, I'm filling in basic information about what I'm uploading. The most relevant pieces are the Handler (Javascript file + the exported function to be called) and the Memory and Timeout values (Amazon will use this information for billing). There is also the execution role: if you don't have one yet, create it using the options available in the combo box. Once you finish this step, the module is ready to be called. The last step is to go to the bucket I'm interested in monitoring and, by changing the bucket properties, trigger this function every time a new event happens.

An additional and reasonable step is to test the deployed function to be sure everything is ok. To do that, go to the console where all the functions are listed, select the function you want to test, press "Actions" and select "Edit/Test". A new screen will be presented. On the left side there is a section called "Sample Events", which simulates some real use cases. To test this function, pick the "S3 Put" option and adapt the event, setting valid bucket and file names.
If everything went fine, you should be able to see a message like this in the Execution result area:

"Message sent successfully: 9431465....."

Some additional billing information should also be displayed, and that is it. From this point on you can be sure the function is properly deployed and ready to be used.
The working example can be found here.

Sunday 10 May 2015

DevOps Culture - Why Are Silos an Issue?

DevOps is becoming a hot subject nowadays. There are some nice materials here and here explaining the principles behind it. At this point, we know that DevOps isn't only about automation tools and frameworks; it's about a different way of thinking about IT. One of these principles is culture.
This is actually an essential dimension when talking about changing the way companies build software. They can pick the best automation tools available or hire the best consultants in the market to build their architecture, but if the cultural aspects aren't improved, they may not ship software the way they expect. Today I'm gonna talk about one of these cultural aspects, the silos.


The Traditional Division


It's very common to see companies splitting up teams by their skills. A traditional division that follows this idea would look like this:







In this model, software creation is supported by a series of specialised teams. These teams will "somehow" talk to each other in order to build a solution. Using this structure, companies can have specialists in each area leading people and ensuring issues/solutions are addressed properly. The structure can change a bit depending on the case. There are cases where companies nominate people to supervise (manage) the whole pipeline; they can be responsible for making everybody talk, ensuring environments are delivered on schedule, issues are addressed by the right people/groups, and so on.
Software has been delivered on top of this model for years. But we're living in a different era now, and there are some scenarios this model does not handle well.

The Software Complexity


The problems we need to solve with software today are different from the ones we had 10 years ago. Let's see some examples that illustrate this:

  • The data volume systems need to handle is way bigger than before. Architectural decisions need to be made in order to build systems that meet business expectations. Such decisions can even change the way the business sells its products to its consumers.
  • The decision of whether or not to move to the cloud is today a strategic business decision, not one made by some geek in the ops area.
  • Solutions need to support traffic growth within seconds. Systems need to be elastic, but architectural decisions can make that impossible to achieve. Ops and architecture teams need to be on the same page before taking any architectural decision.
  • How do you test a solution with the characteristics highlighted before? The QA team will need special skills.

Teams need to be on the same page from the beginning here. Even small decisions can affect everybody and make the release process slower, release after release, as the application grows. What happens with the traditional model is that people tend to ignore the software complexity as a whole and focus on the area where they are experts. People can work around this issue and achieve the necessary coordination even in the traditional model, but it will demand more effort from everybody involved.
Complexity is handled well when people understand the system as a whole. Adding barriers between teams does not help achieve this goal.

The Barriers Between Teams


When workstreams are used as silos, the development cycle is oversimplified. Software is treated as a simple package that can be sent here and there. The issue is that the whole software complexity described before is ignored. As the complexity grows, releases and maintenance start to take more time and become more expensive. When dealing with complex software, communication, alignment and proper architectural understanding are essential to support the whole process. The barriers created by oversimplification won't help in this scenario. Companies tend to formalise the communication between teams in order to reduce misunderstandings, but they often just add more noise. The idea behind "breaking silos" is to remove the barriers so people can collaborate and understand the system as a whole.

The more barriers are created between business and production, the less agile the development cycle will be. It does not matter how many experts are in each silo or how many automation tools they implement; the barriers between them will always be something to improve. By removing silos, people are able to see the system as a whole rather than only their "working area". The real benefits come when the system is optimised as a whole instead of in specific parts.

Saturday 2 May 2015

Apache Camel - Testing integrations

As systems become more complex, test automation turns into a mandatory software engineering discipline. This is especially true when talking about integrations between systems. In such scenarios, there are some challenges:
  • Hard to isolate components - to test a single component, calls to other components may need to be made in order to make the test possible, which makes the test process expensive;
  • How to simulate system failures - how do you test the implementation's behaviour when one of the systems being integrated fails? Simulating this scenario is complicated, or at least very time consuming, because we need to control components that we usually can't;
  • Slow build - calls to external systems can be slow (slow connections, external system unavailability, etc). If your tests are built calling external systems, the build will get slower over time as your test coverage grows.
At the end of the day, these are things we need to work around, because the tests need to be written anyway. The good news is that it is possible to achieve a decent level of test coverage even in such scenarios. Using some DI techniques and a couple of frameworks, I'm gonna make it happen.
I'm gonna use my last post as a base. Actually, I'll keep it as it is and create a different implementation, applying the modifications that make it testable.

What is worth to test?


Looking at the implementation as it is, it would be desirable to test whether the routing logic works as we expect in both cases (when it finishes successfully and when there is an error). To achieve that, I don't necessarily need to rely on Amazon S3 or any other external component. I can "mock" them and test the routing logic in isolation.

How to do that?


In order not to depend on Amazon S3 in my tests, I need to somehow "replace" it, only during the tests, with "mocked" endpoints. By doing that, I'll be able to isolate what needs to be tested (the routing logic), control the input data and simulate the behaviour I want.
The first thing to do is to remove the hard-coded references to S3 by externalising them. The code will look as follows:

 @Component
class FileRouter extends RouteBuilder {
  @Autowired val sourceJsonIn: String = null
  @Autowired val targetJsonOut: String = null
  @Autowired var targetJsonError: String = null

  override def configure(): Unit = {
    from(sourceJsonIn).
      to("bean:fileProcessor").
      choice().
        when(header("status").isEqualTo("ok")).
          to(targetJsonOut).
        otherwise().
          to(targetJsonError)
  }

}

Here I'm using Camel's Spring support for Java config. The FileRouter class receives the endpoints from the Spring context at runtime. In fact, these endpoints are now Spring beans, defined as follows:

trait S3Beans {
  @Bean
  def sourceJsonIn = "aws-s3://json-in?amazonS3Client=#client&maxMessagesPerPoll=15&delay=3000&region=sa-east-1"
  @Bean
  def targetJsonOut = "aws-s3://json-out?amazonS3Client=#client&region=sa-east-1"
  @Bean
  def targetJsonError = "aws-s3://json-error?amazonS3Client=#client&region=sa-east-1"
  @Bean
  def client = new AmazonS3Client(new BasicAWSCredentials("[use your credentials]", "[use your credentials]"))

}

There is also the FileProcessor, which handles the file content. It is also defined as a Spring bean:

trait NoThirdPartBeans {
  @Bean def fileProcessor() = new FileProcessor
}

The S3 endpoints, FileProcessor and FileRouter classes are now ready to be added to the Spring context. Since we are using Camel's Spring support, they will be available in the Camel context as well. It's done as follows:

@Configuration
@ComponentScan(Array("com.example"))
class MyApplicationContext extends CamelConfiguration with S3Beans with NoThirdPartBeans {}


Now the implementation is ready to be tested. In order to achieve the behaviour I want, I need to replace all the endpoints defined in the S3Beans class with "mocked" endpoints. By doing that, I'll be able to "control" the external dependencies and simulate different scenarios. To do that, I'll create a different "test context", replacing only the beans I need to mock.

@Configuration
class TestApplicationContext extends SingleRouteCamelConfiguration with NoThirdPartBeans {
  @Bean override def route() = new FileRouter
  @Bean def sourceJsonIn = "direct:in"
  @Bean def targetJsonOut = "mock:success"
  @Bean def targetJsonError = "mock:error"
}

Direct is a Camel component that works as an in-memory queue; it replaces the S3 bucket the files come from. Mock is another Camel component that we can assert against at runtime; the mock endpoints replace the output S3 buckets, so I can check whether they receive messages or not.
Now it's time to create the test class. It uses the "test context" I just created to validate the different scenarios. It's done as follows:

@RunWith(classOf[CamelSpringJUnit4ClassRunner])
@ContextConfiguration(classes = Array(classOf[TestApplicationContext]))
class SimpleTest {
  @EndpointInject(uri =  "mock:success")
  val mockSuccess:MockEndpoint = null

  @EndpointInject(uri =  "mock:error")
  val mockError:MockEndpoint = null

  @Produce(uri = "direct:in")
  val template:ProducerTemplate = null

  @Test
  def shouldHitTheSuccessEndpoint(): Unit = {
    val fileContent =  IOUtils.toString(getClass.getResourceAsStream("/json-file.json"))
    template.sendBody(fileContent)
    mockError.expectedMessageCount(0)
    mockSuccess.message(0).body().convertToString().isEqualTo(fileContent)
    mockSuccess.assertIsSatisfied()
    mockError.assertIsSatisfied()
  }

}

Conclusion

Real-world integrations can be much more complex than the example I used, but they still need to be tested somehow. Systems like these become unmaintainable very quickly without tests. The test approach I used fits well in cases where there is routing logic between the components being integrated.
It is also important to notice how the Spring API made the implementation simpler and testable. As it was implemented before (without Spring or any DI technique), it would be very hard to achieve this result.
The working example can be found here.

Tuesday 21 April 2015

Apache Camel - AWS S3 Integration - Moving files between buckets

Apache Camel is a great tool for integrating heterogeneous systems; I'd say it's the best one I've seen. Using its comprehensive DSL, you can implement several integration patterns very easily. It also offers well modularised APIs, ready to set up and use, and an extensive, active community. In this post, I'll talk about one of these APIs, the camel-aws S3 component.
AWS S3 is a very popular storage service offered by Amazon, and the camel aws-s3 component is an API that lets you manage files stored there. It is different from the camel-file component, whose idea is to deal with files stored on a given file system.
To use it, the first thing we need to do is add camel-aws as a dependency in our project. Here I'm using SBT, but it works the same way with any other build/dependency management tool.

libraryDependencies ++= Seq(
  "org.apache.camel" % "camel-core" % "2.15.0",
  "org.apache.camel" % "camel-aws" % "2.15.0",
  "commons-io" % "commons-io" % "2.4"
)

The commons-io lib (declared at line 4) isn't mandatory. I'm just using it to make my life easier when dealing with the streams coming from S3.

The next thing to do is define a route. By defining a route, you tell Camel to do things for you. Here I'm using the Java DSL, but you can achieve the same thing using the XML configuration support. Here is my route definition:

 
class MyRouteBuilder extends RouteBuilder { 
    override def configure(): Unit ={
        from("aws-s3://json-in?amazonS3Client=#client&maxMessagesPerPoll=15&delay=3000&region=sa-east-1").
         to("bean:fileProcessor").
         choice().
           when(header("status").isEqualTo("ok")).
            to("aws-s3://json-out?amazonS3Client=#client&region=sa-east-1").
           otherwise().
            to("aws-s3://json-error?amazonS3Client=#client&region=sa-east-1")
    }

}


Let's see what is going on here:
  • Defining the source: on line 4, I'm setting where the files come from and how. The aws-s3 component accepts parameters that specify how files will be fetched, which is mostly what the rest of the URI parameters are doing. For more detail about them, check the website documentation.
  • Defining what happens after files are fetched - on line 5, I'm setting who is gonna process each file downloaded from S3. It's a simple Scala class defined in the Camel context. You'll see more about it in a second.
  • Processing output results - from line 6 until the end, I'm setting what happens based on the result from fileProcessor. This is the content-based router in action. Here we have the logic that moves files from the origin bucket to one bucket or another based on the output from fileProcessor.

The File Processor.


This is the implementation that will be called for each file arriving from S3. Nothing special about it. Ideally, this is where we apply any logic to the file being processed.
 
class FileProcessor extends Processor {
  import java.util.logging.Logger
  
  val logger = Logger.getLogger(classOf[FileProcessor].getName)

  override def process(msg: Exchange): Unit ={
    val content = msg.getIn.getBody(classOf[String])
    // Do Whatever you need with the content
    logger.info(content)
    Messenger.send(message = msg, status = Some("ok"))
  }

}


Actually, there is something specific happening on line 11. There, I'm sending content that will be used by the next step in the flow. The next step I'm referring to is the one implemented by MyRouteBuilder from line 6 on, which decides whether the file needs to be moved to one bucket or the other.

 
object Messenger {
  import org.apache.commons.io.IOUtils

  def send(message: Exchange, status: Option[String]): Unit ={
    message.getOut.setHeader("status", status.getOrElse("error"))
    message.getOut.setHeader(S3Constants.KEY,  message.getIn.getHeader(S3Constants.KEY).asInstanceOf[String])
    message.getOut.setHeader(S3Constants.BUCKET_NAME, message.getIn.getHeader(S3Constants.BUCKET_NAME))
    message.getOut.setBody(IOUtils.toInputStream(message.getIn.getBody(classOf[String])))
  }

}

By adding information to the output structure, the next step in the process is able to decide what happens next based on the content. The Messenger object isn't actually sending anything; it is just changing the Exchange object by reference. These names were picked just to make what is going on more explicit to the reader, since changing structures by reference makes the code a bit harder to understand.

 
object CamelContext {
  import com.amazonaws.auth.BasicAWSCredentials
  import com.amazonaws.services.s3.AmazonS3Client
  import org.apache.camel.main.Main

  val accessKey = ""
  val secretKey = ""

  def start(): Unit ={

    val camel  = new Main
    camel.bind("client", new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey)))
    camel.bind("fileProcessor", new FileProcessor)
    camel.addRouteBuilder(new MyRouteBuilder)
    camel.enableHangupSupport()
    camel.run()
  }

}

Here I'm initialising the Camel context. The client (line 13) will be used for authentication and the fileProcessor (line 14) will be used to process files. From this point on, they can be referenced by any route (as MyRouteBuilder is doing).
Keep in mind you still need to define your own keys on lines 7 and 8.
From this point, you just need to call the start method, like this:

 

object Main extends App {
  CamelContext.start()
}


That is it. If you want to look into the details, a working example can be checked here.

Sunday 15 March 2015

SOA - Dealing with Consumer Requirements

Services are created to be consumed. If a service isn't good enough (does not provide enough value), it fails at its primary reason to exist. This makes sense for most of the companies I know, and it is no different when talking about SOA. The goal of this post isn't to advocate for SOA itself or to discuss the advantages/disadvantages of this architectural style. The idea is to highlight the consequences of driving service requirements based only on consumer needs.

What is a Service?


A service is a high-level abstraction exposed as a business functionality to be consumed. By "high-level abstraction", I mean it should shape a business functionality as it really is (how it behaves). Low-level details like protocols, security, infrastructure and consumer-specific data structures should be out of scope here.
Some of the ideas behind this definition are:
  1. If the business changes, the service needs to change quickly - the advantage of dealing with high-level business abstractions is that the service needs to change only when the business behaviour changes, not when other layers do. By keeping low-level details out of the service design, you'll be able to respond to business changes more quickly than doing the opposite.
  2. Stay aligned with business strategies - that means, if the company wants to expose business functionality to different consumers/media/platforms, or to compose several services into new business functionality, for instance, the service design should support it.

Dealing with Consumers Requirements


During the development life cycle, there will be cases where it might be easier to accommodate modifications on the service side in order to attend to a specific consumer's needs. By following this approach, you are creating a coupling in the service design that isn't desirable. The service may lose its capacity to quickly respond to changes without affecting its consumers.

Consider a Sales service that returns all sales for a given customer. Let's say there is a consumer (a UI) that needs to display the sales in a specific order, different from the one returned by the Sales service. From the UI's point of view, it would be easier to have the sales returned in the order it needs. It may argue that the service is much closer to the data, so ordering the information there would be faster than iterating over the sales again on the UI side and ordering them as needed.
But now, let's consider that the same service needs to be consumed by another API. What if this API needs the same information in a different order than the UI?
See, consumer-driven decisions like this increase the coupling by making services harder to change without breaking other consumers. Services lose the capacity to quickly respond to business changes in these cases. Each consumer can instead apply the ordering it needs on its own side, as sketched below.
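
To make the point concrete, here is a minimal, hypothetical sketch of a consumer ordering the sales locally after calling the service, so the contract stays ordering-agnostic:

// Hypothetical consumer-side view of the contract data.
case class Sale(id: String, amount: BigDecimal)

// Each consumer orders the sales the way it needs; the service contract stays unchanged.
def largestFirst(sales: Seq[Sale]): Seq[Sale] = sales.sortBy(_.amount).reverse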


Conclusion


The "sorting" example may looks a bit silly, maybe yes.. but consequences are the same when changing contract data structures/behaviour just to make consumer's life easier. So, be cautious when dealing with consumers driven requirements.  To avoid It, make your services Consumers Agnostic by avoiding  consumer specific requirements fall into the your Service design.


Thursday 26 February 2015

Requirements - Problem Discovery

When eliciting requirements, these are questions I usually raise regardless of the domain:

  • Why is it a problem for you?
  • Why do you need this?
  • Why does it need to be different?

I believe the answers are crucial for the requirements analysis. They basically reveal the customer's real needs, their business problems. Detailing requirements from this point is supposed to be safer, and we'll see why.


What is a Business Problem?


A problem, in this context, is something that does not let the business move forward as it needs to. You can think of it as the reason why you (the software engineer) were called in. A problem is usually characterised as something that involves money somehow. When talking with business people about some problem, try to find out how the problem they are describing (usually mentioned as a business need) makes their company lose money.


Why Is Identifying Problems Important?


As a software engineer, have you ever received a compiled requirements list ready to be implemented? If your answer is yes, how were you sure these requirements are really the ones your customer is waiting for? If you manage to correlate the requirements with the business problems to be solved, you'll be able to answer that. By doing so, you avoid wasting resources implementing features that are not valuable to the business.
Spreading this mindset across the whole tech team is also valuable, for several reasons:

  • The team can prioritise refactoring and design efforts on the parts of the system that will really make a difference.
  • The feeling that the team is building something really valuable.
  • The team is able to understand what slows the business down and then create solutions that properly address those issues.


How to Identify Business Problems?


Looking for business problems when eliciting requirements may seem too vague. A good tip is to look for problems that make the company lose money, directly or indirectly. Business people usually report them as ".. this process is taking too long .." or ".. because it is too complex and involves too many people to be coordinated ..", etc. Even without mentioning the word "money" explicitly, they are talking about time, resources, and so on.
Getting to this point may be tricky and will demand some experience in the field, depending on the domain you are in. A technique that helps here is the Business Objectives Model: it helps identify business problems and objectives. Going through this process, you will also end up with a set of high-level features that the final solution will need to provide.



Even though it looks obvious, there are projects where this process is simply ignored. By doing that, people are assuming the risk of working on requirements that don't bring the value the business is expecting. Be suspicious when your customer comes to you saying: "My solution needs to do this and that...". You may be running into this scenario.
With the problem discovery approach, it is also easier to get everyone on the same page (business and tech team) regarding where the effort needs to be applied, since the objectives are clear to everybody.
It also works well with the iterative approach. You may even have a chance to show your customer business problems that they themselves aren't aware of.








Sunday 25 January 2015

Requirements Discovery - The Iterative approach.


Requirements analysis is a crucial part of the development of any kind of system. This is the point where engineers start shaping the solution that is gonna be built. During this phase, they also need to consider the constraints each particular project demands (i.e. cost, risk, time, quality, schedule, etc). During requirements analysis, business and IT people start getting on the same page regarding the problems that need to be addressed.
Without proper requirements analysis, certain solutions are almost impossible to build. Some problems are too complex to be solved without a proper understanding. That does not mean engineers need to spend the whole available budget just to get on the same page as the business; this is where the agile mindset and the lean principles come into the show. Ok, so far, so good, but why do we still see problems like misunderstandings on projects where there is time and space for requirements analysis? There can be plenty of reasons, and each project has its own context with different problems, but I'll talk about one I've seen several times in different projects: as engineers, we frequently fail to help the business go in the right direction.
I've seen it quite often on projects of any size. You may have experienced it already if you can relate to some of the following questions:
  • As a software engineer, have you received a list of requirements to be addressed with the schedule already defined?
  • Did someone get any input from the final user who is gonna use the system?
  • Has someone thought about how complex the architecture that needs to be built to attend the requirements is?
  • What about the business concepts still hidden in this requirements list, which we may only discover a few months after the project has started?
All the aspects behind the previous questions are the ones people usually ignore. Basically, answering these questions will lead you to the symptoms of the problem. Even on projects where people say they are agile, they are not free from this situation. The issue to be solved here is not the methodology; it involves approach and trust.


The Iterative Approach


In order to help the business find the right requirements, the first step is to be more involved in the requirements elicitation process. The iterative approach suggests this phase happens by constantly getting feedback from the business. There are a few main advantages of following this approach to elicit requirements:

  • Engineers and business people on the same page - by applying a series of engineering techniques, we can help the business find the right requirements to solve the problem they have. Engineers and business tend to stay on the same page once they get constant feedback from each other regarding how the problems will be addressed and the technical challenges around them.
  • Just enough analysis - since it is an iterative process, the business can see the list of requirements as it is being formed and understand the challenges around it. They can decide the right time to stop and go forward with the release. This way, you won't spend the whole available budget on requirements analysis.
  • Looking for the right architecture - making the business understand the challenges sometimes requires proving that the chosen architecture will work as expected. By doing POCs, you can get these answers before making calls that involve more money and risk.
The iterative approach can actually be continuous, meaning the cycle can restart as soon as you reach the implementation phase (i.e. to plan the next release). It also means you can go back to the modelling step when the results from the POCs don't meet expectations. This approach works pretty well in cases where teams are looking for an MVP, for instance.

In order to make this approach work, collaboration is mandatory. Engineers need to get feedback from people who fully understand the business and the problems they are trying to solve; otherwise, the chances of wasting resources and time building a solution that won't meet business expectations are high.