Sunday, 10 May 2015

DevOps Culture - Why Are Silos an Issue?

DevOps is becoming a hot subject nowadays. There are some nice materials here and here explaining the principles behind it. At this point, we know that DevOps isn't only about automation tools and frameworks. It's about a different way of thinking about IT, and one of these principles is culture.
This is actually an essential dimension when talking about changing the way companies build software. They can pick the best automation tools available or hire the best consultants in the market to build their architecture, but if the cultural aspects aren't improved, they may not ship software as they expect. Today I'm gonna talk about one of these cultural aspects: silos.

The Traditional Division

It's very common to see companies splitting up teams by their skills. A traditional division that follows this idea would look like this:

In this model, software creation is supported by a series of specialised teams. These teams will "somehow" talk to each other in order to build a solution. Using this structure, companies can have specialists in each area leading people and ensuring issues/solutions are addressed properly. This structure can change a bit depending on the case. There are cases where companies nominate people to supervise (manage) the whole pipeline. They can be responsible for making everybody talk, ensuring environments are delivered on schedule, that issues are addressed by the right people/groups, etc.
Software has been delivered on top of this model for years. But we're living in a different era now, and there are some scenarios this model does not handle well.

The Software Complexity

The problems we need to solve with software today are different from the ones we needed to solve 10 years ago. Let's see some examples that illustrate this:

  • The data volume systems need to handle is way bigger than before. Architectural decisions need to take place in order to build systems that meet business expectations. Such decisions can even change the way a business sells its products to its consumers.
  • The decision of whether to move to the cloud is today a strategic business decision, not one made by some geek in the ops area.
  • Solutions need to support traffic growth in a matter of seconds. Systems need to be elastic, but architectural decisions can make that impossible to achieve. Ops and architecture teams need to be on the same page here before taking any architectural decision.
  • How do you test a solution with the characteristics highlighted before? The QA team will need special skills.

Teams need to be on the same page from the beginning here. Even small decisions can affect everybody and make the release process slower release after release as the application grows. What happens with the traditional model is that people tend to ignore the software complexity as a whole and focus on the area where they are experts. People can work around this issue and achieve the necessary coordination even in the traditional model, but it's gonna demand more effort from everybody involved.
Complexity is handled well when people understand the system as a whole. Adding barriers between teams does not help achieve this goal.

The Barriers Between Teams

When workstreams are used as silos, the development cycle is oversimplified. Software is treated as a simple package that can be sent here and there. The issue is that the whole software complexity described before is ignored. As the complexity grows, releases and maintenance start to take more time and then become more expensive. When dealing with complex software, communication, alignment and a proper architectural understanding are essential to support the whole process. The barriers created by this oversimplification won't help in this scenario. Companies tend to formalise the communication between teams in order to reduce misunderstanding, but they actually just add more noise. The idea behind "breaking silos" is to remove the barriers so people can collaborate and then understand the system as a whole.

The more barriers are created between business and production, the less agile the development cycle will be. It does not matter how many experts are in each silo or how many automation tools they implement; the barriers between them will always be something to improve. By removing silos, people are able to see the system as a whole rather than only their "working area". The real benefits come when the system is optimised as a whole instead of in specific parts.

Saturday, 2 May 2015

Apache Camel - Testing Integrations

As systems become more complex, it turns out that test automation is a mandatory software engineering discipline that needs to be followed. This is especially true when talking about integrations between systems. In such scenarios, there are some challenges:
  • Hard to isolate components - to test a single component, calls to other components may need to be made in order to make the test possible, which makes the test process expensive
  • How to simulate system failures - how do you test the implementation behaviour in case one of the systems being integrated fails? It's gonna be complicated, or at least very time consuming, to simulate this test scenario because we need to control components that we usually can't
  • Slow build - calls to external systems can be slow (slow connection, external system unavailability, etc). If your tests are built calling external systems, the build time might get slower over time as your test coverage grows
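To make the failure-simulation challenge concrete, here is a small self-contained sketch in plain Scala (the names Downstream, FailingDownstream and Uploader are made up for illustration, not part of the real implementation): by hiding the external system behind an interface, a stub that always fails lets us exercise the error handling without touching the real system.

```scala
// Hypothetical sketch: simulating a downstream failure with a stub so the
// caller's error handling can be tested without the real external system.
trait Downstream {
  def store(doc: String): Unit
}

// Stub standing in for an unavailable external system.
class FailingDownstream extends Downstream {
  def store(doc: String): Unit =
    throw new RuntimeException("external system unavailable")
}

// Component under test: returns false instead of propagating the failure.
class Uploader(target: Downstream) {
  def upload(doc: String): Boolean =
    try { target.store(doc); true }
    catch { case _: RuntimeException => false }
}

object FailureDemo {
  def main(args: Array[String]): Unit = {
    val uploader = new Uploader(new FailingDownstream)
    println(uploader.upload("some content")) // false: the stubbed failure was handled
  }
}
```

The test never waits on a network call and never depends on the external system's availability, which is exactly what we want from the build.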
At the end of the day, these are things we need to work around because the tests need to be done anyway. The good news is that it is possible to achieve a certain level of test coverage even in such scenarios. By using some DI techniques and a couple of frameworks, I'm gonna make it happen.
I'm gonna use my last post as the base. Actually, I'll keep it as it is and create a different implementation, applying modifications that will make it testable.

What is worth testing?

Looking at the implementation as it is, it would be desirable to test whether the routing logic works as we expect in both cases (when it finishes successfully and when there is an error). To achieve that, I don't necessarily need to rely on Amazon S3 or any other external component. I can "mock" them and then test the routing logic in isolation.

How to do that?

In order not to depend on Amazon S3 in my tests, I need to somehow "replace" it, only during the tests, with "mocked" endpoints. By doing that, I'll be able to isolate what needs to be tested (the routing logic) and also control the input data, and then simulate the behaviour I want.
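Before diving into the Camel code, the idea can be sketched in plain Scala, independent of Camel or Spring: the routing logic depends only on an endpoint abstraction, so a test can wire in in-memory substitutes (Endpoint, MemoryEndpoint and Router here are illustrative names, not the real implementation).

```scala
import scala.collection.mutable.ListBuffer

// Illustrative sketch of endpoint replacement (not the real Camel code):
// the router only knows the Endpoint abstraction, so tests can inject
// in-memory endpoints instead of the real S3 ones.
trait Endpoint {
  def send(msg: String): Unit
}

// In-memory endpoint used in tests; records what it receives.
class MemoryEndpoint extends Endpoint {
  val received = ListBuffer.empty[String]
  def send(msg: String): Unit = received += msg
}

// Toy routing logic: content that looks like JSON goes to `out`,
// anything else goes to `error`.
class Router(out: Endpoint, error: Endpoint) {
  def route(msg: String): Unit =
    if (msg.trim.startsWith("{")) out.send(msg) else error.send(msg)
}

object RoutingDemo {
  def main(args: Array[String]): Unit = {
    val (ok, err) = (new MemoryEndpoint, new MemoryEndpoint)
    val router = new Router(ok, err)
    router.route("""{"name": "camel"}""")
    router.route("not a json file")
    println(ok.received.size)  // 1
    println(err.received.size) // 1
  }
}
```

Camel plus Spring gives us exactly this kind of substitution, without having to hand-roll the abstraction ourselves.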
The first thing to do is to remove the hard-coded references to S3, externalising them. The code will look as follows:

class FileRouter extends RouteBuilder {
  @Autowired val sourceJsonIn: String = null
  @Autowired val targetJsonOut: String = null
  @Autowired val targetJsonError: String = null
  @Autowired val fileProcessor: FileProcessor = null

  override def configure(): Unit = {
    // Route body sketched in for completeness: any exception sends the
    // message to the error endpoint; otherwise the file is processed
    // and sent to the success endpoint.
    onException(classOf[Exception]).handled(true).to(targetJsonError)
    from(sourceJsonIn).process(fileProcessor).to(targetJsonOut)
  }
}

Here I'm using Camel's Spring Java config support. The FileRouter class will receive the endpoints from the Spring context at runtime. In fact, these endpoints are now Spring beans, defined as follows:

trait S3Beans {
  @Bean def sourceJsonIn = "aws-s3://json-in?amazonS3Client=#client&maxMessagesPerPoll=15&delay=3000&region=sa-east-1"
  @Bean def targetJsonOut = "aws-s3://json-out?amazonS3Client=#client&region=sa-east-1"
  @Bean def targetJsonError = "aws-s3://json-error?amazonS3Client=#client&region=sa-east-1"
  @Bean def client = new AmazonS3Client(new BasicAWSCredentials("[use your credentials]", "[use your credentials]"))
}


There is also the FileProcessor, which handles the file content. It is also defined as a Spring bean, as follows:

trait NoThirdPartBeans {
  @Bean def fileProcessor() = new FileProcessor
}

The S3 endpoints, FileProcessor and FileRouter classes are ready to be added to the Spring context. Since we are using Camel's Spring support, they will be available in the Camel context as well. It's done as follows:

class MyApplicationContext extends CamelConfiguration with S3Beans with NoThirdPartBeans {}
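The trait-mixin composition used here can be illustrated with a plain-Scala sketch (Endpoints, Processors and the string values below are hypothetical, not the real beans): each trait contributes its beans, and a different context can mix the same traits while overriding only what it needs.

```scala
// Illustrative sketch of composing contexts from traits (names and
// values are hypothetical, not the real beans).
trait Endpoints {
  def source: String = "aws-s3://json-in"
}
trait Processors {
  def processor: String = "fileProcessor"
}

// Production context: mixes in everything as-is.
class ProdContext extends Endpoints with Processors

// Test context: same traits, but the S3 endpoint is swapped for an
// in-memory one; the processor bean is reused unchanged.
class TestContext extends Endpoints with Processors {
  override def source: String = "direct:in"
}

object ContextDemo {
  def main(args: Array[String]): Unit = {
    println(new ProdContext().source) // aws-s3://json-in
    println(new TestContext().source) // direct:in
  }
}
```

This is the pattern the test context below exploits: reuse the shared beans and override only the external dependencies.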

Now the implementation is ready to be tested. In order to achieve the behaviour I want, I need to replace all the endpoints defined in the S3Beans class with "mocked" endpoints. By doing that, I'll be able to "control" the external dependencies and then simulate different scenarios. To do that, I'll create a different "test context", replacing only the beans I need to mock.

class TestApplicationContext extends SingleRouteCamelConfiguration with NoThirdPartBeans {
  @Bean override def route() = new FileRouter
  @Bean def sourceJsonIn = "direct:in"
  @Bean def targetJsonOut = "mock:success"
  @Bean def targetJsonError = "mock:error"
}

Direct is a Camel component that works as an in-memory queue. It's gonna replace the S3 bucket where the files come from. Mock is another Camel component whose endpoints we can assert against at runtime. They are replacing the output S3 buckets, so I can now check whether they receive messages or not.
Now it's time to create the test class. It's gonna use the "test context" I just created and then validate the different test scenarios. It's done as follows:

@RunWith(classOf[SpringJUnit4ClassRunner])
@ContextConfiguration(classes = Array(classOf[TestApplicationContext]))
class SimpleTest {
  @EndpointInject(uri = "mock:success")
  val mockSuccess: MockEndpoint = null

  @EndpointInject(uri = "mock:error")
  val mockError: MockEndpoint = null

  @Produce(uri = "direct:in")
  val template: ProducerTemplate = null

  @Test
  def shouldHitTheSuccessEndpoint(): Unit = {
    val fileContent = IOUtils.toString(getClass.getResourceAsStream("/json-file.json"))
    // Remainder of the test sketched in: expect one message on the success
    // endpoint, send a valid file through the route, then verify.
    mockSuccess.expectedMessageCount(1)
    mockError.expectedMessageCount(0)
    template.sendBody(fileContent)
    mockSuccess.assertIsSatisfied()
    mockError.assertIsSatisfied()
  }
}



Real-world integrations can be much more complex than the example I used, but they still need to be tested somehow. Systems like these without tests will become unmaintainable soon. The test approach I used fits well in cases where there is routing logic between the components being integrated.
It is also important to notice how the Spring API made the implementation simpler and testable. As it was implemented before (without Spring or any DI technique), it would be very hard to achieve this result.
The working example can be found here.