Recently, I participated in an online panel discussion on the subject of CI Acceleration, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps.
You can watch a recording of the panel discussion here.
#c9d9 is a community initiative by Electric Cloud, which powers Continuous Delivery for many Fortune clients by automating their build, test, and deployment processes. It is essentially the same space we are in; the difference is that AutoRABIT focuses on Salesforce.com technologies. There were some interesting insights from the panel discussion that I wanted to share, especially from the Salesforce.com ecosystem perspective that I work in.
Defining Continuous Integration
When you talk of Continuous Integration, a lot of discussions start off with Martin Fowler's popular post. Several panelists talked about optimizing their infrastructure and how they are burning large budgets running builds and automated tests. If you consider examples of CI at social networking companies (Facebook, Twitter, etc.), they are aiming for Continuous Deployment or Continuous Delivery, not merely Continuous Integration (CI). For them, a few mistakes are fine: the accelerated time-to-market justifies the risk, and instant feedback from their users is critical. A testament to this is the many issues/bugs that surface in new features on sites such as Facebook, Tumblr, Snapchat, etc.
I wanted to give the perspective of a cloud-based technologist, specifically on Salesforce.com. In the ecosystem in which we work, clients are satisfied with automation of basic development tasks and rudimentary tooling, which is itself a Herculean effort for a relatively new technology. For most of my clients, CI is a journey they are just starting, and enabling version control is the first step. A typical team has around 15-20 members; there are larger teams, but I will focus on the average customer (a typical Salesforce.com implementation).
Builds: Tips and Tricks
Most of our clients' code bases are not huge; the major challenge is release management: accelerating release cycles and reacting faster to business needs. This is critical for business-centric, revenue-generating software like Salesforce.com.
In the Salesforce.com ecosystem, a developer typically spends about 30% of their time managing code. There is a lot of impedance, since the environment is a multi-tenant cloud, and the related restrictions, called governor limits, prohibit one organization (tenant) or one team from monopolizing the shared infrastructure. Typical limits cover SOQL queries, heap size, data transfer, and session time-outs: essentially anything that burdens the infrastructure.
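The usual way to live within a per-transaction cap is to chunk work so no single transaction exceeds it. Here is a minimal, language-agnostic sketch in Python; the limit value and names are illustrative assumptions, not actual Salesforce figures or APIs:

```python
# Hypothetical sketch: split a workload into batches so that each
# transaction stays under an assumed per-transaction limit (e.g. a cap
# on queries). The limit of 100 is illustrative, not a real platform value.
def chunk(record_ids, per_transaction_limit=100):
    """Yield batches of record IDs, each small enough for one transaction."""
    for i in range(0, len(record_ids), per_transaction_limit):
        yield record_ids[i:i + per_transaction_limit]

ids = [f"001{i:04d}" for i in range(250)]
print([len(batch) for batch in chunk(ids)])  # → [100, 100, 50]
```

The same batching idea underlies most of the workarounds mentioned below: the tooling splits deployments, data loads, and test runs into limit-sized pieces.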
There are workarounds, but most of them are expensive and need extensive tooling and superior DevOps skills, which are rare to find. Most Salesforce.com implementations start with a mainline codebase, version control, a branching strategy, and deployment automation, and then later (mostly in Phase II) add build automation, test automation (regression), and code coverage.
Everyone understands the importance of testing, but the effort required to showcase its benefits and ROI stops many teams from securing budgets. Managers should understand this and make the case for testing resources and budgets. The key is data: once managers start collecting data and presenting the report at the 'appropriate' time, it's a no-brainer most of the time. (You can extrapolate from standard ROI templates available online.)
For Salesforce.com implementations that are more evolved, we suggest categorizing test cases as 'gold', 'silver', and 'bronze', with the team voting on each test case. That way, when a test case comes in, the admin knows which ones to run by default, at what frequency, and in which environments. In addition, breaking down test cases and having a test plan helps optimize resources. Ultimately, it's about 'weight' in terms of your infrastructure and speed.
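The tiering-by-votes idea above can be sketched in a few lines. This is a minimal illustration, not AutoRABIT's actual implementation; the vote thresholds and run frequencies are assumptions:

```python
# Illustrative sketch: tier test cases by team votes ('gold'/'silver'/
# 'bronze') and pick a default suite for a given run frequency.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    votes: int = 0  # votes collected from the team

def tier(tc: TestCase) -> str:
    """Map vote counts to tiers; thresholds are illustrative."""
    if tc.votes >= 10:
        return "gold"    # e.g. run on every commit
    if tc.votes >= 5:
        return "silver"  # e.g. run nightly
    return "bronze"      # e.g. run before a release

def default_suite(cases, frequency):
    """Return the test names whose tier matches the run frequency."""
    allowed = {"commit": {"gold"},
               "nightly": {"gold", "silver"},
               "release": {"gold", "silver", "bronze"}}[frequency]
    return [tc.name for tc in cases if tier(tc) in allowed]

cases = [TestCase("login_flow", votes=12),
         TestCase("quote_pdf", votes=6),
         TestCase("legacy_report", votes=1)]
print(default_suite(cases, "nightly"))  # → ['login_flow', 'quote_pdf']
```

The point of the sketch is the separation of concerns: voting assigns 'weight' once, and the scheduler then derives the right suite for each frequency and environment automatically.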
From a build management perspective, one of the key elements is having a build management strategy. At any given point in time it is an evolving document, and the same is true of testing. The leader or manager should have a test planning strategy covering budgets, infrastructure usage, and the new processes and tools being acquired, and should understand that every new process takes two or three extra months.
Last but not least, test analytics have come a long way, and testing is one of the areas where analytics remain most underutilized. Let the algorithms predict which test cases to run, for which builds, on which modules, etc. Analytics are becoming mature and dependable. Spend some time and create a project around this; it has great potential, but not every company is leveraging it.
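A simple form of such prediction can be sketched as follows: given which modules changed, rank the relevant test cases by their historical failure rate so the most failure-prone tests run first. All names and numbers here are made up for illustration; real tools use richer signals:

```python
# Illustrative sketch of analytics-driven test selection: pick tests that
# touch the changed modules, ordered by historical failure rate.
def select_tests(changed_modules, history):
    """
    history maps test name -> {"modules": set, "runs": int, "failures": int}.
    Returns relevant test names, highest failure rate first.
    """
    changed = set(changed_modules)
    relevant = {name: h for name, h in history.items()
                if h["modules"] & changed}
    return sorted(relevant,
                  key=lambda n: relevant[n]["failures"] / max(relevant[n]["runs"], 1),
                  reverse=True)

# Hypothetical historical data for three test cases:
history = {
    "opportunity_rollup": {"modules": {"sales"},            "runs": 50, "failures": 10},
    "case_escalation":    {"modules": {"service"},          "runs": 40, "failures": 2},
    "lead_conversion":    {"modules": {"sales", "marketing"}, "runs": 60, "failures": 3},
}
print(select_tests(["sales"], history))
# → ['opportunity_rollup', 'lead_conversion']
```

Even this naive ranking shortens feedback loops: the tests most likely to catch a regression in the changed code run before the long tail of unrelated ones.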
AutoRABIT is an end-to-end Release Management and Continuous Delivery suite designed specifically for Salesforce applications. AutoRABIT helps Salesforce developers, admins, and analysts with out-of-the-box features and automation processes, helping organizations achieve Salesforce Continuous Delivery. Its capabilities include automated metadata deployment, version control support, advanced data loading, sandbox management, end-to-end release management, defect tracking, and test automation for public and private clouds.
With our unique approaches for deployment and automation, AutoRABIT can help release managers migrate both metadata and data seamlessly across environments with the click of a button.
With AutoRABIT, you will achieve higher release velocity (days instead of weeks/months) and rapidly improved time-to-market schedules.