What is Data Masking?

Your live production Salesforce org contains some of your organization’s most sensitive and confidential data. In the production environment, the data benefits from rigorous security and privacy protection, but once it is migrated into a test environment for use by developers, administrators, or QA, it is unlikely to receive the same level of attention. If steps are not taken to protect this data, your organization may find itself out of compliance with industry regulations and at increased risk of data loss during a security breach.

Data masking, also known as data anonymization or pseudonymization, solves this problem. Live data is anonymized to make it safe for use in non-production environments. Anonymization replaces sensitive information, such as credit card numbers and customer addresses, with fictitious details. If a security breach occurs and the non-production data is compromised, data masking can minimize the risk of exposing sensitive and confidential information.

There are multiple techniques for masking live data. Information can be augmented with prefixes and suffixes, shuffled to rearrange the existing contents, replaced with random noise, or replaced with user-specified data. These techniques protect the production information without diminishing its usefulness.

AutoRABIT helps you secure vital information assets by masking sensitive live data for use outside of the production environment. There are four reasons why data masking is a best practice for Salesforce operations.

4 Reasons to Mask Data

1. Regulatory compliance
Almost all organizations are subject to some form of regulation involving data. Maintaining compliance frequently involves following specific rules for data security. For example, the Payment Card Industry Data Security Standard (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA), and General Data Protection Regulation (GDPR) include specific directives for managing credit card information, health records, and all forms of personally identifiable information (PII), respectively. Companies governed by these regulations face strict legal and financial penalties for non-compliance.

Data masking offers a safe way to maintain access to live data for testing, without compromising sensitive and confidential information. For example, when migrating data into a QA/UAT sandbox, organizations subject to PCI DSS, HIPAA, or GDPR regulations can obfuscate credit card details, health information, and all forms of PII to maintain security and privacy of the data.

2. Insider threats
Data breaches initiated from outside the organization get the lion’s share of attention, but a 2013 study by the Open Security Foundation found that close to 20% of incidents started inside the organization, and these were responsible for almost 70% of exposed data. While developers, administrators, and QA engineers have a legitimate need for test data, they do not need access to sensitive and confidential information from the live Salesforce environment. Masking live data ensures that those who need access to data can perform their job, without increasing the risk of compromising data during a breach.

3. External parties
Outside consultants and service providers play an essential role in many organizations, and it is not uncommon for staff to share data with third parties as part of their daily routine. These transactions have the potential to expose the organization’s most sensitive Salesforce data. Data masking is an effective way of mitigating this risk. Masking production data ensures that staff and outside vendors can share access to test data without compromising sensitive and confidential information from the production environment.

4. Data encryption is not data masking
Data encryption is not the same thing as data masking. This common misconception likely stems from the use of data encryption to secure confidential information as it is migrated between servers or transmitted across a network. Unlike data masking, data encryption can be reversed to reveal the original production data. This makes it an ineffective tool for securing confidential data used during the software development lifecycle.
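The distinction is easy to see in a few lines of code. The snippet below is only an illustration, using the third-party Python cryptography package for the encryption half; the card number and masked value are made up.

    from cryptography.fernet import Fernet  # pip install cryptography

    original = b"4111 1111 1111 1111"  # illustrative card number

    # Encryption: anyone who holds the key can recover the original value.
    key = Fernet.generate_key()
    cipher = Fernet(key)
    token = cipher.encrypt(original)
    assert cipher.decrypt(token) == original  # fully reversible

    # Masking: the sensitive digits are discarded and replaced outright.
    masked = b"XXXX XXXX XXXX 1111"
    # No key or function exists that can turn `masked` back into `original`.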

How Can AutoRABIT Help with Data Masking?
AutoRABIT is an end-to-end release management toolbox for Salesforce. One of AutoRABIT’s most popular features is the advanced data loader, Data Loader Pro. Data Loader Pro can migrate data between sandboxes, without using CSV files, while maintaining relational hierarchies. Built-in data masking enables the data loader to protect sensitive data during migration. Users specify the object, fields, and masking style, and Data Loader Pro protects data during transit and storage.
Masking style options include the following (illustrated in the sketch after the list):

1. Prefix: Adding characters at the beginning of a field’s data
2. Suffix: Adding characters at the end of a field’s data
3. Replace: Completely replacing data in a field with data entered by a user
4. Shuffle: Shuffling the data in one column while leaving all other columns untouched
5. Random: Generating random and unique values across a given data set
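To make the five styles concrete, here is a minimal Python sketch that applies them to an in-memory list of records. The field names, sample values, and helper functions are invented for illustration; this is not AutoRABIT’s implementation.

    import random
    import string

    def mask_prefix(value, prefix="XX-"):             # 1. Prefix
        return prefix + value

    def mask_suffix(value, suffix="-XX"):             # 2. Suffix
        return value + suffix

    def mask_replace(value, replacement="REDACTED"):  # 3. Replace
        return replacement

    def mask_shuffle(records, field):                 # 4. Shuffle one column, leave the rest untouched
        values = [record[field] for record in records]
        random.shuffle(values)
        for record, value in zip(records, values):
            record[field] = value

    def mask_random(length=12):                       # 5. Random value (uniqueness not enforced in this sketch)
        return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

    records = [
        {"Name": "Acme Corp", "CardNumber": "4111111111111111"},
        {"Name": "Globex",    "CardNumber": "5500005555555559"},
    ]
    for record in records:
        record["CardNumber"] = mask_random()
    mask_shuffle(records, "Name")
    print(records)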

Data masking is integral to any data security strategy. Masking data not only ensures compliance with data security and data privacy regulations but also reduces the risk of compromised data following a security breach.

To learn more about data masking and how AutoRABIT can help meet your data security needs, contact us at info@autorabit.com.
Abhilash Murali is a Sr. DevOps Engineer at AutoRABIT. Follow him on Twitter at @abhimur.

Today’s business users expect the apps they use at work to be updated with bug fixes and new features as frequently as the apps they use on their phones. This has pushed IT organizations to evaluate DevOps as a way of providing secure, fast, test-driven deployment of app updates. However, while there are best practices that IT can follow when applying DevOps to business apps, like Salesforce, there is still a lot of confusion and misunderstanding about the nature of DevOps. After discussing this topic with several of our clients and industry leaders, I thought it would help to review the top 6 DevOps myths that apply to implementations of Salesforce.

Myth #1. Continuous integration and continuous delivery (CI/CD) is only for custom web apps and doesn’t apply to apps like Salesforce, ServiceNow, and Workday.

CI/CD was originally developed by modern operations teams at Software as a Service (SaaS) companies like Amazon and Netflix, but the concept applies just as easily to platforms such as Salesforce, ServiceNow, and Workday. Users of web apps have become accustomed to SaaS companies rolling out the latest versions of their software quickly and reliably. DevOps offers a way to provide a similarly seamless experience for business apps from other software providers.

Many cloud software platforms do not provide adequate DevOps tools to help manage customer-specific code and feature developments. This is where AutoRABIT can help. AutoRABIT offers fully integrated software versioning, release management, and test automation, with secure build deployments to production and non-production environments, to help you move quickly to a DevOps culture.

Myth #2. DevOps and continuous delivery are the same.

It’s true that continuous delivery of software does indicate that an organization has established key components of a DevOps culture, but continuous delivery and DevOps are not dependent on one another. They certainly aren’t the same thing.

Myth #3. Continuous delivery means releasing software every five minutes.

This myth centers around the ambiguity of the term “continuous.” Even companies known for their DevOps skills, like Amazon, Netflix, and Google, do not deliver new software versions continuously. These organizations have achieved a level of confidence in their systems and processes that allows them to release new software when required. That may mean releasing new code every two weeks or it may mean releasing several times a day.

Facebook’s engineers are renowned for their “ship often” motto, but even this well-known maxim does not indicate a time frame. At any given moment, Facebook engineers can roll out new enhancements or changes for a variety of reasons. To Facebook, continuous delivery means refusing to roll back production changes, even if a problem is detected. Instead, engineers release a fix as soon as possible. With this approach, the code is continually improving.

Myth #4. Adopting DevOps doesn’t require C-level buy-in.

In their 2015 State of DevOps Report, Puppet Labs found that successful adoption of DevOps requires buy-in from both grassroots and upper management. While a small “skunk works” team can get the DevOps ball rolling and start a cultural shift, eventually you will need the CEO and executives on board, too.

The sooner you can get everyone focused on the DevOps transformation effort, the better. Navigating the communication channels, paperwork, and red tape necessary for a meaningful conversation with upper management can be challenging, especially in a large, dispersed organization. But it’s not impossible. Wells Fargo, Capital One, Anthem, Disney, and many other large organizations have already adopted DevOps practices and begun their transformation.

Myth #5. DevOps is not for our company.

Another common myth is that DevOps is not relevant to all organizations. This argument runs counter to the idea of digital transformation that has become a near-universal strategic goal at almost all organizations. Business users, customers, and partners now expect their digital tools to be updated frequently with bug fixes, software updates, and new features. This is the very definition of DevOps, and in fact, DevOps and digital transformation go hand-in-hand.

There is also a belief that DevOps does not provide high-quality, secure deliverables. This idea has taken hold because many DevOps tools to date have been home-grown approaches that required a lot of custom coding and scripting. AutoRABIT provides a fully integrated DevOps platform that enables organizations to implement DevOps faster, provide higher-quality deliverables, and ensure secure deployments.

Myth #6. Generic DevOps tools provide all the key elements of a true DevOps maturity model.

The goal of establishing a DevOps maturity model is to provide a path to seamless integration and continuous delivery. In agile software development methodology, DevOps automation is the key to achieving the best possible software delivery process. However, a number of challenges along the way require programmatic support.


Generic DevOps tools can provide a foundation for a software delivery process, but they require custom scripting to support integration and ongoing version control and change management. This scripting demands a particular technical skill set and is time-consuming to develop, adding delays and costs to the rollout of DevOps processes. AutoRABIT recently reviewed the build versus buy dilemma in an informative webcast that discusses choosing the right path for DevOps automation.

AutoRABIT provides a seamless developer experience to help you mature the software delivery process. It offers fully integrated versioning, release management, test automation, secure code scanning, build integration, and deployment to a variety of cloud solution providers’ platforms, including Salesforce.

FOR IMMEDIATE RELEASE

June 07, 2019

AUTORABIT ACHIEVES SOC 2 COMPLIANCE

Report Demonstrates AutoRABIT’s Leadership in Compliance & Security for DevOps

San Ramon, CA – AutoRABIT is pleased to announce receiving its Service Organization Control 2 (SOC 2) Type 1 Report after a thorough audit of its policies and processes. As a leading provider of Automated Release Management products used by DevOps organizations to automate their CI/CD processes for Salesforce, AutoRABIT views this achievement as a demonstration of its leadership and commitment to security and compliance controls for its global customers.

The audit was conducted by a global leader in SOC 2 compliance after a comprehensive review of AutoRABIT’s operations, as the company strengthens its security controls, adopts best practices, and assumes responsibility for maintaining a well-controlled and secure environment on behalf of its customers. With SOC 2 compliance, AutoRABIT becomes the first software company in the DevOps-for-Salesforce space to receive this report.

In addition to the SOC 2 Report, AutoRABIT recently achieved the ISO/IEC 27001:2013 certification, the international standard for best practices in information security management systems. The certification affirms AutoRABIT’s ongoing commitment to following the highest standards in data security and privacy for its cloud-based Automated Release Management suite for Salesforce and throughout every level of the organization.

“We are mindful of protecting the confidentiality and integrity of consumers’ personal information, which is our customers’ most sensitive data,” said Vishnu Datla, CEO, AutoRABIT. “Our commitment to ongoing compliance audits and security certifications allows us to earn our customers’ trust to ensure we have meticulous controls throughout development, test and production software environments that meet demanding government and industry standards.”

Additional Resources:

For more information on AutoRABIT, visit autorabit.com.

Visit our blog, check out our events & webinars and join the conversation on Twitter, LinkedIn and YouTube.

About AutoRABIT

AutoRABIT offers a suite of products used by DevOps organizations to automate their CI/CD process for cloud-based development platforms. Its Automated Release Management Suite for Salesforce integrates a variety of tools and processes used by DevOps teams to configure, build, test, and manage development, environments, and deployments on their Salesforce instance. AutoRABIT’s technology is driven by Metadata Mastery™, proprietary IP developed to manage the dependencies, profiles, and relationships associated with metadata.

Contact: Shoni Honodel
P: 925-226-8514
shoni.h@autorabit.com

###

Data loss in Salesforce can happen for many reasons: code errors, human error, data migration errors, integration errors, and malicious intent. Whatever the cause, losing customer data can have a devastating effect on your business. Imagine the impact of accidentally deleting 50% of your leads from a production Salesforce instance. A disaster like that might bring your company to a standstill, or worse. According to the US Federal Emergency Management Agency, 40% of businesses don’t reopen after a disaster, and a further 25% fail after the first year. 

You can’t predict whether your business will suffer from Salesforce data loss. But you can prepare for it.

A business continuity plan provides a roadmap for your business to follow after a disaster event. The plan identifies the steps you need to take to get business operations up and running again, with minimal or no downtime and data loss. Without a business continuity plan, the recovery of your business after a disaster is left to chance.

A critical component of any business continuity plan for a SaaS application like Salesforce is the ability to back up and restore customer data in a timely manner. We recommend taking regular backups of Salesforce data. This lets you choose which backup to restore after a disaster. Your recovery point objective (RPO) dictates the frequency of Salesforce backups. RPO is a measure of your business’s tolerance for data loss: the point in time beyond which lost data significantly disrupts your business.
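As a rough illustration, assuming a 4-hour RPO, the check below flags a backup schedule that would lose more data than the business has said it can tolerate. The timestamps are made up.

    from datetime import datetime, timedelta, timezone

    rpo = timedelta(hours=4)  # business tolerance for data loss
    last_backup = datetime.now(timezone.utc) - timedelta(hours=6)  # time of last successful backup

    exposure = datetime.now(timezone.utc) - last_backup
    if exposure > rpo:
        print(f"Backup gap of {exposure} exceeds the {rpo} RPO; back up more frequently.")
    else:
        print("Backup frequency satisfies the RPO.")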

Restoring data from a backup can take hours, if not days, especially if your customer data runs into hundreds of gigabytes. Selectively restoring only critical customer data, such as accounts, contacts, and opportunities, while a full restore happens in the background can enable you to resume business operations within minutes of a disaster.
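A selective restore can be as simple as ordering the work: critical objects first, everything else in the background. The sketch below assumes a per-object restore call; submit_restore is a placeholder, not a real API.

    CRITICAL_OBJECTS = ["Account", "Contact", "Opportunity"]
    ALL_OBJECTS = CRITICAL_OBJECTS + ["Case", "Lead", "Attachment", "CustomObject__c"]

    def submit_restore(obj: str) -> None:
        # Placeholder for whatever backup tool or API actually performs the restore.
        print(f"restoring {obj} ...")

    # Phase 1: restore the objects needed to resume business operations.
    for obj in CRITICAL_OBJECTS:
        submit_restore(obj)

    # Phase 2: restore the remaining objects in the background.
    for obj in ALL_OBJECTS:
        if obj not in CRITICAL_OBJECTS:
            submit_restore(obj)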

AutoRABIT Vault gives you flexible backup and restore options for Salesforce data. Automate full and incremental backups with single-click recovery to any point in the past, based on field, record, or full backup restore. Use Vault to put your Salesforce business continuity plan on steroids. 

Three factors are critical to the success of any project implementation: people, processes, and technology. A deficiency in any one of these elements will upset the balance and threaten the success of your project. But, how do you tell when there is a problem? This is where burndown can help. 

What is Burndown?

In agile software development methodology, product owners define user stories. Architects, administrators, and developers take each user story and design and deploy usable product features. Each story and its associated tasks are sized by how much time they will take to complete: stories are sized in points, with one point equivalent to one day’s work for a developer, and tasks are sized by the hours they will take to finish. Work on stories and tasks is organized into finite timeboxes called sprints.

Burndown is the rate at which stories and tasks are completed. Agile project management systems, such as JIRA and Rally (formerly CA Agile Central), can visually depict burndown. Ideally, a burndown graph will show a steady downward sloping line that ends at the completion of the sprint. 
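For a concrete picture, the short script below computes a burndown table for a hypothetical 10-day, 40-point sprint and compares it with the ideal straight-line burn. The daily numbers are invented.

    total_points = 40
    sprint_days = 10
    completed_per_day = [0, 3, 3, 6, 2, 5, 4, 7, 6, 4]  # hypothetical points completed each day

    remaining = total_points
    for day, done in enumerate(completed_per_day, start=1):
        remaining -= done
        ideal = total_points * (1 - day / sprint_days)  # ideal straight-line burndown
        print(f"day {day:2}: remaining={remaining:3}  ideal={ideal:5.1f}")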

OK, So What if my Burndown Looks Like a Cliff or is Janky? 

A burndown graph that doesn’t show a steady downward sloping line is typical of fledgling agile implementations. Generally, burndown improves after a few sprint iterations. However, if you don’t see improvement and your burndown looks like a cliff or is janky, it’s time to look for bottlenecks.

Accounting for every task in a sprint is vital, as it will help identify where bottlenecks lie. For instance, if you see that development isn’t starting until late in the sprint, you may need to look for bottlenecks in your processes and team structure — perhaps business users are taking too long to respond to inquiries or stories might not be clearly defined. Similarly, if you are using changesets to migrate changes, you may see tasks being completed successfully until it comes to moving changes across environments. This is where a continuous integration and continuous deployment (CI/CD) solution like AutoRABIT can help. 

Why is it Important to Identify Bottlenecks? 

A successful project requires a sustainable balance between people, processes, and technology and a predictable flow as stories and tasks are completed throughout the sprint. If teams are not following processes correctly, it will put a strain on the technology. If the technology is not delivering, people and processes become stressed, which often results in people assigning blame. If processes are not clearly defined and adhered to, technology and people will become strained. Ultimately, overextended resources add an element of unpredictability to your project that can lead to bottlenecks and unproductive blaming as people become frustrated with the delays. This can cause substantial roadblocks for your project.

How can AutoRABIT help? 

AutoRABIT is a complete CI/CD solution that integrates Salesforce with agile management systems like JIRA and version control systems, including Git. AutoRABIT streamlines processes and automates activities, enabling you to complete deployment tasks for a story across different environments, from development all the way to production. Developers check in their changes and link them to user stories, improving documentation and traceability while reducing effort. AutoRABIT is built on the combined expertise gained from many agile and DevOps implementations, enabling us to help guide your CI/CD project to success.

Jacques Grillot is an Enterprise Architect at AutoRABIT. Feel free to contact him on LinkedIn.

In any field, professionals just want to do their job well to deliver results and impact. Usually, that satisfaction comes from the knowledge that goals are achieved, and problems are solved. 

Every industry has its own challenges and ways of measuring whether a job is being done as well as it possibly can be. Within a DevOps culture, the challenge is to shorten the development life cycle while delivering the required quality software by following an iterative process.

While it may be possible to accomplish that in different ways, one of the best ways is by adopting Continuous Integration & Continuous Delivery (CI/CD) practices. 

At this point, most people in software know the benefits of CI/CD, but I suspect not everyone understands the intangible benefits. The tangible benefits of quality, speed, and collaboration have been widely discussed and documented on the web. I have not found, however, a lot of material on how CI/CD benefits the organization from the ground up.

Of course, CI/CD is about providing regular checks, detecting errors, and releasing quality code. But it’s also about improving the health of your entire organization and increasing employee/developer satisfaction. 

Here’s what happens when you adopt this practice: 

CI shortens the life cycle of bugs, giving developers more time for strategic projects.

Without continuous integration, a bug can linger for days or even weeks before it is addressed, which is longer than most of us like to wait before fixing a problem in our code. And unfortunately, those mistakes sometimes make it all the way down the chain before they’re sent back to the developers to solve. This is problematic because it’s often difficult for developers to keep track of something they did a week or two ago. Suddenly, they’re expected to stop what they’re doing and revisit work that’s not fresh in their mind.

A developer should be able to check their code into the system and learn within 24 hours or sooner whether they checked in bad code. This ability is exactly what you get when you adopt continuous integration practices. Overnight, the CI system runs tests to verify the change, so when something fails, it’s almost always evident by the following day—not the following week.
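A bare-bones version of that overnight feedback loop might look like the script below, which runs a standard-library unittest suite and exits non-zero on failure so the CI system can flag yesterday’s check-ins. The tests directory and the notification step are assumptions; substitute your own runner and channel.

    import sys
    import unittest

    def nightly_test_run() -> int:
        suite = unittest.TestLoader().discover("tests")           # assumes tests live under ./tests
        result = unittest.TextTestRunner(verbosity=1).run(suite)
        if result.wasSuccessful():
            print("Nightly build: all tests passed.")
            return 0
        print(f"Nightly build: {len(result.failures)} failures, {len(result.errors)} errors.")
        return 1  # non-zero exit tells the CI system to flag the failing check-ins

    if __name__ == "__main__":
        sys.exit(nightly_test_run())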

If you give your IT team the ability to immediately figure out when something is wrong with the code, they’ll learn to make fewer mistakes. And, crucially, they’ll have to spend less time fixing those mistakes.

Less time spent editing code means there’s more time available for working on the strategic projects that are essential to your company’s success.

Developers gain more confidence and coding discipline.

Just as writers use style guides to keep their work uniform, developers also have parameters that describe the best way to write code.

Ideally, once a developer writes a piece of code, he or she will describe it in detail so that the next person who looks at the code will understand how to work with or around the method. In practice, this doesn’t happen very often. Developers hate documentation. Any developer worth their salt will view it as a secondary—and hence, unnecessary—part of the job. I’d say less than half of all developers actually create documentation.

That means you can inevitably expect some resistance to the documentation and processes that continuous integration requires when you first adopt it. But once your team sees the benefits of following the CI guidelines—documents become much clearer, coding discipline improves, developers gain confidence, and the code gets to end users faster—it will be easier to get everyone on board.

Initially, CI practices may seem burdensome, but putting in the documentation work will lead to a healthier organization as coding discipline increases.

As your team develops better processes, your entire company improves.

Unlike many other industries, the tech industry is driven by human capital, and the best way to accelerate is to improve that human capital. Companies spend a lot of time and money on perks, benefits, and systems to improve retention, hiring, morale, and employee satisfaction.

One advantage of adopting CI/CD processes is that the benefits are not strictly technical. Yes, the code gets to end users faster and bugs are caught much earlier. But those technical gains also boost team morale and increase your developers’ satisfaction.

When a developer knows that the CI/CD system will kick back issues the next morning, they gain confidence. They know their work will go live faster. They know they are creating and not maintaining code, meaning they won’t have to backtrack and work on the same code they were working on two weeks ago when a bug is finally caught. And they know they won’t have to burn the midnight oil on a Friday evening before the software is deployed.

This increased satisfaction has a cumulative effect. A happy team makes recruitment much easier, which in turn improves the talent on the team. As you gain more talented individuals, the overall health of your company improves. Management spends less time worrying about morale and improving coding discipline because the CI system itself is helping to build better code and a happier organization.

No other perks can replace this sense of employee contentment (otherwise, all the high-flying and heavily funded startups would have succeeded). Of course, perks help, but job satisfaction for a professional is the visible proof that they are contributing value to their organization. A software developer’s true satisfaction comes from solving a problem and seeing that their code reaches the customer quickly and that the customer uses the feature with ease.

The tangible benefits of this intangible value are satisfied employees, greater retention, and an improved bottom line.

All that leads to increased job satisfaction among the professionals on your team, which is the holistic aspect of continuous integration adoption that is rarely talked about. But it’s arguably just as important as the technical implementation when it comes to the health and happiness of your company.

Vishnu Datla is founder and CEO of AutoRABIT. You can follow Vishnu on Twitter at @vishnuraju.

A common need that teams developing on the Salesforce platform express is the desire for a single solution that will take their developers through the entire lifecycle. To make this easier, Salesforce rolled out Salesforce DX, which provides organizations with integrated tooling designed for end-to-end life cycle management.

Salesforce DX is a revolutionary platform that enables modern development practices with the following three key features:

  1. Scratch Orgs – Lightweight, source-driven development environments that can be created with ease, where changes can be validated, tested, and integrated into the mainline quickly with CI platforms such as AutoRABIT.
  2. Unlocked Packages – Customers and partners get seamless distribution and delivery of apps with unlocked packages; this is a paradigm shift toward upgrading packages rather than battling metadata deployment failures.
  3. CLI – Salesforce DX comes with an integrated CLI that supports scratch org creation, metadata conversion and migration, and data migration (a brief scripted example follows this list).
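As a rough illustration of how these pieces can be scripted in a CI job, the sketch below drives the Salesforce DX CLI from Python to create a scratch org, push source, and run Apex tests. The command and flag names are taken from the classic sfdx force:* namespace and the alias is invented; verify both against your installed CLI version before relying on them.

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    alias = "ci-scratch"  # hypothetical alias for the CI scratch org

    # 1. Create a scratch org from the project's scratch org definition file.
    run(["sfdx", "force:org:create", "-f", "config/project-scratch-def.json",
         "--setalias", alias, "--setdefaultusername", "--durationdays", "1"])

    # 2. Push local SFDX-format source into the scratch org.
    run(["sfdx", "force:source:push", "-u", alias])

    # 3. Run Apex tests and wait for the results.
    run(["sfdx", "force:apex:test:run", "-u", alias, "--resultformat", "human", "--wait", "10"])

    # 4. Delete the scratch org once the pipeline is done.
    run(["sfdx", "force:org:delete", "-u", alias, "--noprompt"])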

The real success that Salesforce DX can drive goes beyond CLI and scratch orgs. A team will be successful with Salesforce DX if they are able to seamlessly implement the three key features outlined above. To fully leverage the power of Salesforce DX, they need the ability to restructure their complex metadata into apps so they can manage shared metadata and dependencies. That may sound obvious, but it’s often overlooked. Teams must also be able to set up a unified development and delivery process driven from ALM systems such as JIRA. Finally, they need to be in line with continuous delivery guidelines so that any complete user story can effortlessly go through the deployment chain to production.

AutoRABIT is an end-to-end CI platform designed specifically for Salesforce. It empowers Salesforce teams to be successful with DX with several out-of-the-box features that Salesforce users have long desired. 

The first feature is modularization. Salesforce architects can now restructure and componentize their source code into applications with the modularization feature in AutoRABIT for Salesforce DX. Modularization offers a few key capabilities that play critical roles in long-term success. The first – and maybe the most important – is the ability to fetch and select the metadata from DevHubs that forms your application. This includes dependent metadata and the ability to create automated test data sets based on the objects selected in the module, along with parent/child relationships. In addition, developers can validate a module for successful deployment, publish the module as an unlocked package, and add dependent unlocked package information and automatically include it in deployments. Finally, they can also push the source code in SFDX-compatible format into Git, along with sample data sets and JSON configuration files.

A second important feature is Scratch Org Management, which allows Salesforce development teams to create contextual scratch orgs that add more power to conventional Salesforce scratch orgs. The Scratch Org Management feature in AutoRABIT for Salesforce DX has two key capabilities that drive organizational success – DevHub management and user story management.

  • DevHub management allows users to register and maintain DevHubs in AutoRABIT as well as create, maintain, and delete scratch orgs directly from AutoRABIT. They can also create contextual scratch orgs that let them set their own features and preferences, load the unlocked packages of base apps at the time of creation, and load intelligent sample data sets into scratch orgs.
  • User story management ensures holistic Application Lifecycle Management (ALM): developers can rely on AutoRABIT to connect the dots between a user story in the ALM, a scratch org in Salesforce, and a feature branch in Git. AutoRABIT lets users create and assign a scratch org for a user story in any ALM (Jira, VersionOne, TFS) or in Git, check validations, review and approve the details of commits, and make sure that all relevant user story information is updated back in the ALM.

Continuous delivery is critical for organizations that need the capability to promote a completed requirement anytime from development to production — and ensure quality and compliance parameters are met. With AutoRABIT for Salesforce DX, a developer can create a scratch org with all the required metadata and data pre-requisites. They can also validate the changes made in scratch orgs, complete the review and approval process for changes, push approved changes to a Git branch, and raise a pull request. Release Manager allows users to merge the approved user stories into a release line and validate them, and automated functional tests can be run post-deployment for further quality assurance. More importantly, the user story will be ready for production deployment.

A comprehensive CI server with a check-in, merge, and deploy framework and full support for SFDX ensures that Salesforce development teams experience continuous delivery for their applications. And for organizations that rely on Salesforce to drive their market success, there’s nothing more important than that.

To learn more, check out our video on Experiencing 360-Degrees with Salesforce DX.

Niranjan Gattupalli is Senior Director of Enterprise Services at AutoRABIT. Follow him on Twitter at @tweet_niranjan.

In the 1990s, evaluating a build vs. buy approach for a line-of-business system (such as HR or finance) was sometimes considered a worthwhile use of time. But nowadays, with feature-rich cloud-based systems, any executive considering building such systems internally would likely be asked, “Why aren’t we focusing our efforts on building customer-facing applications rather than non-differentiating operational systems?” Today, there are many cloud-based DevOps platforms on the market. So, for the rapidly accelerating field of Salesforce-based DevOps, whether for Apex or DX development, decision makers need to weigh a unique set of often unexpected requirements when considering build vs. buy.

Since Salesforce development is different from traditional development, building a DevOps system requires an additional layer of complexity on top of any internally built (or extended Jenkins) platform. Unlike traditional software development, Salesforce development is about managing changing object configurations, keeping sandboxes in sync, avoiding conflicts, and efficiently releasing deployments without breaking Salesforce’s org structures. The effort needed to build these capabilities into a home-grown DevOps platform can catch many IT teams by surprise.

Arun Purushothaman, Salesforce development team leader at Land O’Lakes, went through a build vs. buy evaluation after his team initially tried, but failed, to build a DevOps platform based on Jenkins. One of their key requirements, per Arun, was “to be able to keep sandboxes refreshed and be able to roll back versions instantly.” In addition, Land O’Lakes needed to preserve parent-child relationships when loading data. He points out that the Land O’Lakes teams kept investing in building their own platform, but deployments kept failing because “we weren’t able to address the ability to continuously synch sandboxes and we weren’t able to pre-validate developers’ code submissions.” Land O’Lakes eventually scrapped its admittedly cobbled-together build efforts and proceeded with a purpose-built DevOps platform for Salesforce development.

Land O’Lakes’ experience illustrates the high cost and complexity of trying to build Salesforce metadata awareness and tightly synchronized capabilities into a home-grown DevOps platform. As more purpose-built Salesforce development support is added to a platform, the cost to build and maintain it climbs even higher.
Build vs Buy

A DevOps platform that supports Salesforce’s org structure complexities must not only have all the foundational DevOps capabilities (found in hundreds of vendors’ platforms on the market today), it must also be able to:

  • Parse and interpret Salesforce data structures
  • Spot only the delta changes within large Salesforce file structures and deploy incrementally (sketched in code after this list)
  • Identify destructive changes and provide an early warning before org structures break
  • Monitor and gate every step in the DevOps process to ensure only the healthiest code is promoted to production
  • Keep Salesforce sandboxes in sync so a large development team is aware of the most recent (delta-based) changes
  • Automate otherwise manual steps (those not covered by Salesforce’s metadata API) during pre- and post-deployment phases
  • Roll back changes that are destructive to the Salesforce org structure seamlessly
  • Proactively identify object dependencies in order to preserve parent-child data structures when loading data
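To show what “spotting only the delta changes” can look like in practice, here is a small sketch that assumes the Salesforce metadata lives in a Git repository under a force-app/ directory (an assumption, not a requirement) and lists only the files changed between two commits as the incremental deployment payload.

    import subprocess

    def changed_metadata(base_commit: str, head_commit: str, metadata_root: str = "force-app/"):
        """Return the metadata files that changed between two commits."""
        diff = subprocess.run(
            ["git", "diff", "--name-only", base_commit, head_commit],
            check=True, capture_output=True, text=True,
        ).stdout
        return [path for path in diff.splitlines() if path.startswith(metadata_root)]

    if __name__ == "__main__":
        delta = changed_metadata("origin/main", "HEAD")
        print(f"{len(delta)} changed metadata files to deploy incrementally:")
        for path in delta:
            print("  ", path)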

 

Not only would such a build approach have to deal with the complexity of supporting Salesforce development (listed above), but any enterprise taking it on would also need to establish an ongoing relationship with Salesforce’s own product management team. It would have to monitor and keep up with changes to Salesforce’s platform roadmap and then perform major releases of its internal home-grown DevOps platform each time Salesforce changes its platform.

Needless to say, the time an internal IT team spends building, monitoring, and maintaining its own hand-built DevOps platform is time not spent building the customer-facing, business-differentiating applications its business needs.

Any organization considering a build vs. buy decision must face this time allocation trade-off.

Salesforce provided its Force.com platform-as-a-service (PaaS) so its customers’ IT teams could spend less time building their service delivery stack (hardware, software, etc.) and more time developing business solutions. The same trade-off holds true for building vs. buying a Salesforce DevOps platform.

John Wooden coached the UCLA Bruins to 10 NCAA championships in a 12-year period, including a record 88 straight wins. If you were filling out your March Madness brackets during the Wooden era, UCLA would have been the easy pick to win it all. While his basketball coaching skills are legendary, he is equally recognized as a life coach, with dozens of quotes that are often repeated in any team situation.

Here are some words of wisdom from the “Wizard of Westwood” for your DevOps team:

“Be quick, but don’t hurry”
DevOps was designed to support an agile world and the accelerated pace of business. While DevOps speed is desirable, the process cannot be hurried or undisciplined, or quality suffers and the outcomes become unpredictable. Wooden would stress perfecting fundamentals to get it right. Working as a team with a collaborative set of tools that facilitates the rapid delivery of code into an integrated build with automated testing and deployment can make deployments more than 10 times faster than a traditional process, all while producing quality results. Mastering these fundamental DevOps processes will allow you to be quick with quality, not hurried and haphazard.

“Failure to prepare is preparing to fail”
Setting up your DevOps processes based on evolving best practices and levels of DevOps maturity is critical to the success of your program. Job one for any company embarking on DevOps is a readiness assessment. It will identify the organization’s needs for team development, process definition, performance metrics, and continuous delivery tools and services. You don’t have to do much research to discover that DevOps is a cultural change as much as a technology approach. Preparing your organization for this important shift before you buy technology is how you deliver on business agility and avoid failure.

“If you’re not making mistakes, then you’re not doing anything. I’m positive that a doer makes mistakes.”
While preparing to avoid failure is sage advice, it does not mean you will not make mistakes. The only way to avoid mistakes is to not transition to continuous delivery. But then your business will likely suffer as others move to the new pace of delivery. Making mistakes is part of putting the process in place: let your developers be responsible for error-free commits, learn from the process, and re-engage each and every time to get comfortable with the pace and expectations. Hundreds of companies have transitioned to DevOps and continuous delivery, and not one made the journey without its fair share of mistakes.

“It’s the little details that are vital. Little things make big things happen.”
DevOps is not a big-bang transformation, but rather a series of small steps that transition your organization from two independent operations into a collaborative, continuous process with more speed and quality than ever imagined. There are several small steps in the process that can be rolled out and perfected before moving on to the next, and then the next. Know where you are on the maturity model and take deliberate steps to move up the scale. Big things will happen.

There are many more words of wisdom from Wooden – take a look. 

Good luck with your journey to become a model DevOps organization. The competition is getting more intense every year and your company is counting on you to make this happen.  One final quote from John Wooden: “It isn’t what you do, but how you do it.”  Do it well.

Dean Alms is VP of Product and Strategy at AutoRABIT. Follow him on LinkedIn.

The term DevOps has been around for over a decade. Most accounts say that Belgian Patrick Debois coined the phrase around 2007 as a way to ease his frustration with his role as a project manager and agile consultant helping the Belgian government with data center migrations. In particular, his responsibility for readiness and certification required him to combine the activities and relationships of application development teams and operations teams: DevOps. So what is DevOps in today’s terminology, and why should you care?

DevOps Is Not a Process

There’s a lot of confusion about what DevOps really is. It’s not a process. It’s not a technology. And it’s not a standard. It’s really more of a movement, or cultural shift, that focuses on rapid information technology service delivery through the adoption of agile, lean practices and a system-oriented approach. DevOps is different because it places its emphasis on people, with the goal of improving collaboration between operations and development teams. DevOps implementations typically rely on technology, in particular automation tools that can leverage a programmable, rapidly changing infrastructure, and they do so from a life cycle point of view.

Amazon Web Services defines DevOps as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.”

Developers typically try to push out new software faster and faster; that’s usually why they are hired. Operations, on the other hand, tends to be the voice of reason, trying to slow the pace somewhat so that proper safeguards can be put in place to maintain system stability.

Why DevOps Matters

In the DevOps approach, the silos between development and operations teams are torn down. Engineers work side by side with operations, quality assurance, and security teams throughout the entire application lifecycle. This means solutions in an organization’s tech stack can be developed more quickly and more reliably, increasing the organization’s overall velocity. The key benefits most companies see from a DevOps approach include:

  • Speed of Innovation – companies can innovate faster, helping their customers stay on top by driving business results.
  • Rapid Delivery – as companies innovate faster, new features and bug fixes can be rolled out continuously to better satisfy the needs of their customers and gain a competitive advantage.
  • Reliability – using continuous integration and delivery to test every change provides assurance that functionality is maintained. Monitoring and logging practices help keep everyone informed in real time.
  • Improved Collaboration – because of the close-knit nature of a DevOps culture, teams work better together, take more ownership of their work, and hold themselves more accountable. This reduces inefficiencies as responsibilities and workflows are shared between development and operations teams.

All of this leads to a more secure and scalable methodology that can be utilized across industries. Software moves from simply supporting business initiatives, to becoming an integral part of every business unit, thus driving more efficiency and profitability for the entire company.

To learn more about whether adopting a DevOps methodology is right for you, watch Salesforce DevOps: Which Path Should You Take – Build or Buy. This webinar replay is hosted by Salesforce MVP Eric Dreshfield and features AutoRABIT customer Arun Manjila Purushothaman, DevOps Automation Engineer at Land O’Lakes.