
4 posts tagged with "Devops"


Product-Management in Agile Projects: Addressing Technical Debt in DevOps Projects

· 5 min read

While developing products in DevOps teams, we decide which features to build and how to ship them quickly in order to meet customer requirements. Often these decisions cause more problems in the long run. Decisions of this kind lead to “technical debt”.

Tech debt is a phenomenon that arises when we prioritise speed of delivery now by forgoing concerns like code quality or maintainability. Although rapid delivery is key to staying relevant in an agile world, we also have to make sure that our changes are sustainable.

In this article, we’ll talk about what technical debt is, how to handle quick decisions during development, and give examples to help you understand how to avoid future issues.

Tech debt is the extra work that has to be done later because of the technical decisions we make now. Although the term was coined by software developer Ward Cunningham in 1992, it still holds relevance today.

Technical debt usually occurs when teams rush to push new features within deadlines, writing code without thinking about other considerations such as security and extensibility. Over time the tech debt grows and becomes difficult to manage, until the only way to deal with it is to overhaul the entire system and rewrite everything from scratch. To prevent this scenario, we need to groom the tech debt continuously, and for that we need to understand the type of tech debt we are dealing with.

Causes of Tech Debt:

Prudent and deliberate: Opting for swift shipping and deferring consequences signifies deliberate debt. This approach is favoured when the product’s significance is relatively low, and the benefits of quick delivery outweigh potential risks.

Reckless and deliberate: Despite knowing how to craft superior code, prioritising rapid delivery over quality leads to reckless and deliberate debt.

Prudent and inadvertent: Prudent and inadvertent debt occurs when there’s a commitment to producing top-tier code, but a superior solution is discovered post-implementation.

Reckless and inadvertent: Reckless and inadvertent debt arises when a team strives for excellence in code without possessing the necessary expertise. Often, the team remains unaware of the mistakes they’re making.

Given these different causes of tech debt, let’s try to understand the types of tech debt. These can be broadly categorised under three main heads.

Types of Tech Debts:

  • Code Debt: When we talk about tech debt, code debt is the first thing that comes to mind. It arises from bad coding practices, not following proper coding standards, insufficient code documentation, and so on. This type of debt causes problems with maintainability, extensibility, security, etc.
  • Testing Debt: This occurs when the overall testing strategy is inadequate, which includes the absence of unit tests, integration tests, and adequate test coverage. This kind of debt makes us lose confidence when pushing new code changes and increases the risk of defects and bugs surfacing in production, potentially leading to system failures and customer dissatisfaction.
  • Documentation Debt: This manifests when documentation is either insufficient or outdated. It poses challenges for both new and existing team members in comprehending the system and the rationale behind certain decisions, thereby impeding efficiency in maintenance and development efforts.

Architecture Debt:

  • Design Debt: This results from flawed or outdated software architecture or design choices. It includes overly complex designs, improper use of patterns, and a lack of modularity. Design debt creates obstacles to scalability and the smooth incorporation of new features.
  • Infrastructure Debt: This is linked to the operational environment of the software, encompassing issues such as outdated servers, inadequate deployment practices, or the absence of comprehensive disaster recovery plans. Infrastructure debt can result in performance bottlenecks and increased periods of downtime.
  • Dependency Debt: This arises from reliance on outdated or unsupported third-party libraries, frameworks, or tools. Such dependency exposes the software to potential security vulnerabilities and integration complexities.

People/Management Debt:

  • Process Debt: This relates to inefficient or outdated development processes and methodologies. It includes poor communication practices, a lack of adoption of agile methodologies, and a lack of robust collaboration tools. Additionally, not automating the process can greatly affect the software delivery’s agility.
  • People/Technical Skills Debt: This occurs when the team lacks essential skills or knowledge, resulting in the implementation of sub-optimal solutions. Investing in training and development initiatives can help reduce this type of debt.

Managing and Prioritising Tech Debt

Technical debt is something that happens when teams develop products in an agile way. It’s like borrowing against the future by taking shortcuts now. But if the team knows about this debt and has a plan to deal with it later, it can actually help prioritise tasks. Whether the debt was intentional or not, it is crucial that the team grooms the technical debt during a backlog refinement session.

Value to the customer vs. the cost of solving it

  1. Do It Right Away: These tasks are crucial for the product’s smooth operation.
  2. A Worthy Investment: These tasks contribute to the product’s long-term health, such as upgrading outdated systems.
  3. Quick and Easy Wins: These are minor tasks that can be fixed easily. They’re great for familiarising new team members with the product.
  4. Not Worth Considering: Sometimes, the problem might solve itself or it might not be worth the time and effort to fix, especially if a system upgrade or retirement is planned.

While facing deadlines and working on new products, it’s easy to overlook accumulating technical debts. But if left unchecked, these debts can cause long-term problems. It’s key to balance the need for quick solutions with the importance of long-term stability.

While fast delivery and continuous improvement are central to agile development, it’s important to be mindful of accruing technical debts. Effectively managing technical debt can help ensure your projects’ long-term success.

Liked my content? Feel free to reach out on my LinkedIn for interesting content and productive discussions.

Exploring an Object-Oriented Jenkins Pipeline for Terraform: A novel architecture for a multi-stage Terraform CD pipeline in Jenkins to improve CI/CD granularity

· 3 min read

Usually, when we perform terraform plan, terraform apply, or terraform destroy, we apply these actions to all the resources in our target files, often main.tf (you can use any name for the file; this name is just a convention).

In the age of CI/CD, when we have everything as pipelines, from data and application code to infrastructure code, it is usually difficult to achieve this granularity. Usually, at least in Terraform, we have three different pipelines to perform the three different actions: terraform plan, terraform apply, and terraform destroy. And when we select a certain action (let's say terraform plan), that action is performed on all the stages and all the resources within the pipeline.
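For context, Terraform itself offers a way to scope an action to part of a configuration via the -target flag; the module addresses below are illustrative, not taken from the repository:

```shell
# Plan only the networking module instead of the whole configuration
terraform plan -target=module.networking

# Apply two modules while leaving the rest untouched
terraform apply -target=module.networking -target=module.compute

# Destroy a single module's resources
terraform destroy -target=module.notifications
```

The pipeline described in this article automates this kind of per-module, per-action selection instead of requiring engineers to type -target flags by hand.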

But when we observe all these pipelines, there is a commonality that can be abstracted out into a generalisation from which the dynamic behaviour can be inherited. Just as a class can be instantiated into different objects with different attribute values, is it possible to create a similar base class (read: pipeline) which, when instantiated, creates different pipeline objects?

One Pipeline to Create Them All

The Modular Infrastructure

To build this class-based pipeline, we first need a Terraform script. The script should be loosely coupled and modular in nature. For this, we have created a modular script with three modules named “Networking,” “Compute,” and “Notifications.” The components that each of these modules creates are as follows:

  1. Networking: 1 VPC and 1 subnet
  2. Compute: 1 IAM role, 1 Lambda, 1 EC2 t2.micro instance
  3. Notifications: 1 SNS topic and 1 email subscription
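A root configuration wiring these three modules together might look like the following sketch; the module paths and variable names here are assumptions for illustration, not taken from the linked repository:

```hcl
# main.tf — root configuration calling the three modules
module "networking" {
  source     = "./modules/networking"
  cidr_block = "10.0.0.0/16"
}

module "compute" {
  source    = "./modules/compute"
  subnet_id = module.networking.subnet_id
}

module "notifications" {
  source      = "./modules/notifications"
  alert_email = "ops@example.com"
}
```

Because each module is self-contained, any one of them can later be targeted independently by the pipeline.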

And the file structure is as follows:
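The screenshot from the original post is not reproduced here; a typical layout for such a three-module setup (file names assumed) looks like this:

```
.
├── Jenkinsfile
├── main.tf
├── variables.tf
└── modules/
    ├── networking/
    │   ├── main.tf
    │   └── variables.tf
    ├── compute/
    │   ├── main.tf
    │   └── variables.tf
    └── notifications/
        ├── main.tf
        └── variables.tf
```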

Once we have this ready, let’s create a Groovy script in declarative style in a Jenkinsfile.

Class-Based Jenkins Pipeline

To create this class-based architecture style, which flexibly creates pipeline objects at the action and resource level, we are going to utilize a Jenkins feature called “parameters.” This feature helps us create multiple objects from a single base-class Jenkins pipeline. In this example, let’s create three actions, namely:

  • terraform plan: This creates and prints out a plan of the resources that we are going to create in the respective provider (AWS, Kubernetes, GCP, Azure, etc.)
  • terraform apply: This command creates the resources in the respective provider and creates a state-file that saves the current state of resources in it.
  • terraform destroy: This removes all the resources that are listed within the state-file.

These actions are performed on three modules/resources namely “Networking,” “Compute,” and “Notifications.”
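A minimal sketch of such a parameterised declarative pipeline is shown below. The parameter names, stage layout, and flag handling are assumptions for illustration; the real Jenkinsfile is in the repository linked at the end of the post.

```groovy
pipeline {
    agent any
    parameters {
        choice(name: 'ACTION', choices: ['plan', 'apply', 'destroy'],
               description: 'Terraform action to run')
        booleanParam(name: 'NETWORKING', defaultValue: false,
                     description: 'Target the networking module')
        booleanParam(name: 'COMPUTE', defaultValue: false,
                     description: 'Target the compute module')
        booleanParam(name: 'NOTIFICATIONS', defaultValue: false,
                     description: 'Target the notifications module')
    }
    stages {
        stage('Init') {
            steps { sh 'terraform init' }
        }
        stage('Networking') {
            when { expression { params.NETWORKING } }
            steps {
                script {
                    // plan takes no -auto-approve; apply and destroy do
                    def approve = params.ACTION == 'plan' ? '' : '-auto-approve'
                    sh "terraform ${params.ACTION} ${approve} -target=module.networking"
                }
            }
        }
        // Stages for Compute and Notifications follow the same pattern,
        // guarded by their own boolean parameters.
    }
}
```

Each build of this single job then instantiates a concrete pipeline object: the selected action applied only to the selected modules.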

The above parameters create a UI for the end user, as shown below, which would help the end user to create objects of the base pipeline on the fly.

Based on the actions selected and the resources on which those actions have to be performed, Jenkins creates a dynamic pipeline according to your requirement. In the picture below, we can see that we ran terraform apply on the networking and compute resources in run #24, terraform apply on networking and notifications in run #25, and, to clean up the infrastructure, terraform destroy in run #26.

The approach implemented here is more in line with continuous delivery principles than continuous deployment.

For the Jenkins file and Terraform code, refer to this link.

**Want to Connect?** Feel free to reach out on my [LinkedIn](https://www.linkedin.com/in/krishnadutt/) for interesting content and productive discussions.

DevOps Wizardry: Crafting Your Parlay GitHub Action - Improve your Development Process with Personalized Custom Automation

· 7 min read

Recently, while trying to integrate a DevSecOps tool into my pipeline, I was looking for a GitHub Action that would simplify my workflow. Since I could not find one, I had to write the commands inline in the script. Although that is not a huge hassle, it would be beneficial to have an action that we could call directly, pass parameters to, and run within the pipeline.

In this blog, I will walk you through the steps to create a custom GitHub Action that satisfies your requirements. The blog has two parts:

  • Understanding what GitHub Actions are
  • Creating your custom GitHub Action

GitHub Actions:

Often when we write pipelines, we have a set of actions we would like to perform based on the type of application we are developing. To run these actions across the repos in our organization, we would have to copy-paste this code across repositories, which makes the process error-prone and a maintenance tussle. It would be better to take the DRY principle of software engineering and apply it to the CI/CD world.

A GitHub Action is exactly this principle in practice. We create and host the required action in a public GitHub repository, and this action can then be used across pipelines to perform the behaviour it defines. Now that we understand what a GitHub Action is, let’s explore how we can build a custom action that automates a set of steps. In this blog, I illustrate it with the SBOM enrichment tool Parlay, for which I have built a custom action.
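As a sketch of the difference, compare running the tool inline with calling a published action. The action reference below is the one built later in this post; the inline commands and file names are illustrative:

```yaml
# Inline: every repository repeats these commands
- name: Enrich SBOM inline
  run: |
    wget -qO- https://github.com/snyk/parlay/releases/download/v0.1.4/parlay_Linux_x86_64.tar.gz | tar xz
    ./parlay ecosystems enrich sbom.json > enriched.json

# Reusable: one step, shared across all repositories
- name: Enrich SBOM via action
  uses: krishnaduttPanchagnula/parlayaction@main
  with:
    input_file_name: sbom.json
    enricher: ecosystems
    output_file_name: enriched.json
```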

Creating a Custom Action: A Case Study on Parlay

We will be creating our custom action in the following steps:

  • Defining inputs and outputs in action.yml
  • Developing business logic in bash script
  • Dockerize the bash application
  • Test the action
  • Publish it in the GitHub Actions Marketplace

Defining inputs and outputs in action.yml

To start creating the custom action, create a Git repository, clone it to your local system, and open it in your favourite code editor. We start by creating a file named action.yml. This action.yml defines the inputs the action takes, the outputs it gives, and the environment it runs in. For our use case we have three inputs and one output. The action.yml should have the following arguments:

  • name: The name of the action, which is used to search the GitHub Actions Marketplace. Since it will be published in the Marketplace, its name should be globally unique, like an S3 bucket name.
  • description: Describes what your action does. This helps users identify which action is the right fit for their use case.
  • inputs: Defines the list of options used within the action. These can be compulsory or optional, controlled with the “required” argument. In our current use case we pass three arguments: input_file_name, enricher, and output_file_name.
  • outputs: Lists the outputs that the action gives.
  • runs: Defines the environment in which the action executes, which in our case is Docker.

The action.yml will look something like this:

```yaml
# action.yml
name: "Parlay Github Action"
description: "Runs Parlay on the given input file using the given enricher and outputs it in your given output file"
branding:
  icon: "shield"
  color: "gray-dark"
inputs:
  input_file_name:
    description: "Name of the input SBOM file to enrich"
    required: true
  enricher:
    description: "The enricher used to enrich the parlay sbom. Currently parlay supports ecosystems, snyk, scorecard (openssf scorecard)"
    required: true
    default: ecosystems
  output_file_name:
    description: "Name of the output file to save the SBOM enriched using the parlay's enricher"
    required: true
outputs:
  output_file_name:
    description: "Prints the output file"
runs:
  using: "docker"
  image: "Dockerfile"
  args:
    - ${{ inputs.input_file_name }}
    - ${{ inputs.enricher }}
    - ${{ inputs.output_file_name }}
```

Developing business logic in bash script

Once we have defined the inputs, outputs, and environment, we need to define what we are going to do with those inputs (basically our logic) in a file. We can define this either in JavaScript or in bash. For my current use case, I am using bash.

In my current logic, I first check whether all the inputs are given; if not, the action fails. Once I have these three arguments, I construct the command to run Parlay and save its output to an output file. The file is then printed to stdout, formatted using the jq utility.

```bash
#!/bin/bash
# entrypoint.sh

# Check that all three arguments are provided
if [ "$#" -ne 3 ]; then
  echo "Usage: $0 <input_file_name> <enricher> <output_file_name>"
  exit 1
fi

# Extract arguments
INPUT_INPUT_FILE_NAME=$1
INPUT_ENRICHER=$2
INPUT_OUTPUT_FILE_NAME=$3

# Construct the command
full_command="parlay $INPUT_ENRICHER enrich $INPUT_INPUT_FILE_NAME > $INPUT_OUTPUT_FILE_NAME"
eval "$full_command"

# Check whether the command was successful
if [ $? -eq 0 ]; then
  echo "Command executed successfully: $full_command"
  jq . "$INPUT_OUTPUT_FILE_NAME"
else
  echo "Error executing command: $full_command"
  exit 1
fi
```

Dockerize the bash application

Once we have the bash script ready, we will dockerize it using the following Dockerfile. Whenever the action is invoked, the logic defined in the bash script runs in an isolated Docker container. In addition to the bash script in entrypoint.sh, we also add the required packages, such as wget and jq, and install the parlay binary.

```dockerfile
# Base image
FROM --platform=linux/amd64 alpine:latest

# Install the packages required by our script
RUN apk add --no-cache bash wget jq

# Install parlay
RUN wget https://github.com/snyk/parlay/releases/download/v0.1.4/parlay_Linux_x86_64.tar.gz
RUN tar -xvf parlay_Linux_x86_64.tar.gz
RUN mv parlay /usr/bin/parlay
RUN ls /usr/bin | grep parlay
RUN parlay

# Copy the repository contents into the image filesystem
COPY . .

# Make the script executable
RUN chmod +x /entrypoint.sh

# File to execute when the container starts
ENTRYPOINT ["/entrypoint.sh"]
```

Test the action

No piece of software is good without running some tests on it. To test the action, let’s first push the code to GitHub. Once pushed, let’s define the pipeline in a pipeline.yaml file in the .github/workflows folder. For the input file, I am using a sample SBOM file in CycloneDX format, which I have pushed to GitHub. In my pipeline.yaml file, I clone the GitHub repo and use my action, krishnaduttPanchagnula/parlayaction@main, on cyclonedx.json.

```yaml
on: [push]
jobs:
  custom_test:
    runs-on: ubuntu-latest
    name: We test it locally with act
    steps:
      - name: Checkout git branch
        uses: actions/checkout@v1

      - name: Run Parlay locally and get result
        uses: krishnaduttPanchagnula/parlayaction@main
        id: parlay
        with:
          input_file_name: ./cyclonedx.json
          enricher: ecosystems
          output_file_name: enriched_cyclonedx.json
```

Once the pipeline runs, it should produce output on stdout in the pipeline console, as follows.

Parlay Github action Output

Publish it in the GitHub Actions Marketplace

Once we have tested the action and it is running fine, we are going to publish it to the GitHub Actions Marketplace. To do so, our custom action should have a globally unique name. To make it more recognisable, we can add an icon with a custom symbol and colour to uniquely identify the action in the Marketplace.

Once that is done, you will see the “Draft a Release” button. Ensure that your action.yml file has a name, description, icon, and color.

Once you have the tick marks, you will be guided to the release page, where you can enter the title and version of the release. After that, click “Publish release,” and you should be able to see your action in the GitHub Actions Marketplace.

Liked my content? Feel free to reach out on my LinkedIn for interesting content and productive discussions.

Pushing Digital Transformation Boundaries beyond Technology: A Radical Perspective

· 5 min read

Digital transformation is a radical re-imagination of how an organization utilizes bleeding-edge technologies to fundamentally change their business models and performance. Implementing technology in both processes and products is key to digital transformation, as it is not just about implementing new technologies, but about fundamentally changing the way an organization functions.

Digital transformation is both digital and cultural. On the digital side, it involves the implementation of new technologies and the optimization of processes and systems to take advantage of those technologies. This can include things like cloud computing, data analytics, automation, and other cutting-edge technologies.

However, digital transformation is not just about the technology. It also involves a cultural shift within the organization. This includes things like customer-centricity, agility, continuous learning, unbounded collaboration, and an appetite for risk. These cultural changes are necessary to enable the organization to stay competitive in an increasingly digital world.

Current digital transformation efforts focus on the digital (implementing new technologies) but not on the transformation, which is about how we function as individuals in a social setting. Lots of companies are implementing the cool ideas of DevOps while nothing fundamentally changes in how they work and function.

Why should we care about Digital Transformation:

As technology advances, it is essential to change the way we live, work, and do business. Organizations that fail to adapt to these changes will struggle to stay competitive and may eventually be left behind.

Digital transformation is not just about implementing new technologies, but about fundamentally changing the way an organization functions.

This includes optimizing processes, products, systems, and organizational structure to take advantage of the latest technologies. By embracing digital transformation, organizations can improve their business performance, reduce costs, and increase efficiency.

Digital transformation can also help organizations improve their customer experience. By using technology to collect and analyze data, organizations can better understand their customers and provide personalized, relevant products and services. This can lead to increased customer satisfaction and loyalty.

Characteristics of a digital transformation culture:

  1. Customer-centricity: In the past, organizations would implement the same transformation strategies for all of their customers. However, this one-size-fits-all approach is no longer effective. Today’s organizations must consider each customer’s unique vision and goals, and create personalized transformation strategies that align with those goals.
  2. Agility: In a rapidly-changing world, organizations must be able to pivot quickly and adapt to new situations. Hierarchical structures, while useful for reliability, can be a hindrance to agility. As such, many organizations are adopting agile methodologies and flattening their hierarchies to enable faster decision-making and response times.
  3. Continuous learning : As technology and the world around us change rapidly, organizations must be able to adapt and learn new skills and knowledge. This requires a culture of curiosity and a willingness to try new things. Organizations are hiring and working with individuals who are open to new ideas and ready to build new products and services.
  4. Unbounded collaboration: In the past, teams within organizations would often work in silos, with limited communication and collaboration across teams. Today, organizations are fostering cultures that encourage and incentivize cross-team collaboration. This cross-functional knowledge sharing leads to more innovation and better results.
  5. Appetite for risk: Many of the most exciting innovations are created at the edge of what is currently known. This requires organizations to venture into unknown territories, which can be risky. However, by fostering a culture of intelligent failure (failures that occur when trying to do new things) and minimizing preventable failures (failures due to sloppy work), organizations can improve their appetite for risk and drive innovation.

Actions to instill cultural change towards digital transformation:

  1. Communicate the importance and benefits of digital transformation: Employees may be resistant to change, especially if they do not understand why it is necessary. By communicating the importance and benefits of digital transformation, organizations can help employees understand why it is necessary and how it will benefit the organization and its customers.
  2. Encourage and reward experimentation: Digital transformation requires a culture of continuous learning and experimentation. Organizations should encourage employees to try new things and should reward them for their efforts, even if those efforts don’t always lead to success.
  3. Foster collaboration and knowledge sharing: Digital transformation often involves cross-functional collaboration and the sharing of knowledge and expertise across teams. Organizations should foster a culture that encourages and incentivizes collaboration and knowledge sharing.
  4. Provide training and support: Digital transformation can be a daunting process, especially for employees who are not familiar with the latest technologies. Organizations should provide training and support to help employees learn new skills and adapt to the changes brought about by digital transformation.
  5. Create a positive, inclusive culture: Digital transformation can be stressful and disruptive, especially for employees who may feel threatened by the changes it brings. Organizations should strive to create a positive, inclusive culture that supports and empowers employees during the transformation process.

The future of digital transformation is uncertain, but it is likely that technology will continue to advance and play an increasingly important role in our lives and in business. Organizations must continue to embrace digital transformation in order to stay competitive and adapt to the changing digital landscape. By taking steps to improve the social culture around digital transformation, organizations can make it easier for employees to adapt to the changes brought about by digital transformation.