I am Susil

RazorpayX helps businesses automate repetitive financial payments and provides cash-flow insights.

It can be used to process

  • Transaction refunds
  • Employee salary payouts
  • Vendor payments

Additional deductions such as PF and TDS can also be applied.

You can use an existing Razorpay (payment acceptance) account to enable transactions. It provides a clean user interface for the steps mentioned above: adding funds, creating contacts, and making payouts. The web portal has an additional feature where you can upload an Excel file with payment details for bulk processing. In addition, APIs are provided for integrating programmatic payments into your application.

Payments can be made to debit cards, credit cards, or UPI IDs without creating an account for the transfer. Amazon Pay accounts can be recharged, account statements can be generated for any given transaction period, and custom payment types can be created.

Webhooks are available to push payment-processing status back to your application, without the additional hassle of polling for transaction details.


Azure Pipelines Scaler – KEDA

In our previous article we looked at KEDA and the scalers it offers. Here we will discuss the Azure Pipelines scaler, which is offered to scale build agents. The Kubernetes HorizontalPodAutoscaler offers scale-out/in functionality based on memory or CPU utilization, but only a few projects' CI/CD workloads ever reach the configured resource thresholds. If you have configured min and max pods as 10 and 50 respectively, new pods will hardly ever be created. This causes most build jobs to be queued, and even when there are no jobs, 10 idle pods keep adding to your cloud bill.

KEDA’s Azure Pipelines scaler solves this by listening to the agent pool for pending job requests. When a new build job is created and enters the pending state, an event is triggered; the scaler configured in your cluster listens for this event and spins up a new pod to serve the request.

Scaler configuration

The Azure Pipelines scaler has three resources to configure.

  1. Secret – Create a PAT with only agent pool read access, then create a Kubernetes Secret whose personalAccessToken key holds the new PAT value. If you already have a PAT for your build agents, do not reuse it: it might carry additional permissions, and Secret values can be decoded.
  2. TriggerAuthentication – Defines how the scaler should authenticate when communicating with Azure DevOps. It can read auth information from a Secret or from environment variables.
  3. ScaledObject – Contains the configuration that describes how to scale the agents and which authentication to use. Scaling information and the authentication reference are configured in the triggers section, along with the resource to be scaled; here we will associate it with the Azure build agent deployment.
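The Secret’s personalAccessToken value must be base64-encoded before it goes into the manifest. A quick way to produce the encoded value (the token string here is a hypothetical placeholder, not a real PAT):

```shell
# Base64-encode the PAT for the Secret's data field.
# 'my-agent-pool-pat' stands in for your real token.
echo -n 'my-agent-pool-pat' | base64
```

Alternatively, kubectl create secret generic pipeline-auth --from-literal=personalAccessToken=<PAT> performs the encoding for you.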


apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: pipeline-auth
data:
  personalAccessToken: encodedpersonalaccesstoken
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: pipeline-trigger-auth
  namespace: default
spec:
  secretTargetRef:
    - parameter: personalAccessToken
      name: pipeline-auth
      key: personalAccessToken
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-pipelines-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: azdevops-deployment
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
  - type: azure-pipelines
    metadata:
      poolID: "1"
      organizationURLFromEnv: "AZP_URL"
    authenticationRef:
      name: pipeline-trigger-auth

The scaleTargetRef references the name of the deployment that will be scaled. The agent pool id is required and can be obtained using the REST API; the organization URL is read from the build agent pod’s environment variable.

After creating the resources, you can check whether the scaler is authenticating and all configuration is correct using the command below:

kubectl get scaledobject azure-pipelines-scaledobject

If the ACTIVE and READY columns show true, the configuration is good and the scaler is listening for events from the Azure DevOps agent pool.

Note: agents in the Azure DevOps agent pool are shown as offline even after their pods are terminated. When the scaler retrieves pending job requests, the response includes these offline agents’ details; if the response grows huge, the request will time out and the scaler may fail to scale agents. To work around this issue, create a CronJob that removes offline agents periodically.
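Such a cleanup CronJob could be sketched as follows. This is illustrative only: the schedule, image, organization URL, and pool id are assumptions, and it reuses the pipeline-auth Secret; the Azure DevOps agents endpoint it calls is the standard distributedtask REST API.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: remove-offline-agents
  namespace: default
spec:
  schedule: "0 2 * * *"          # nightly cleanup
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: badouralix/curl-jq   # any image with curl and jq will do
            env:
            - name: AZP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: pipeline-auth
                  key: personalAccessToken
            command:
            - /bin/sh
            - -c
            - |
              POOL_ID=1                              # hypothetical pool id
              ORG_URL=https://dev.azure.com/myorg    # hypothetical org URL
              # List agents, pick offline ones, delete each by id
              for id in $(curl -s -u ":$AZP_TOKEN" \
                  "$ORG_URL/_apis/distributedtask/pools/$POOL_ID/agents?api-version=6.0" \
                  | jq -r '.value[] | select(.status=="offline") | .id'); do
                curl -s -X DELETE -u ":$AZP_TOKEN" \
                  "$ORG_URL/_apis/distributedtask/pools/$POOL_ID/agents/$id?api-version=6.0"
              done
```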

Configure a minimum of one pod and your desired maximum, and observe how agents are scaled based on job requests.


KEDA scales Kubernetes containers based on events from various external sources or internal metrics. Kubernetes supports horizontal scaling through the HPA (HorizontalPodAutoscaler), which scales a replication controller, ReplicaSet, StatefulSet, or Deployment based on memory or CPU utilization. The HPA is a Kubernetes API resource and a controller that periodically adjusts the number of replicas.

Kubernetes also supports custom metrics, so pods can be scaled on external metrics beyond CPU and memory. KEDA works with the HPA and extends its functionality without duplicating it.
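For reference, a plain HPA scaling a Deployment on CPU utilization looks like this (a minimal sketch; the deployment name web and the thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```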



Event-driven – It scales resources based on events occurring inside the cluster or coming from any external source.

Scalers – It provides out-of-the-box scalers that are vendor agnostic and support a multitude of software at different application levels. Below are a few of the scalers; the entire list is available here

  • Apache Kafka
  • AWS CloudWatch
  • Azure Pipelines
  • MongoDB
  • Prometheus

Workloads – It supports scaling different types of resources: Deployment, CronJob, Pods, etc.

Most scalers have two parts to configure for scaling resources:

  1. Trigger Authentication – A scaler might require authentication to listen for events. The authentication configuration is stored separately from the objects being scaled and can be reused across them. It can retrieve information from Kubernetes Secrets and environment variables.
  2. Scaled Object – This custom resource contains details about the type of scaler being used, the min and max replica counts, the authentication reference (using details from the Trigger Authentication), and most importantly the object being scaled (Deployment, Pod, Job).
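As a sketch of how these parts fit together, here is a ScaledObject using the Prometheus scaler (the deployment name, Prometheus address, query, and threshold are all hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: web-scaledobject
spec:
  scaleTargetRef:
    name: web                    # hypothetical deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring:9090   # hypothetical address
      query: sum(rate(http_requests_total[2m]))
      threshold: "100"           # add replicas as the query value exceeds 100
```

The Prometheus scaler needs no authentication here; scalers that do would add an authenticationRef pointing at a TriggerAuthentication resource.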


Installing KEDA using Helm is very simple:

  1. Add Helm repo
    helm repo add kedacore https://kedacore.github.io/charts
  2. Update Helm repo
    helm repo update
  3. Install keda Helm chart
    kubectl create namespace keda
    helm install keda kedacore/keda --version 1.4.2 --namespace keda

The Helm chart installs the keda-operator and metrics-apiserver pods:

$ kubectl get po -n keda
NAME                                               READY   STATUS    RESTARTS   AGE
keda-operator-7fc5699d47-ng7rk                     1/1     Running   0          75s
keda-operator-metrics-apiserver-57fc85685f-zqnzr   1/1     Running   0          75s

$ kubectl get apiservices v1beta1.external.metrics.k8s.io
NAME                              SERVICE                                AVAILABLE   AGE
v1beta1.external.metrics.k8s.io   keda/keda-operator-metrics-apiserver   True 

We have now installed the KEDA operator and the custom metrics server in our cluster.

In the next blog we will look at installing a scaler to auto-scale Azure DevOps build agents.


.NET Core has been seeing drastic improvements in its startup performance. It separated the service extensions required for ASP.NET and console applications, and now with .NET 6 it has gone further, providing bare-minimum startup code. You will need the .NET 6 preview SDK installed.

Create a new web application from the terminal with the .NET 6 preview SDK:

dotnet new web -o minimalapi

The project is created with just Program.cs; there is no Startup.cs file containing all the bootstrapping code:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

app.MapGet("/", () => "Hello World!");

app.Run();


It is just super simple hello-world code, something any dotnet developer would have wanted, similar to starting a Node.js application. Run the application and you will be presented with “Hello World!”.


You can add service extensions as in a full-blown application by adding a package reference and including code along these lines:

var builder = WebApplication.CreateBuilder(args);

// builder.Services.Add... registrations go here, as in Startup.ConfigureServices

var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run();


Refer to Hanselman’s blog post for additional details.


Kubernetes deprecates API versions periodically, with major changes in v1.16 and the current release v1.22. Old APIs are deprecated over a few releases and eventually removed.

Kubernetes clusters need to be upgraded to the latest version whenever CVEs are found, so we need a progressive plan to identify and fix deprecated APIs. If you try to apply a resource with a removed apiVersion, the deployment will fail while trying to resolve the version. This article goes over the steps to identify and update apiVersions before upgrading the Kubernetes cluster version.
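For example, Deployments written against the old extensions/v1beta1 group were removed in v1.16 and must move to apps/v1, which also makes the selector field mandatory (resource names and image here are illustrative):

```yaml
# Before (removed in Kubernetes v1.16):
#   apiVersion: extensions/v1beta1
# After:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:            # required under apps/v1
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
```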

Identify deprecated APIs

You could check the cluster for the available API versions yourself, but kubectl returns the first matching version, which can be confusing. Various tools are available to identify deprecated versions, with lots of options; we will discuss a couple here.

Pluto is a tool provided by Fairwinds that finds deprecated Kubernetes apiVersions in your code repositories and Helm releases. You can download the latest release from their repository (note: this tool is not available for Windows). Installation steps are on their documentation page.

Detect deprecated APIs in chart files in a folder:

pluto detect-files -d application/chartsfolder

Detect deprecated APIs in Helm releases:

pluto detect-helm -owide

Kubent (kube-no-trouble) is another utility that identifies deprecated Kubernetes APIs. In addition to files and Helm releases, it also looks for deprecated APIs in the cluster itself. You can control what is scanned using the cluster, helm2, and helm3 flags.


Update apiVersion

The tools above identify deprecated APIs; once identified, we need to fix each resource by updating it to the required apiVersion. In your source code repositories, update the apiVersion to the required version directly. To update a deployed Helm release we will use the mapkubeapis plugin, which updates the Helm release metadata in place, removing the old apiVersion and creating a new revision with the updated apiVersion.

Install plugin

helm plugin install https://github.com/hickeyma/helm-mapkubeapis

Update release

use --namespace <> for Helm v2

helm mapkubeapis v2-oldapi

Re-run the pluto / kubent utilities to verify that the old apiVersions have been updated; you can also provide a Kubernetes version to target a specific release. The Kubernetes cluster can now be upgraded without causing any failure.

Including tools that identify and update Kubernetes resource apiVersions removes a lot of manual work and the errors it could introduce, facilitating smooth Kubernetes cluster upgrades.

