Tuesday, 28 February 2023

Renovate

Renovate bot is a popular open-source tool used to automate dependency updates in software projects. It scans the project’s dependencies and automatically creates pull requests to update them to the latest version, based on configurable rules and schedules. This helps keep the project up to date with the latest security patches and bug fixes, while reducing the manual effort required to manage dependencies. Renovate bot supports a wide range of programming languages and package managers, including Java and Maven.

Why Use Renovate?

  • Get automated Pull Requests to update your dependencies
  • Reduce noise by running Renovate on a schedule, for example: on weekends, outside of working hours, each week, or each month
  • Relevant package files are discovered automatically
  • Supports monorepo architectures like Lerna or Yarn workspaces with no extra configuration
  • Bot behavior is customizable via configuration files (config as code)
  • Use ESLint-like shared config presets for ease of use and simplifying configuration (JSON format only)
  • Lock files are supported and updated in the same commit, including immediately resolving conflicts whenever PRs are merged
  • Get replacement PRs to migrate from a deprecated dependency to the community suggested replacement (npm packages only)
  • Open source (installable via npm/Yarn or Docker Hub) so it can be self-hosted or used via the GitHub App
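As an illustration, a minimal renovate.json at the repository root could enable a weekend schedule and group non-major updates into a single PR. The extends preset, schedule, and packageRules keys are documented Renovate options; the group name is just an example:

```json
{
  "extends": ["config:base"],
  "schedule": ["every weekend"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "groupName": "non-major dependencies"
    }
  ]
}
```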

Renovate bot:  https://docs.renovatebot.com/


KEDA

KEDA is a Kubernetes-based Event Driven Autoscaler. It is an open-source project that enables the dynamic scaling of containerised workloads running on Kubernetes, based on the number of events received from event sources such as Azure Event Hubs, Azure Queue Storage, Kafka, etc.

KEDA is designed to simplify the process of autoscaling containers in Kubernetes by automatically scaling the number of containers up or down based on the load generated by events. This can help to reduce the cost of running containerised workloads and improve the responsiveness of applications during periods of high traffic. With KEDA you can explicitly map the apps you want to scale in an event-driven way, while other apps continue to function as they are. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks.

Some of the key features of KEDA include:

  1. Autoscaling: KEDA can automatically scale the number of replicas of a deployment based on the number of events in a queue, such as RabbitMQ or Azure Service Bus.

  2. Event-driven: KEDA supports a variety of event sources, including Azure Event Hubs, Kafka, and AWS Kinesis.

  3. Extensibility: KEDA can be extended to support new event sources or scalers by creating custom Kubernetes operators.

  4. Custom Metrics: KEDA can scale based on custom metrics in addition to event sources.

  5. Flexibility: KEDA can be used with any programming language and framework that can run in Kubernetes.

  6. Efficient resource utilization: KEDA can reduce resource utilization by scaling down to zero replicas when there are no events to process, resulting in lower costs.

  7. Easy integration: KEDA can be easily integrated with other Kubernetes tools and services, such as Prometheus and Grafana.
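The features above come together in a ScaledObject resource. Below is a minimal sketch assuming an Azure Storage Queue trigger; the deployment, queue, and environment-variable names are illustrative, and the authentication setup (e.g. a TriggerAuthentication resource) is omitted:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler       # illustrative name
spec:
  scaleTargetRef:
    name: queue-consumer            # the Deployment to scale (illustrative)
  minReplicaCount: 0                # scale to zero when there are no events
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders                     # illustrative queue name
        queueLength: "5"                      # target messages per replica
        connectionFromEnv: STORAGE_CONNECTION # env var holding the connection string
```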


Disadvantages:

  • Complexity: While KEDA aims to simplify the configuration of autoscaling, it may still be challenging to set up for users who are not familiar with Kubernetes or event-driven architecture.
  • Limited functionality: KEDA is designed specifically for event-driven autoscaling and may not be the best solution for other types of scaling needs.
  • Additional layer: As an additional layer on top of Kubernetes, KEDA can add complexity to an already complex system.


REF: https://keda.sh/

Tuesday, 7 February 2023

Migrating from SpringFox to Springdoc-openapi:

 

Remove the SpringFox and Swagger 2 dependencies, and add the springdoc-openapi-ui dependency instead.

 

Before:

implementation group: 'io.springfox', name: 'springfox-swagger2', version: '2.9.2'
implementation group: 'io.springfox', name: 'springfox-swagger-ui', version: '2.9.2'

After:

implementation group: 'org.springdoc', name: 'springdoc-openapi-ui', version: '1.6.8'

 

Before:

@Bean
public Docket api() {
    return new Docket(DocumentationType.SWAGGER_2)
        .useDefaultResponseMessages(false)
        .genericModelSubstitutes(Optional.class)
        .select()
        .apis(RequestHandlerSelectors.withClassAnnotation(RestController.class))
        .paths(PathSelectors.any())
        .build()
        .securitySchemes(apiKeyList());
}

private List<ApiKey> apiKeyList() {
    return newArrayList(
        new ApiKey("Authorization", "Authorization", "header"),
        new ApiKey("ServiceAuthorization", "ServiceAuthorization", "header")
    );
}


After:

 

@Bean
public GroupedOpenApi publicApi(OperationCustomizer customGlobalHeaders) {
    return GroupedOpenApi.builder()
        .group("rd-location-ref-api")
        .pathsToMatch("/**")
        .addOperationCustomizer(customGlobalHeaders) // apply the custom headers to this group
        .build();
}

@Bean
public OperationCustomizer customGlobalHeaders() {
    return (Operation customOperation, HandlerMethod handlerMethod) -> {
        Parameter serviceAuthorizationHeader = new Parameter()
            .in(ParameterIn.HEADER.toString())
            .schema(new StringSchema())
            .name("ServiceAuthorization")
            .description("Keyword `Bearer` followed by a service-to-service token for a whitelisted micro-service")
            .required(true);

        Parameter authorizationHeader = new Parameter()
            .in(ParameterIn.HEADER.toString())
            .schema(new StringSchema())
            .name("Authorization")
            .description("Authorization token")
            .required(true);

        customOperation.addParametersItem(authorizationHeader);
        customOperation.addParametersItem(serviceAuthorizationHeader);
        return customOperation;
    };
}

 

  • Replace swagger 2 annotations with swagger 3 annotations
  • Package for swagger 3 annotations is io.swagger.v3.oas.annotations.
  • @Api → @Tag
  • @ApiIgnore → @Parameter(hidden = true) or @Operation(hidden = true) or @Hidden
  • @ApiImplicitParam → @Parameter
  • @ApiImplicitParams → @Parameters
  • @ApiModel → @Schema
  • @ApiModelProperty(hidden = true) → @Schema(accessMode = READ_ONLY)
  • @ApiModelProperty → @Schema
  • @ApiOperation(value = "foo", notes = "bar") → @Operation(summary = "foo", description = "bar")
  • @ApiParam → @Parameter
  • @ApiResponse(code = 404, message = "foo") → @ApiResponse(responseCode = "404", description = "foo")
  • If you’re using an object to capture multiple request query params, annotate that method argument with @ParameterObject
  • This step is optional: only if you have multiple Docket beans, replace them with GroupedOpenApi beans.

Saturday, 4 February 2023

SOLID principle

 

The SOLID principles are a set of five design principles aimed at helping software developers create maintainable, scalable, and flexible software systems. The SOLID principles are:

Single Responsibility Principle (SRP): A class should have only one reason to change, meaning that a class should have only one responsibility.

Example: A class that represents a bank account should only be responsible for performing operations related to the bank account such as deposit, withdrawal, and balance inquiry, and not be responsible for printing statements or sending notifications to the account owner.
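The bank account example above can be sketched in Java as follows; the class and method names are illustrative:

```java
// SRP sketch: the account handles only account state; statement printing
// is a separate responsibility owned by a separate class.
class BankAccount {
    private double balance;

    void deposit(double amount) { balance += amount; }

    void withdraw(double amount) {
        if (amount > balance) throw new IllegalArgumentException("Insufficient funds");
        balance -= amount;
    }

    double getBalance() { return balance; }
}

// Changes to statement formatting never force changes to BankAccount.
class StatementPrinter {
    String print(BankAccount account) {
        return "Balance: " + account.getBalance();
    }
}
```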

Open/Closed Principle (OCP): Software entities should be open for extension but closed for modification, meaning that existing code should not be modified when adding new functionality.

Example: A class that represents a shape can be designed to be open for extension (i.e., adding new shapes), but closed for modification (i.e., modifying existing code). The class can be designed with an abstract shape class and concrete classes for each type of shape, and adding new shapes would not require modifying existing code.
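A minimal Java sketch of the shape example: adding a new shape means adding a new subclass, not modifying existing code.

```java
// OCP sketch: the abstraction is open for extension (new subclasses),
// while existing classes stay closed for modification.
abstract class Shape {
    abstract double area();
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override double area() { return width * height; }
}

// A new shape added later, without touching Shape or Rectangle.
class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}
```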

Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types, meaning that objects of a derived class should be able to replace objects of the base class without affecting the correctness of the program.

Example: A class that represents a rectangle should be a subtype of a class that represents a shape. This means that any code that works with a shape should work with a rectangle without any modification, as a rectangle is a valid substitute for a shape.

Interface Segregation Principle (ISP): Clients should not be forced to depend on interfaces they do not use, meaning that each interface should have a specific and well-defined purpose.

Example: A class that represents an animal should not be forced to implement an interface that defines methods for flying and swimming if the animal cannot fly or swim. Instead, separate interfaces can be created for flying and swimming animals and implemented only by those classes that can perform those actions.
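The animal example can be sketched with narrow, purpose-specific interfaces; the names are illustrative:

```java
// ISP sketch: small interfaces, implemented only by classes that
// actually support the behaviour.
interface Swimmer { String swim(); }
interface Flyer { String fly(); }

class Duck implements Swimmer, Flyer {
    public String swim() { return "duck swims"; }
    public String fly() { return "duck flies"; }
}

// A dog can swim but is never forced to implement fly().
class Dog implements Swimmer {
    public String swim() { return "dog swims"; }
}
```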

Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules. Both should depend on abstractions, meaning that the design should strive to reduce coupling between components and make them more modular.

Example: A class that represents a high-level policy should not depend on a low-level implementation detail. Instead, both the high-level and low-level classes should depend on an abstraction, such as an interface, that defines the contract between the two. This allows the high-level class to be more flexible and less prone to breaking if the low-level implementation changes.
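A small Java sketch of DIP: the high-level service depends on an interface, and the concrete implementation is injected. The names are illustrative:

```java
// DIP sketch: both the high-level policy and the low-level detail
// depend on the MessageSender abstraction.
interface MessageSender {
    String send(String message);
}

// Low-level implementation detail; can be swapped without touching the service.
class EmailSender implements MessageSender {
    public String send(String message) { return "email: " + message; }
}

// High-level policy: knows only about the abstraction.
class NotificationService {
    private final MessageSender sender;
    NotificationService(MessageSender sender) { this.sender = sender; } // injected
    String notifyUser(String message) { return sender.send(message); }
}
```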

Adhering to the SOLID principles can help us create systems that are easier to maintain, test, and extend, and can also make it easier to identify and correct problems when they arise.

Wednesday, 1 February 2023

Design Patterns and Methodologies

The 12-factor methodology is a set of best practices for building software-as-a-service (SaaS) applications that run in the cloud. It was introduced by Heroku, a cloud platform for building, deploying, and scaling web applications. The 12 factors are:

  1. Codebase: A single codebase for the entire application, with multiple deploys.

  2. Dependencies: Declare and isolate dependencies through a dependency declaration manifest.

  3. Config: Store configuration data in the environment, not in the code.

  4. Backing Services: Treat backing services as attached resources, such as databases or message queues.

  5. Build, Release, Run: Automate the build, release, and run stages to ensure consistent deployment.

  6. Processes: Execute the application as one or more stateless processes.

  7. Port Binding: Export services via port binding, not via a shared database.

  8. Concurrency: Scale out by adding processes, not by cloning existing ones.

  9. Disposability: Maximize robustness with fast startup and graceful shutdown.

  10. Dev/Prod Parity: Keep development, staging, and production as similar as possible.

  11. Logs: Treat logs as event streams.

  12. Admin Processes: Run administrative and management tasks as one-off processes.
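As a small illustration of factor 3 (Config), configuration is read from the environment rather than hard-coded; the DATABASE_URL variable name and the fallback value are illustrative:

```java
// Factor 3 (Config) sketch: settings come from environment variables,
// so the same build can run unchanged in dev, staging, and production.
class AppConfig {
    static String databaseUrl() {
        String url = System.getenv("DATABASE_URL");
        // Fall back to a local development default when the variable is unset.
        return url != null ? url : "jdbc:postgresql://localhost:5432/dev";
    }
}
```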

By following these principles, developers can build scalable, maintainable, and robust cloud-based applications that are easy to manage and deploy. The 12-factor methodology has become a popular reference for building SaaS applications, and is widely adopted by companies such as Netflix, Slack, and Airbnb.


Here are some real-world examples of applications that follow the 12-factor methodology:

  1. Heroku: Heroku is a popular cloud platform that provides a platform for building, deploying, and scaling web applications. It follows the 12-factor methodology by using environment variables to store configuration, using backing services such as databases and queues, and by having a stateless design that makes it easy to scale horizontally.

  2. Netflix: Netflix is a streaming video service that uses the 12-factor methodology to build and operate its cloud-based applications. It uses services such as AWS for its infrastructure and storage, and its applications are designed to be stateless and scalable, making it easy to manage and maintain its large-scale, global operations.

  3. Slack: Slack is a popular team collaboration platform that uses the 12-factor methodology to build and operate its applications. It uses environment variables for configuration, and its architecture is designed to be stateless and scalable, making it easy to manage and maintain its operations.

  4. Airbnb: Airbnb is a popular vacation rental platform that uses the 12-factor methodology to build and operate its applications. It uses cloud-based infrastructure, such as Amazon Web Services, to run its applications, and its architecture is designed to be stateless and scalable, making it easy to manage and maintain its operations.

  5. Stripe: Stripe is a payment processing platform that uses the 12-factor methodology to build and operate its applications. It uses environment variables for configuration, and its architecture is designed to be stateless and scalable, making it easy to manage and maintain its operations.

Examples of applications that can be designed using the 12-factor methodology include:

  1. E-commerce websites

  2. Social media platforms

  3. Collaboration tools

  4. Project management software

  5. Customer relationship management (CRM) systems

  6. Human resources management systems

  7. Accounting and financial management systems

  8. Supply chain management systems

  9. Inventory management systems

  10. Marketing automation tools

  11. Healthcare management systems

  12. Learning management systems

Here is a sample architecture for a 12-factor application:

  1. Codebase: The entire application code is stored in a version control system, such as Git, and multiple deploys can be made from this single codebase.

  2. Dependencies: The application dependencies are declared in a file, such as a Maven pom.xml file, and isolated from the application through a dependency declaration manifest.

  3. Config: Configuration data, such as database connection strings or API keys, are stored in environment variables and are not part of the application code.

  4. Backing Services: Backing services, such as databases or message queues, are treated as attached resources and accessed through APIs or service discovery mechanisms.

  5. Build, Release, Run: The build, release, and run stages are automated through a continuous integration and continuous deployment (CI/CD) pipeline, using tools such as Jenkins or TravisCI.

  6. Processes: The application is executed as one or more stateless processes, which can be scaled horizontally by adding more instances of the process.

  7. Port Binding: Services are exported via port binding, rather than through a shared database or other mechanism, allowing for easier scaling and load balancing.

  8. Concurrency: The application is designed to scale out by adding more processes, rather than by cloning existing ones.

  9. Disposability: The application is designed to be highly disposable, with fast startup and graceful shutdown, to maximize robustness and reduce downtime.

  10. Dev/Prod Parity: Development, staging, and production environments are kept as similar as possible, with the same tools, processes, and dependencies, to reduce the risk of environment-specific bugs.

  11. Logs: Logs are treated as event streams, with log data emitted to a centralized log management system, such as Logstash or Fluentd, for analysis and troubleshooting.

  12. Admin Processes: Administrative and management tasks are executed as one-off processes, rather than being part of the main application, to ensure separation of responsibilities and to allow for easy maintenance and management.



Here are some of the best design patterns to build modern applications:

  1. Model-View-Controller (MVC): A pattern that separates the application into three components: the model, which represents the data; the view, which is responsible for presenting the data; and the controller, which manages the interaction between the model and the view.

  2. Microservices: A pattern that decomposes a monolithic application into a set of smaller, independent services, which can be developed, deployed, and scaled independently.

  3. Command Query Responsibility Segregation (CQRS): A pattern that separates the responsibility for writing data from the responsibility for reading data, enabling better scalability and performance.

  4. Domain-Driven Design (DDD): A design pattern that focuses on modeling the business domain, using concepts such as entities, services, value objects, and aggregates to create a rich, expressive model that can capture the complexity of the business domain.

  5. Event Sourcing: A pattern that uses a log of events to store the state of the application, rather than a traditional database, to provide a full history of the changes to the application state.

  6. Serverless: A pattern that uses cloud-based functions to execute code, without the need to manage the underlying infrastructure, to provide cost-effective and scalable computing.

  7. Reactive: A pattern that enables applications to respond to changing conditions and handle large amounts of data and concurrency, using reactive programming techniques.

  8. Decorator: A pattern that provides a flexible way to extend the functionality of an object, without modifying its code, by using wrapper classes to add or override behaviour.

  9. Singleton: A pattern that ensures that a class has only one instance, and provides a global point of access to that instance, to simplify resource management.

  10. Factory Method: A pattern that creates objects without specifying the exact class of object that will be created, allowing for greater flexibility and modularity.

  11. Dependency Injection: A pattern that separates the construction of objects from their behavior, allowing for greater flexibility, testability, and maintainability.

  12. Observer: A pattern that allows objects to receive notifications of changes in the state of other objects, allowing for loosely coupled, event-driven systems.
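As one concrete example from the list above, the Observer pattern (item 12) can be sketched in Java; the names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Observer sketch: listeners register with a subject and are notified
// of state changes, keeping publisher and subscribers loosely coupled.
interface Listener {
    void onEvent(String event);
}

class EventSource {
    private final List<Listener> listeners = new ArrayList<>();

    void subscribe(Listener listener) { listeners.add(listener); }

    void publish(String event) {
        for (Listener listener : listeners) listener.onEvent(event); // notify all observers
    }
}
```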





Saga Design Principle

The Saga design principle is a pattern used in microservices architecture to handle long-running, distributed transactions that span multiple microservices. It provides a mechanism for coordinating and managing the lifecycle of a transaction across those services, maintaining consistency and reliability in a distributed system by breaking a complex transaction down into a series of smaller, independent steps.

The basic idea behind the Saga pattern is that each step in a transaction is represented by a separate microservice, and each microservice implements a compensating transaction that can undo the effects of its step if it fails. A Saga is thus a sequence of local transactions, each of which updates the state of a single service. If a step fails, the Saga records the failure and executes compensating transactions to undo the changes made by previous steps, bringing the system back to a consistent state.

This allows transactions to be automatically rolled back in the event of a failure, which helps to ensure consistency and maintain data integrity.

Sagas provide a number of benefits, including:

  • Atomic transactions: Sagas ensure that a transaction is either fully completed or fully rolled back, even in a distributed system.

  • Resilience: Sagas can handle failures and network partitions, ensuring that the system remains consistent even in the face of failures.

  • Loose coupling: Sagas promote loose coupling between microservices by allowing each service to focus on its own local transactions, while the Saga coordinates the overall transaction.

  • Flexibility: Sagas can be easily extended or modified, making it easy to add new steps or compensating transactions as the system evolves.

The Saga design principle is an important tool for designing and implementing distributed systems, and is widely used in microservices architecture to handle complex and long-running transactions in a reliable and scalable manner.

The key principles of the Saga design pattern are:

  1. Saga execution: The Saga is executed as a sequence of transactions, each represented by a separate microservice.

  2. Compensating transactions: Each transaction in the Saga has a compensating transaction that can undo the effects of the transaction if it fails.

  3. Coordination: The Saga coordinator manages the lifecycle of the Saga and ensures that the compensating transactions are executed if a failure occurs.

  4. State management: The Saga coordinator maintains the state of the Saga, including the state of each transaction in the Saga.

  5. Idempotency: Each transaction in the Saga must be idempotent, so that it can be executed multiple times without causing any side effects.
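The principles above can be sketched as a minimal in-memory orchestrator in Java. This is an illustration of the coordination logic only, not a production Saga framework; the class names and the Runnable-based steps are assumptions for the sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Saga sketch: each step pairs a forward action with a compensating action.
class SagaStep {
    final Runnable action;
    final Runnable compensation;
    SagaStep(Runnable action, Runnable compensation) {
        this.action = action;
        this.compensation = compensation;
    }
}

class SagaOrchestrator {
    // Runs steps in order; on failure, compensates completed steps in
    // reverse order and reports the Saga as rolled back.
    boolean execute(List<SagaStep> steps) {
        Deque<SagaStep> completed = new ArrayDeque<>();
        for (SagaStep step : steps) {
            try {
                step.action.run();
                completed.push(step);                   // remember for possible rollback
            } catch (RuntimeException e) {
                while (!completed.isEmpty())
                    completed.pop().compensation.run(); // undo in reverse order
                return false;
            }
        }
        return true;
    }
}
```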

The choreography-based approach to implementing the Saga design pattern is a decentralized mechanism, where each local transaction is executed as an independent service and communicates with other services through messages or events. If a transaction fails, it sends a compensation message or event to trigger the compensating transaction.

In this approach, there is no central coordinator, and the services communicate with each other directly to coordinate the execution of the transactions and compensations. The state of the Saga is maintained by the services themselves, which keep track of the status of their own transactions and respond to messages or events from other services.

This approach provides a more scalable and flexible solution, as each service can be developed and deployed independently and can handle its own transactions and compensations. However, it also requires a higher level of coordination and cooperation between the services, and it can be more complex to design and implement the compensations.

Overall, the choreography-based approach to implementing the Saga design pattern is suitable for scenarios where decentralization and autonomy are desired, and where the complexity of the compensations can be handled by the services themselves. However, it may not be the best choice for scenarios where central control and coordination are required.


The orchestration-based approach to implementing the Saga design pattern is a central coordinator-based mechanism, where a central coordinator is responsible for managing the execution of the transactions and compensations. The central coordinator communicates with the local transactions, sending commands to execute transactions and triggering compensations if necessary.

In this approach, the central coordinator is responsible for maintaining the state of the Saga, keeping track of the status of each transaction, and making decisions on what actions to take based on the status of the transactions. If a transaction fails, the coordinator sends a compensation command to undo the changes made by the failed transaction.

This approach provides a centralized control over the Saga and ensures that the transactions are executed in the correct order. However, it also has some drawbacks, such as increased complexity, a single point of failure, and the need for strong consistency guarantees, which can be difficult to achieve in a distributed system.

Overall, the orchestration-based approach to implementing the Saga design pattern is suitable for scenarios where central control is required and a high degree of coordination and consistency is desired. However, it may not be the best choice for large and complex systems, where scalability and reliability are key requirements.

The main difference between choreography and orchestration in the context of the Saga design pattern is the level of centralization and control over the execution of the transactions and compensations.


Difference between orchestration and choreography:

Orchestration is centralised: a coordinator drives the Saga, sending commands to each local transaction and triggering compensations when something fails. It guarantees the transactions run in the correct order, at the cost of added complexity, a single point of failure, and strong consistency requirements.

Choreography is decentralised: each service runs its local transaction independently and coordinates with the others through messages or events, emitting a compensation event when a step fails. It scales well and keeps services independent, but pushes the coordination and compensation logic into the services themselves.

The choice between choreography and orchestration depends on the specific requirements and constraints of the system, such as the need for central control, the number of services involved, the level of independence and autonomy of the services, and the need for scalability and reliability.