Tuesday, 31 January 2023

Java 17 Features

 Some of the new features and improvements introduced in Java 17 include:

  1. Sealed Classes: Sealed classes and interfaces restrict which other classes or interfaces may extend or implement them, making it easier to enforce design contracts and reduce the risk of errors (JEP 409).

  2. Records: Records provide a compact, easy-to-use syntax for declaring simple classes that act as transparent carriers for immutable data (finalized in Java 16, included in Java 17).

  3. Pattern Matching for instanceof: The instanceof operator can bind a pattern variable directly, making type checks more concise and type-safe (finalized in Java 16, included in Java 17).

  4. Pattern Matching for switch: switch statements and expressions can match a value against type patterns (JEP 406, a preview feature in Java 17).

  5. Text Blocks: Text blocks are multi-line string literals delimited by triple quotes ("""), which make it easier to work with multi-line text such as JSON, SQL or HTML in your code.

  6. Foreign Function & Memory API: Java 17 introduces the Foreign Function & Memory API as an incubator feature (JEP 412), which provides a way to call native code and work with native memory from Java without JNI, improving the integration of Java applications with native libraries.
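As a quick illustration of text blocks, here is a minimal sketch that embeds multi-line JSON without escape sequences or string concatenation (the JSON content is just an example):

```java
public class TextBlockDemo {
    // A text block: contents start after the opening """ and its newline;
    // incidental leading whitespace common to all lines is stripped automatically.
    static final String JSON = """
            {
              "name": "duke",
              "release": 17
            }
            """;

    public static void main(String[] args) {
        System.out.println(JSON);
    }
}
```

The equivalent pre-Java-15 code would need `\n` escapes and `+` concatenation on every line.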


Here is an example of a sealed class:

sealed interface Shape permits Circle, Rectangle { }

final class Circle implements Shape {
    private final double radius;

    public Circle(double radius) {
        this.radius = radius;
    }

    public double getRadius() {
        return radius;
    }
}

final class Rectangle implements Shape {
    private final double length;
    private final double width;

    public Rectangle(double length, double width) {
        this.length = length;
        this.width = width;
    }

    public double getLength() {
        return length;
    }

    public double getWidth() {
        return width;
    }
}


In this example, Shape is a sealed interface that permits the classes Circle and Rectangle to implement it. This means that no other classes can implement the Shape interface. By using sealed classes, you can restrict the types that can implement an interface and ensure type safety in your code.

Java records are a feature introduced in Java 16 that provides a compact syntax for declaring classes that are purely transparent data carriers. For each component, a record automatically gets a private final field, a canonical public constructor, an accessor method named after the component (for example name() and age(), not getName() and getAge()), and generated equals, hashCode and toString methods.

Here is an example of how you could define a Person record in Java:

record Person(String name, int age) { }


This record definition is equivalent to the following class definition:

// requires: import java.util.Objects;
class Person {
    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String name() {
        return name;
    }

    public int age() {
        return age;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Person person = (Person) o;
        return age == person.age && Objects.equals(name, person.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, age);
    }

    @Override
    public String toString() {
        return "Person{" + "name='" + name + '\'' + ", age=" + age + '}';
    }
}

With records, you can define data classes in a more concise and readable way, while still having the benefits of automatically generated accessor methods, equals, hashCode, and toString methods.
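A short usage sketch showing the generated members in action (names and values are arbitrary):

```java
public class RecordDemo {
    // One line replaces the whole hand-written Person class above
    record Person(String name, int age) { }

    public static void main(String[] args) {
        Person p1 = new Person("Ada", 36);
        Person p2 = new Person("Ada", 36);

        System.out.println(p1.name());      // accessor is name(), not getName()
        System.out.println(p1.equals(p2));  // true: equals compares components
        System.out.println(p1);             // Person[name=Ada, age=36]
    }
}
```

Note that a record's generated toString uses the `Person[name=..., age=...]` format rather than the brace style of the hand-written class.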

Pattern matching for instanceof was finalized in Java 16, and pattern matching for switch was introduced as a preview feature in Java 17 (JEP 406). With pattern matching, a type test can bind a pattern variable directly, so you can match an object against a type pattern and use the extracted value without an explicit cast.

Here's an example of how you could use pattern matching in a switch statement to match an object against type patterns and use the matched value:


// Note: pattern matching for switch is a preview feature in Java 17
// and must be compiled and run with --enable-preview
public static void printArea(Shape shape) {
    switch (shape) {
        case Circle c:
            System.out.println("The area of the circle is " + Math.PI * c.getRadius() * c.getRadius());
            break;
        case Rectangle r:
            System.out.println("The area of the rectangle is " + r.getLength() * r.getWidth());
            break;
        default:
            System.out.println("Unknown shape");
            break;
    }
}


In this example, the printArea method takes a Shape object as an argument and uses a switch statement to match it against type patterns. If the object is a Circle, the pattern variable c is bound and its radius is used to calculate the area; if it is a Rectangle, r is bound and its length and width are used. If the object is neither, the default case is executed and an "Unknown shape" message is printed.

Pattern matching provides a more concise and type-safe way to perform type checking and extract values from objects, and can make your code more readable and maintainable.


Before pattern matching, type checks in Java were performed with the instanceof operator followed by an explicit cast. The instanceof operator checks the type of an object and determines whether it matches a specified type.

Here's an example of how you could use the instanceof operator in an if-else chain to check an object's type and extract values from it:


public static void printArea(Object shape) {
    if (shape instanceof Circle) {
        Circle c = (Circle) shape;
        System.out.println("The area of the circle is " + Math.PI * c.getRadius() * c.getRadius());
    } else if (shape instanceof Rectangle) {
        Rectangle r = (Rectangle) shape;
        System.out.println("The area of the rectangle is " + r.getLength() * r.getWidth());
    } else {
        System.out.println("Unknown shape");
    }
}


In this example, the printArea method takes an Object as an argument and uses the instanceof operator in an if-else chain to check its type. If the object is a Circle, it is cast and the radius is used to calculate the area; if it is a Rectangle, it is cast and the length and width are used. Otherwise, the else branch is executed and an "Unknown shape" message is printed.

The instanceof operator is useful for type checking and extracting values from objects, but the test-and-cast idiom is verbose and harder to maintain. Pattern matching, finalized in Java 16, provides a more concise and type-safe alternative.
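For completeness, here is the Java 16 instanceof pattern form, shown as a self-contained sketch with a hypothetical describe method (the Shape types above would work the same way): the pattern variable is bound in the condition, so the explicit cast disappears.

```java
public class InstanceofPatternDemo {
    // instanceof with a pattern variable: no cast needed inside the branch
    static String describe(Object obj) {
        if (obj instanceof Integer i) {
            return "int with value " + i;
        } else if (obj instanceof String s) {
            return "string of length " + s.length();
        }
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(describe(42));      // int with value 42
        System.out.println(describe("hello")); // string of length 5
    }
}
```

Compare this with the test-and-cast version of printArea above: each branch here uses the bound variable directly.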



Sunday, 22 January 2023

AWS Security- GuardDuty

 GuardDuty is an intelligent threat detection service.

It identifies malicious or unauthorised activity, such as anomalous behaviour, credential exfiltration, or communication with command-and-control (C2) infrastructure.

GuardDuty provides broad security monitoring of your AWS accounts, workloads, and data to help identify threats, such as attacker reconnaissance; instance, account, bucket, or Amazon EKS cluster compromises; and malware

GuardDuty is a regional service

GuardDuty analyses CloudTrail data events for Amazon S3 logs, CloudTrail management event logs, DNS logs, Amazon EBS volume data, Kubernetes audit logs, Amazon VPC flow logs, and RDS login activity.

Able to send notifications using Amazon EventBridge (formerly CloudWatch Events).

Produces security reports called findings.

GuardDuty does not look at historical data; it analyses only events that occur after it is enabled.

GuardDuty operates completely independent of your AWS resources and therefore should have no impact on the performance or availability of your accounts or workloads.

GuardDuty does not manage or retain your logs


Not capable of making any resource changes, such as rate-limiting protection or DDoS attack mitigation.

https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html

Examples of what it flags: unauthorised infrastructure use, unusual API calls, and similar anomalies.
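For reference, GuardDuty can be enabled from the AWS CLI; a minimal sketch follows (the detector id is a placeholder, and the commands assume configured AWS credentials):

```shell
# Enable GuardDuty in the current region (GuardDuty is a regional service)
aws guardduty create-detector --enable

# List the detectors in the region
aws guardduty list-detectors

# List findings for a detector (replace <detector-id> with a real id)
aws guardduty list-findings --detector-id <detector-id>
```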







AWS Security- AWS Shield

 AWS Shield: managed DDoS protection service

- provides protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS.

Shield comes with 2 tiers: Standard and Advanced

- AWS Shield Standard is automatically enabled for all AWS customers at no additional cost

- AWS Shield Advanced is an optional paid service that provides additional protection against larger and more sophisticated attacks for applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53. It also includes support from the AWS DDoS Response Team (DRT) during DDoS attacks.

- Mitigates different types of network and transport layer (layer 3 and 4) attacks such as SYN/UDP floods and reflection attacks

Protects the applications that use Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53

Pricing

  • Shield Standard: no additional charge.
  • Shield Advanced: paid service; requires a 1-year subscription commitment and charges a monthly fee, plus a usage fee based on data transfer out from CloudFront, ELB, EC2, and AWS Global Accelerator.
https://aws.amazon.com/shield/faqs/










Saturday, 21 January 2023

AWS Security- WAF

 WAF: Web Application Firewall service

WAF protects web applications from common web exploits.

WAF allows you to create custom rules that block common web exploits like SQL injection and cross-site scripting (XSS).

WAF lets you create rules to filter web traffic based on conditions that include IP addresses, HTTP headers and body, or custom URIs.

WAF can be integrated with Cloudfront, ALB, API Gateway and AWS AppSync

WAF charges are based on the number of web ACLs, the number of rules added per web ACL, and the number of web requests received.
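As a rough worked example of that pricing model, the sketch below estimates a monthly bill. The rates are illustrative assumptions only, not current AWS prices; check the AWS WAF pricing page for real figures.

```java
public class WafCostEstimate {
    // Illustrative rates only (assumptions, not current AWS prices)
    static final double PRICE_PER_WEB_ACL = 5.00;          // per web ACL per month
    static final double PRICE_PER_RULE = 1.00;             // per rule per month
    static final double PRICE_PER_MILLION_REQUESTS = 0.60; // per 1M requests

    // Combine the three billing dimensions into one monthly estimate
    static double monthlyCost(int webAcls, int rulesPerAcl, long requests) {
        return webAcls * PRICE_PER_WEB_ACL
                + webAcls * rulesPerAcl * PRICE_PER_RULE
                + (requests / 1_000_000.0) * PRICE_PER_MILLION_REQUESTS;
    }

    public static void main(String[] args) {
        // 1 web ACL with 10 rules handling 20 million requests: 5 + 10 + 12 = 27.0
        System.out.println(monthlyCost(1, 10, 20_000_000));
    }
}
```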

WAF provides a geo match condition that can block requests from certain countries, or allow requests only from certain countries.

https://aws.amazon.com/waf/faqs/

https://aws.amazon.com/waf/features/

https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html








Tuesday, 4 May 2021

Continuous Delivery of Microservices on AWS using AWS Elastic Beanstalk

 


The AWS Elastic Beanstalk service can be used to deploy and scale applications and services (from Docker containers) developed with Java, .NET, PHP, Node.js, Python, Ruby, Go etc. on servers such as Apache, Tomcat and Nginx. Docker containers provide the flexibility of selecting one's runtime environment of choice, including platform, programming language and app/service dependencies, and of configuring the environment appropriately. All one is required to do is push the Docker image to the image repository and deploy the container. The Elastic Beanstalk service then takes care of different aspects of deployment such as capacity provisioning, load balancing, auto-scaling and application/service health monitoring.

The Docker platform for Elastic Beanstalk has two generic configurations:

  • Single container Docker
  • Multicontainer Docker

We shall try and cover the use cases for both the configuration types.

Single Container Docker

Before getting into the details of single container Docker configurations for Elastic Beanstalk, let's quickly look at the solution architecture for deploying microservices on AWS using AWS Elastic Beanstalk.

Solution Architecture

Following represents the solution architecture of deploying microservices on AWS using AWS Elastic Beanstalk using single container docker configurations.

Solution Architecture - Microservices to AWS Elastic Beanstalk

In the above diagram, pay attention to some of the following:

  1. Code is checked into code repository such as Gitlab
  2. Webhook configured in GitLab triggers the Jenkins job
  3. Jenkins job starts executing which results in following steps:
    • Retrieve the microservice artifacts from Gitlab
    • Build the microservice
    • Run the tests such as unit and integration tests
    • Build the image if all of the above steps are successful
    • Push the image to image repository such as Dockerhub or AWS ECR
    • Deploy using AWS Elastic Beanstalk CLI command

Single Docker Container Configuration

Following steps need to be taken to get setup with Elastic Beanstalk to deploy application/services/microservices from docker containers based on single docker container configuration:

  1. Create a Beanstalk application using AWS console for creating new application. Pay attention to some of the following while creating the application.
    • Select environment as Web Server environment.
    • On environment type page, select configuration as "Docker" (and not Multi-container Docker) and environment type as "Load balancing, auto scaling"
    • Continue with choosing default in each step. In "Application version", you may choose to upload the Dockerfile or Dockerrun.aws.json. Later, the same Dockerfile or Dockerrun.aws.json can be uploaded using "eb deploy" command as part of Jenkins build post-steps.
  2. Install Elastic Beanstalk command line interface (EB CLI). EB CLI provides a set of commands for creating, updating and monitoring environments from a local repository.
  3. Go to the project folder consisting of Dockerrun.aws.json or Dockerfile (for single docker container configuration). Use "eb init" command to choose some of the following:
    • Region
    • Access key and secret key
    • Select an application (created earlier using EB console).
    • Select a keypair (one which was selected while creating the application using EB console)

With the above steps followed, one should be all set to execute the following command from the project folder.

eb deploy

Make sure that the project folder contains either a Dockerfile or Dockerrun.aws.json. In the example below, the image is retrieved from Dockerhub. Note the AWSEBDockerrunVersion of "1"; for multi-container configuration, the value becomes "2".

{
  "AWSEBDockerrunVersion": 1,
  "Image": {
    "Name": "xyz/springboot-web-app",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/log"
}

Configure Jenkins Post-steps

Jenkins post-steps can be configured to achieve following:

Pushing images to Dockerhub; Deploy to Elastic Beanstalk

# Build the docker image
sudo docker build -t ImageName:tag /var/jenkins_home/workspace/SpringBootApp

# Login into Dockerhub; dockerhubLogin and dockerhubPassword is login and password respectively for dockerhub.com
sudo docker login -u="dockerhubLogin" -p="dockerhubPassword"

# Push docker image into Dockerhub
sudo docker push ImageName:tag

# EB Deploy; Go to the project folder and execute the command, "eb deploy"
cd /var/jenkins_home/workspace/awsebdemo
eb deploy

In above code samples, note some of the following:

  • ImageName:tag should be replaced with image such as xyz/springboot-web-app:latest.


Continuous Delivery of Microservices with AWS ECS

 


AWS EC2 Container Service (ECS) is a highly scalable container management service used to start, stop and run microservices within Docker containers on a cluster of AWS EC2 instances. This post demonstrates how to deploy container-based microservices using CLI commands from within Jenkins. In order to achieve that, the following needs to be done:

  1. Setup ECS Service
    • Create an image repository (EC2 Container Registry, ECR)
    • Create a task definition
    • Create an ECS cluster
    • Create a service
  2. Configure Jenkins build Post-steps

Note: For the demonstration purpose, both Gitlab and Jenkins are setup within Docker Containers. In real world scenario, Gitlab and Jenkins may get setup within different VMs.

Before getting into the setup details, let us try and understand the solution architecture related with achieving continuous delivery of microservices with AWS ECS.

Solution Architecture

Following represents the solution architecture of deploying micro-services using AWS ECS.

Solution Architecture - Microservices to AWS ECS

In above diagram, pay attention to some of the following:

  1. Code is checked into code repository such as Gitlab
  2. Web-hook configured in GitLab triggers the Jenkins job
  3. Jenkins job starts executing which results in following steps:
    • Retrieve the micro-service artifacts from Gitlab
    • Build the micro-service
    • Run the tests such as unit and integration tests
    • Build the image if all of the above steps are successful
    • Push the image to image repository such as Dockerhub or AWS ECR
    • Register task definition with AWS ECS
    • Update AWS ECS

Setup ECS Service

Before configuring steps into Jenkins, following needs to be setup using AWS ECS console.

Create an Image Repository with AWS ECR

First step is getting setup with AWS ECR. Following command needs to be executed in order to create an ECR repository.

# Configure the AWS CLI and log in to ECR
aws configure
aws ecr get-login

# Create the repository (one-time step)
aws ecr create-repository --repository-name ImageName

# Build, tag and push the image
docker build -t ImageName .
docker tag ImageName:tag AWS_ECR_URL/ImageName:tag
docker push AWS_ECR_URL/ImageName:tag

Note some of the following with above command:

  • AWS_ECR_URL is of the format https://aws_account_id.dkr.ecr.region.amazonaws.com. One can get the value of Account id by logging into console and going to My Account page. Region information can be found from the region and availability zones page
  • The aws configure command is used to set up the AWS CLI installation. It normally prompts for an access key ID, secret access key, default region name and default output format before one starts working with AWS services. As we need to achieve this without entering details at the prompt, the following needs to be done in order to achieve a promptless command such as aws configure --profile default:
    • Create a folder, .aws in the home.
    • Create a file named as config within above folder, .aws, with following content. One could get access key id and secret access key information by logging into the AWS console and accessing "My Security Credentials".
[default]
aws_access_key_id=AKXXIXXYXX4X4XXXXJRY
aws_secret_access_key=DyxxxxxxeqQyxyyyyytXcwwthbbCxaaaa8Qi0y
region=us-west-2
output=json
  • The aws ecr get-login command outputs a docker login command which needs to be executed to start an authenticated session with AWS ECR, after which the image can be pushed. (Newer versions of the AWS CLI replace this with aws ecr get-login-password.)
  • Other commands are usual commands to push the docker image to the AWS ECR image repository.

Executing the above commands normally requires entering details at the prompt. To achieve the same without a prompt, from within Jenkins, the following combination of commands can be used:

yes "" | aws configure --profile default ; aws ecr get-login > awslogin.sh ; sudo sh awslogin.sh

One can observe that executing "aws ecr get-login" outputs a docker login command such as the following, which needs to be executed for successfully logging in. In the combined command above, this output is written to the awslogin.sh file and then awslogin.sh is executed.

docker login -u AWS -p SomeRandomPasswordStringSentByAWS -e none https://**aws_account_id**.dkr.ecr.**region**.amazonaws.com

Create a Task Definition

Next step is to create a task definition. Command such as following could be used to create the task definition:

aws ecs register-task-definition --family TaskDefinitionFamily --container-definitions "[{\"name\":\"ContainerName\",\"image\":\"ImageName:tag\",\"memory\":300,\"portMappings\":[{\"hostPort\":0,\"containerPort\":8080,\"protocol\":\"tcp\"}]}]" 

In the above command, note the following aspects:

  • TaskDefinitionFamily is the name of family for a task definition, which allows you to track multiple versions of the same task definition. The family is used as a name for your task definition.
  • ContainerName which is the name of the container.
  • container-definitions which is used to provide information related with one or more containers which will be started as a result of executing task based on the task definition.

One may want to login and access the AWS console at Services/EC2 Container Service/Task Definitions and try and create task definition to understand different aspects of task definition creation. Further details in relation with register-task-definition can be found on this page, register-task-definition.
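The escaped inline JSON above is easy to get wrong. An alternative sketch (file name and values are placeholders, matching the ones used in the commands above) keeps the container definitions in a separate file and references it with the AWS CLI's file:// syntax:

```shell
# Write the container definitions to a file (values are placeholders)
cat > container-definitions.json <<'EOF'
[
  {
    "name": "ContainerName",
    "image": "ImageName:tag",
    "memory": 300,
    "portMappings": [
      { "hostPort": 0, "containerPort": 8080, "protocol": "tcp" }
    ]
  }
]
EOF

# Register the task definition, reading the definitions from the file
# (requires configured AWS credentials)
aws ecs register-task-definition \
    --family TaskDefinitionFamily \
    --container-definitions file://container-definitions.json
```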

Create an ECS Cluster

As this is a one-time activity, one may want to use the AWS console at Services/EC2 Container Service/Clusters to create the cluster. It is straightforward to do.

Create/Update the Service

Once done with creating cluster, one will be required to update ECS service. update-service command is used to modify the task definition and deploy a new version of the service.

aws ecs update-service --cluster ClusterName --service ServiceName --task-definition TaskDefinitionName --desired-count 2

In above code snippet, note some of the following:

  • TaskDefinitionName is the name of the task definition: the family and revision (family:revision) or the full Amazon Resource Name (ARN) of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used. If you modify the task definition with update-service, Amazon ECS spawns a task with the new version of the task definition and then stops an old task after the new version is running.
  • ClusterName is the name of the ECS cluster
  • ServiceName is the name of the service
  • desired-count configures the number of instantiations of the task to place and keep running in your service.

Further details in relation with update-service command can be found on this page, update-service.

Configure Jenkins Post-steps

Jenkins post-steps can be configured to achieve following:

Pushing images to Dockerhub; Register task definition; Update ECS

# Build the docker image
sudo docker build -t ImageName:tag /var/jenkins_home/workspace/SpringBootApp

# Login into Dockerhub; dockerhubLogin and dockerhubPassword is login and password respectively for dockerhub.com
sudo docker login -u="dockerhubLogin" -p="dockerhubPassword"

# Push docker image into Dockerhub
sudo docker push ImageName:tag

# Login using AWS CLI
yes "" | aws configure --profile default ; aws ecr get-login > awslogin.sh ; sudo sh awslogin.sh

# Register task definition
aws ecs register-task-definition --family TaskDefinitionFamily --container-definitions "[{\"name\":\"ContainerName\",\"image\":\"ImageName:tag\",\"memory\":300,\"portMappings\":[{\"hostPort\":0,\"containerPort\":8080,\"protocol\":\"tcp\"}]}]" 

# Update service
aws ecs update-service --cluster ClusterName --service ServiceName --task-definition TaskDefinitionName --desired-count 2

In above code samples, note some of the following:

  • ImageName:tag is the image name. For example, ajitesh/springboot-web-app:latest.
  • TaskDefinitionFamily is the name of family for a task definition, which allows you to track multiple versions of the same task definition. The family is used as a name for your task definition.
  • ContainerName which is the name of the container.
  • ClusterName is the name of the ECS cluster
  • ServiceName is the name of the service

Pushing images to ECR; Register the task definition ; Update the ECS service

# Login into AWS
yes "" | aws configure --profile default ; aws ecr get-login > awslogin.sh ; sudo sh awslogin.sh

# Build the docker image
sudo docker build -t ImageName /var/jenkins_home/workspace/SpringBootApp

# Tag the image; Push docker image into AWS ECR
sudo docker tag ImageName:tag AWS_ECR_URL/ImageName:tag
sudo docker push AWS_ECR_URL/ImageName:tag

# Register task definition
aws ecs register-task-definition --family TaskDefinitionFamily --container-definitions "[{\"name\":\"ContainerName\",\"image\":\"ImageName:tag\",\"memory\":300,\"portMappings\":[{\"hostPort\":0,\"containerPort\":8080,\"protocol\":\"tcp\"}]}]" 

# Update service
aws ecs update-service --cluster ClusterName --service ServiceName --task-definition TaskDefinitionName --desired-count 2
