Description:
 
The task involves extracting specific information from an Apache log file called apache_logs. The script should identify requests with a 404 status code and extract the address, HTTP method, and path details from these requests. The extracted data should be written to an output file named not_found.txt.

The solution provided uses Linux command-line tools. It starts by using grep to filter lines containing the string '404' in the apache_logs file. Then, it pipes the results into sed, which processes the lines to remove unwanted parts, such as timestamps and extra characters. Finally, the extracted data is saved in the not_found.txt output file.


Solution:

grep -w '404' apache_logs | sed -e 's/\[[^][]*\]//g' -e 's/ - - //g' -e 's/ HTTP.*//g' > not_found.txt
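
For example, given a typical combined-format log line (the values below are illustrative), the pipeline keeps only the client address, the method, and the path:

# Input (hypothetical log line):
#   83.149.9.216 - - [17/May/2015:10:05:03 +0000] "GET /missing/page HTTP/1.1" 404 203
# Resulting line in not_found.txt:
#   83.149.9.216 "GET /missing/page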

Description:

The task is about modifying permissions in a directory named somefolder, which contains various subdirectories and files. The goal is to set permissions as follows:

  1. For each subdirectory, only the owner (and root) of a file can remove the file within that subdirectory.
  2. For each subdirectory with the name sharedfolder, files created within that subdirectory should be owned by the group that owns the subdirectory.

The provided solution uses the find command to search for directories within somefolder. Directories named sharedfolder receive mode 2775, whose setgid bit ensures that newly created files belong to the group that owns the directory; all other directories receive 1755, so only the owner (and root) can remove files. Because find's -exec action itself evaluates as true, the sharedfolder case must be tested first, and every other directory falls through to the -o branch.


Solution:

find somefolder -type d -name sharedfolder -exec chmod 2775 {} \; -o -type d -exec chmod 1755 {} \;
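
To spot-check the result, one can list the octal modes (the subdirectory names below are illustrative):

# Expect 2775 for sharedfolder and 1755 for everything else
stat -c '%a %n' somefolder/docs somefolder/sharedfolder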


Description:

This script processes an Apache log file and calculates the size downloaded by each IP address. It then prints a report sorted by download size and IP address, including the total downloaded size, number of unique IPs, and human-readable size in IEC standard format.


list-process.sh

declare -A arr
total=0
while read ip size; do
    [[ -z $size ]] && break      # Empty or incorrect input
    [[ $size = '-' ]] && continue  # Some bots download nothing
    let "arr[$ip] += $size"
    let "total += $size"
done < <(cut -d ' ' -f 1,10 "$1")  # Cut the IP and size fields

# Present the results:
totalH=$(numfmt --to=iec-i --suffix=B --format="%.1f" $total)
echo "There are ${#arr[@]} unique IPs"
echo "Total downloaded: $total ($totalH)"
for ip in "${!arr[@]}"; do
    # 1st column width 15 aligned left, 2nd - width 9 aligned right
    printf '%-15s %9s\n' $ip ${arr["$ip"]}
done | sort -t . -k1,1n -k2,2n -k3,3n -k4,4n | sort -rns -k2
# Sort by IP ascending, then (stable) by size descending
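
Example invocation (the IPs and sizes below are illustrative):

$ bash list-process.sh apache_logs
There are 3 unique IPs
Total downloaded: 1253408 (1.2MiB)
93.180.71.3       1024000
66.249.73.135      210000
130.237.218.86      19408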

Description:

A Java application that uses Maven as the build system needs to be integrated with Jenkins for automated build and testing. The goal is to create a Jenkinsfile in the root of the repository with the following requirements:

Result:

  1. Build Steps: The Jenkinsfile includes instructions for building the Java application with Maven.
  2. Unit Testing Steps: The Jenkinsfile specifies steps for running the application's unit tests with Maven.
  3. An environment variable named APP_PORT is set to 9090 within the Jenkinsfile to specify the port for running the application.
  4. The name of the Jenkins job is saved in a global variable.
  5. A build of the project is created in Jenkins.
  6. The application and integration tests are run in parallel stages.
  7. To launch the application, the pipeline returns to the build folder, using the saved variable for this purpose.
  8. Only the RestIT integration test is run during the test phase of Maven.
  9. A timeout is set to stop the parallel stages if necessary, ensuring the pipeline can continue.
  10. To ensure the process can complete successfully, the Jenkinsfile uses the try {} catch {} construct.


Jenkinsfile

pipeline {
    agent any
    environment {
        APP_PORT = '9090'
    }
    stages {
        stage('Set Global Job Name') {
            steps {
                script {
                    // Save the job name in a global variable
                    def jobName = currentBuild.fullDisplayName
                    // You can access it later as env.JOB_NAME
                    env.JOB_NAME = jobName
                }
            }
        }
        stage('Build Project') {
            steps {
                // Use Maven to build the project
                sh 'mvn clean package'
            }
        }
        stage('Run Application and Integration Tests') {
            parallel {
                stage('Run Application') {
                    steps {
                        // Change to the build folder
                        dir('target') {
                            // Run the application
                            sh 'java -jar your-application.jar'
                        }
                    }
                }
                stage('Run Integration Tests') {
                    steps {
                        // Run only the RestIT integration test
                        sh 'mvn -Dtest=RestIT test'
                    }
                }
            }
        }
        stage('Stop on Timeout') {
            steps {
                // Set a timeout to stop the stage if necessary
                timeout(time: 30, unit: 'MINUTES') {
                    // Wrap the steps in a try-catch-like construct
                    catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                        // Put the steps that might fail here
                    }
                }
            }
        }
    }
}

Description:

This task involves setting up a GitHub Actions workflow named "CI Node.js Project" for an application built using npm. The workflow is triggered by push and pull request events and is responsible for installing the application's dependencies, running tests, and building the application.


main.yml


name: CI Node.js Project

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 14.x
        uses: actions/setup-node@v3
        with:
          node-version: '14.x'
          cache: 'npm'
      - run: npm ci
      - run: npm run build
      - run: npm test


Description:

In this task, a GitHub Actions workflow is created for a Java application built with Maven. The workflow is named "CI Maven Project" and is designed to build and test the application using multiple Java versions (8, 11, and 17). It is triggered by push and pull request events on the main branch.


main.yml


name: CI Maven Project

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        java-version: [8, 11, 17]
    env:
      APP_PORT: 8080
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK ${{ matrix.java-version }}
        uses: actions/setup-java@v3
        with:
          java-version: ${{ matrix.java-version }}
          distribution: 'temurin'
          cache: maven
      - name: Build with Maven
        run: mvn -B package --file pom.xml -DAPP_PORT=$APP_PORT
      - name: Run tests
        run: mvn test -DAPP_PORT=$APP_PORT


Description:

The task involves configuring permissions and traffic separation for a server running an application. The bash script add-rules.sh automates the setup of firewall rules, ensuring that the database, admin panel, and management panel are accessible as per the specified requirements. It also exports the UFW rule list to a user-specified file for reference.

What was done:

  • Created a bash script named add-rules.sh to automate the configuration.
  • Allowed access to the database port (3306) only from the server component on port 3000.
  • Allowed access to the admin panel (port 3005) only from the IP address 192.168.32.55 and rejected other connections.
  • Allowed incoming traffic to the management panel (port 8099) only from the eth0 network interface.
  • Set a connection limit of 1 connection per second for ports in the range 6050:6055.
  • Exported the UFW rule list to a specified file.

add-rules.sh

#!/bin/bash

# Allow access to the database port on 3306 only from the server component on port 3000
ufw allow from 172.168.0.100 port 3000 to any port 3306

# Allow access to the administration panel on port 3005 only from IP address 192.168.32.55
ufw allow from 192.168.32.55 to any port 3005

# Reject all other connections with a message
ufw reject from any to any port 3005

# Allow incoming traffic to the management panel on port 8099 only from the eth0 interface
ufw allow in on eth0 to any port 8099

# Rate-limit connections for ports in the range 6050:6055
# (ufw's limit rule denies a source after 6 connection attempts within 30 seconds)
ufw limit 6050:6055/tcp

# Export UFW List to the specified file
ufw status verbose > "$1"
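
The script takes the destination file for the exported rules as its first argument and must run with root privileges; the file name below is illustrative:

sudo bash add-rules.sh ufw-rules.txt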


Description:

The task involves containerizing a web application that uses the Flask framework and Redis store. This will be achieved by creating a Dockerfile and a docker-compose.yml file to enable the application to run in a containerized environment.

Dockerfile:

In the Dockerfile, the following environment variables are set:

  • FLASK_APP with a value of home.py.
  • FLASK_RUN_HOST with a value of the server address.

The steps to run the application locally include:

  1. Installing Python 3.7.
  2. Executing the command pip install -r requirements.txt.
  3. Running flask run.

Docker-compose file:

The docker-compose.yml file defines two services:

  • web with a container_name of app that builds the Dockerfile in the project's root.
  • redis, which pulls the Redis image from the Docker Hub.

Result:

The Dockerfile and docker-compose.yml configuration allow the Flask web application to run in a containerized environment. The application is set up to use Python 3.7, install required dependencies, and run the Flask app with the specified environment variables. The docker-compose.yml file defines two services, one for the web application and another for the Redis store, ensuring a complete containerized setup for the application.




 Dockerfile

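A minimal Dockerfile consistent with the steps above might look like the following sketch; the exact base image tag, the working directory, and the COPY layout are assumptions:

# Assumption: Python 3.7 on an Alpine base
FROM python:3.7-alpine

# Environment variables described above
ENV FLASK_APP=home.py
ENV FLASK_RUN_HOST=0.0.0.0

WORKDIR /code

# Install the dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code and run it
COPY . .
CMD ["flask", "run"]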


docker-compose.yml

version: '3'

services:
  web:
    container_name: app
    build: .
    depends_on:
      - redis

  redis:
    image: redis



Description:

The task is to set up Docker containers for both the API and UI components of an application. This involves creating Dockerfiles, configuring Docker containers, and defining Docker Compose services to ensure the application runs in a containerized environment.

Setting up Docker container for the API component:

  1. Create a Dockerfile in the api directory and name it Dockerfile-dev.
  2. Use Node.js version 16 as the base image in the Dockerfile.
  3. Configure the container to run on port 3080.
  4. Set the working directory to /usr/src/app/api in the Dockerfile.

Setting up Docker container for the UI component:

  1. Create a Dockerfile in the ui directory and name it Dockerfile-dev.
  2. Use Node.js version 16 as the base image in the Dockerfile.
  3. Configure the container to run on port 4201.
  4. Set the working directory to /usr/src/app/app-ui/ in the Dockerfile.

Setting up API server:

  1. In the root directory, create a new file named docker-compose.yml.
  2. Define the first service named nodejs-server in the Docker Compose file.
  3. Specify the Dockerfile for this service as Dockerfile-dev located in the api directory.
  4. Set the service to run on port 3080.

Setting up UI server:

  1. Define the second service named angular-ui in the Docker Compose file.
  2. Specify the Dockerfile for this service as Dockerfile-dev located in the ui directory.
  3. Set the service to run on port 4201.

Result:

The Docker setup involves two containers, one for the API and the other for the UI component. The Dockerfiles are configured to use Node.js version 16 and specify the working directories and ports. The Docker Compose file defines services for both components, ensuring they run on the specified ports. This setup allows the application to be deployed in a containerized environment.


 api/Dockerfile-dev

# Set Node.js version
FROM node:16

# Set the working directory
WORKDIR /usr/src/app/api

# Install dependencies and copy the sources
COPY package*.json ./
RUN npm install
COPY . .

# Expose the port
EXPOSE 3080

# Start the API server
CMD ["npm", "start"]


ui/Dockerfile-dev

# Set Node.js version 16
FROM node:16

# Set the working directory
WORKDIR /usr/src/app/app-ui/

# Install dependencies and copy the sources
COPY package*.json ./
RUN npm install
COPY . .

# Expose the port for the UI
EXPOSE 4201

# Start the UI server
CMD ["npm", "start"]


docker-compose.yml

version: "3"

services:
  nodejs-server:
    container_name: api
    build:
      context: ./api
      dockerfile: Dockerfile-dev
    ports:
      - "3080:3080"

  angular-ui:
    container_name: ui
    build:
      context: ./ui
      dockerfile: Dockerfile-dev
    ports:
      - "4201:4201"

Description:

The task involves creating Terraform code to configure a GitHub repository according to specific requirements. These requirements encompass various aspects of repository management and settings, and the Terraform code needs to perform these actions. Additionally, the Terraform code should be executed and saved as a repository secret named "TERRAFORM."

The Terraform code was written to automate these tasks, ensuring efficient and consistent repository management. The code executed successfully, configuring the GitHub repository as per the specified requirements. The Terraform code has been stored as a secret named "TERRAFORM" in the repository for future reference and reusability.

What was done:

  1. Assigned the user "softservedata" as a collaborator for the GitHub repository, granting them appropriate permissions and access.
  2. Created a branch named "develop" as the default branch for the repository, making it the primary branch for version control.
  3. Implemented branch protection rules for the "main" and "develop" branches:
    • Ensured that users cannot merge changes into both branches without creating a pull request, promoting structured development.
    • Allowed merging into the "develop" branch only if there are two approvals, ensuring code quality.
    • Permitted merging into the "main" branch only if the owner approved the pull request, enhancing control.
  4. Added a pull request template named "pull_request_template.md" to the ".github" directory. This template encourages users to provide information about their changes, including issue ticket numbers, and a checklist for self-review and testing.
  5. Created and configured a deploy key named "DEPLOY_KEY" for secure access to the repository.
  6. Established a Discord server and enabled notifications to be triggered when a pull request is created, facilitating communication and collaboration.
  7. For GitHub Actions, performed the following actions:
    • Created a Personal Access Token (PAT) with specific permissions, including "Full control of private repositories" and "Full control of orgs and teams, read and write org projects."
    • Added the PAT to the repository's actions secrets with the name "PAT," allowing secure access and authentication for GitHub Actions workflows.

 main.tf

provider "github" {
  token = var.PAT
  owner = var.GITHUB_OWNER
}

resource "github_actions_secret" "pat" {
  repository      = var.REPOSITORY
  secret_name     = "PAT"
  plaintext_value = var.PAT
}

resource "github_repository_collaborator" "softservedata_collaborator" {
  repository = var.REPOSITORY
  username   = "softservedata"
  permission = "push"
}

resource "github_branch" "develop" {
  repository = var.REPOSITORY
  branch     = "develop"
}

resource "github_branch_default" "default" {
  repository = var.REPOSITORY
  branch     = "develop"
}

resource "github_branch_protection" "main" {
  repository_id              = var.REPOSITORY
  pattern                    = "main"
  allows_deletions           = false
  require_code_owner_reviews = true
}

resource "github_branch_protection" "develop" {
  repository_id              = var.REPOSITORY
  pattern                    = "develop"
  allows_deletions           = false
  required_pull_request_reviews {
    dismiss_stale_reviews          = false
    required_approving_review_count = 2
    dismissal_restrictions          = [github_repository_collaborator.softservedata_collaborator.id]
  }
}

resource "github_repository_file" "pull_request_template" {
  repository = var.REPOSITORY
  file      = ".github/pull_request_template.md"
  content   = "Describe your changes\n\n ##Issue ticket number and link\n\n ##Checklist before requesting a review\n- I have performed a self-review of my code\nIf it is a core feature, I have added thorough tests\nDo we need to implement analytics?\nWill this be part of a product update? If yes, please write one phrase about this update "
}

resource "github_repository_deploy_key" "deploy_key" {
  repository = var.REPOSITORY
  title      = "DEPLOY_KEY"
  key        = var.DEPLOY_KEY
}

resource "github_repository_webhook" "discord_webhook" {
  repository = var.REPOSITORY
  events     = ["pull_request"]

  configuration {
    url          = var.DISCORD_WEBHOOK_URL
    content_type = "json"
  }
}
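
main.tf references several input variables that are not declared above; a minimal variables.tf could look like this (the variable names come from the code, while the types and sensitive flags are assumptions):

variables.tf (sketch)

variable "PAT" {
  type      = string
  sensitive = true
}

variable "GITHUB_OWNER" {
  type = string
}

variable "REPOSITORY" {
  type = string
}

variable "DEPLOY_KEY" {
  type      = string
  sensitive = true
}

variable "DISCORD_WEBHOOK_URL" {
  type      = string
  sensitive = true
}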



Description:

The task involves creating a main.tf Terraform file to provision two Docker containers: an Nginx web server and a MariaDB database server. The Nginx container should serve web content with specific text, and the MariaDB container should have its root password configured using a Terraform variable.

Nginx Container:

The Nginx container's purpose is to serve web content with a specific response text: "My First and Lastname: Your first and lastname." The text should be returned when accessing the web server through a browser.

MariaDB Container:

The MariaDB container should be configured with a root password, and this password should be set using the db_root_password Terraform variable. The variable's value should be passed when running the terraform apply command. This ensures that the root password for the MariaDB container is customizable.

Result:

The main.tf Terraform configuration file provisions two Docker containers: an Nginx container for serving web content with a custom response and a MariaDB container with a configurable root password. This setup enables you to easily deploy and manage both containers for your application.



 main.tf

# Define the variable for the MariaDB root password
variable "db_root_password" {
  type    = string
  default = "passexample"
}

# Set up the Docker provider
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.0.2"
    }
  }
}

provider "docker" {
  host = "unix:///var/run/docker.sock" # Using the Unix socket for Docker
}

# Create a Docker network for the containers
resource "docker_network" "web" {
  name = "web"
}

# Configure and provision the Nginx container
resource "docker_container" "nginx" {
  name         = "nginx"
  image        = "nginx:latest"
  network_mode = "web"

  ports {
    internal = 80
    external = 8080
  }

  # Place the custom response text into the default Nginx web root
  # inside the container
  upload {
    content = "My First and Lastname: Your first and lastname"
    file    = "/usr/share/nginx/html/index.html"
  }
}

# Configure and provision the MariaDB container
resource "docker_container" "mariadb" {
  name         = "mariadb"
  image        = "mariadb:latest"
  network_mode = "web"

  env = [
    "MYSQL_ROOT_PASSWORD=${var.db_root_password}",
  ]
}
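
As described above, the root password can be overridden when applying the configuration (the value below is illustrative):

terraform apply -var="db_root_password=S0meStr0ngPass"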


Description:

The task is about creating an Ansible playbook named main.yml to install and configure an Apache server with an application and a MariaDB server on two Ubuntu 22.04 machines. The playbook should follow specific requirements and adhere to the defined structure. The playbook automates the installation and configuration of the application and MariaDB on the designated machines; its structure follows the provided example, ensuring that the web server and database server are set up correctly. To run the playbook, use the following command, replacing the variables with appropriate values:

ansible-playbook main.yml --extra-vars "db_host=db.some.net db_name=app_db db_user=app_user db_pass=app_pass"

What was done:

  1. The playbook is organized into two groups, [server] for the web server and [db] for the database server, to manage the two distinct components.
  2. Variables named db_host, db_name, db_user, and db_pass have been defined to facilitate the configuration of the connection between the application and the database.


 main.yml

- import_playbook: mariadb.yml
- import_playbook: server.yml


mariadb.yml

- hosts: db
  become: yes
  vars_files:
    - vars/db_vars.yml
  tasks:
    - name: Start message
      ansible.builtin.debug:
        msg: 'Install and prepare the DB server'

    - name: Generate credentials for the database
      ansible.builtin.set_fact:
        login_user: root
        login_pass: "{{ lookup('community.general.random_string') }}"
        root_pass: "{{ lookup('community.general.random_string') }}"
      when: lookup('ansible.builtin.env', 'CI') == 'true'

    - name: Install MariaDB
      ansible.builtin.apt:
        name:
          - mariadb-server
          - python3-pymysql
        update_cache: yes

    - name: Start MariaDB
      ansible.builtin.service:
        name: mysql
        state: started
        enabled: yes

    - name: Set new password for root user
      community.mysql.mysql_user:
        name: "{{ login_user }}"
        password: "{{ login_pass }}"
        login_user: "{{ login_user }}"
        login_password: "{{ root_pass }}"
        login_unix_socket: /var/run/mysqld/mysqld.sock
        check_implicit_admin: yes

    - name: Create a new MariaDB database
      community.mysql.mysql_db:
        login_user: "{{ login_user }}"
        login_password: "{{ login_pass }}"
        name: "{{ db_name }}"
        state: present
        encoding: utf8

    - name: Create user for app
      community.mysql.mysql_user:
        login_user: "{{ login_user }}"
        login_password: "{{ login_pass }}"
        name: "{{ db_user }}"
        password: "{{ db_pass }}"
        host: '%'
        priv: "{{ db_name }}.*:ALL"
        state: present

    - name: Add custom config
      ansible.builtin.copy:
        dest: /etc/mysql/my.cnf
        content: |
          [mysqld]
          bind-address = 0.0.0.0
      notify: Restart MariaDB

  handlers:
    - name: Restart MariaDB
      ansible.builtin.service:
        name: mysql
        state: restarted
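
server.yml is imported by main.yml but not shown above. A minimal sketch consistent with the description (installing and starting Apache on the [server] group) might look like this; the package name and module choices are assumptions:

server.yml (sketch)

- hosts: server
  become: yes
  tasks:
    - name: Install Apache
      ansible.builtin.apt:
        name: apache2
        update_cache: yes

    - name: Start Apache
      ansible.builtin.service:
        name: apache2
        state: started
        enabled: yes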



Task 1


Description:

The task involves parsing book information from both JSON and XML strings, where each string contains details about books, including their title, author, and publication year. The goal is to extract this information and store it in a list of dictionaries, with each dictionary representing a book.

Solution:

import json
import xml.etree.ElementTree as ET

def parse_books(json_string, xml_string):
    books = []

    # Parse the JSON data and add books to the list
    json_data = json.loads(json_string)
    for book in json_data:
        books.append({
            'title': book['title'],
            'author': book['author'],
            'year': book['year']
        })

    # Parse the XML data and add books to the list
    xml_data = ET.fromstring(xml_string)
    for book in xml_data.findall('book'):
        title = book.find('title').text
        author = book.find('author').text
        year = int(book.find('year').text)
        books.append({
            'title': title,
            'author': author,
            'year': year
        })

    return books
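
A quick usage check with minimal sample data (the book entries below are illustrative):

json_books = '[{"title": "Dune", "author": "Frank Herbert", "year": 1965}]'
xml_books = (
    '<books>'
    '<book><title>Hyperion</title><author>Dan Simmons</author><year>1989</year></book>'
    '</books>'
)
print(parse_books(json_books, xml_books))
# [{'title': 'Dune', ...}, {'title': 'Hyperion', ...}]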



Task 2


Description:

The task involves writing a Python function, search_files, that recursively walks the current directory tree and returns a list of files with a given extension. The example below collects all .bmp files and prints their paths.


Solution:

import os

def search_files(extension):
    found = []
    directory = '.'
    for root, dirs, files in os.walk(directory):
        for file in files:
            if file.endswith(extension):
                found.append(os.path.join(root, file))
    return found

for file in search_files('.bmp'):
    print(file)




Task 3


Description:

The task involves creating a Python function, count_log_files, that counts the number of lines in a text file that start with the word "log." The function takes a filename as input, opens the file, and reads it line by line to check for lines starting with "log." The total count of such lines is returned.

Solution:

import re

def count_log_files(filename):
    logs_lines_number = 0
    with open(filename, 'r') as file:
        for line in file:
            if re.match('^log', line):
                logs_lines_number += 1
    return logs_lines_number

# Example usage:
result = count_log_files("file_list1.txt")
print(result)


Description: Containerizing a Web Application with Prometheus Monitoring

This task involved containerizing a web application and its associated Redis database and setting up container monitoring using Prometheus.

Solution:

For the web application, a Dockerfile was created to build the application container. The Dockerfile includes environment variables for configuration and installs required dependencies. The application is exposed on port 8080, and it runs in the Alpine Linux environment. The application's functionality includes connecting to a Redis database.

A docker-compose.yml file was configured to define services. The services are structured as follows:

  1. The webapp service runs the web application container, which is accessible on port 8080.
  2. The redis service runs the Redis database container, allowing the web application to connect to it.
  3. The prometheus service runs the Prometheus container, providing monitoring. Prometheus is accessible on port 9090, and it scrapes metrics from targets such as the web application.

The prometheus.yml file defines the global and scrape configurations for Prometheus. It specifies which targets to scrape for metrics.

Additional monitoring for container resources is enabled using the cadvisor service, which collects container-related metrics.

By executing the command docker-compose up, the containers are started, enabling the web application to be accessed on port 8080 and Prometheus on port 9090. You can use Prometheus to monitor various metrics, including container_cpu_usage_seconds_total, providing insights into CPU resource utilization over time.
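
For example, per-container CPU usage over time can be charted in Prometheus with a query along these lines (the 1m rate window is an arbitrary choice):

rate(container_cpu_usage_seconds_total[1m])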


prometheus.yml

global:
  scrape_interval: 7s

scrape_configs:
  - job_name: "webapp"
    static_configs:
      - targets: ["webapp:8080"]

  - job_name: "cadvisor"
    static_configs:
      - targets: ["cadvisor:8080"]


Dockerfile

FROM alpine:3.18

ENV APP_PORT=8080 
ENV REDIS_ADDRESS=redis
ENV REDIS_PORT=6379

RUN apk update && apk add --no-cache \
    redis \
    prometheus

WORKDIR /app
COPY ./ /app
RUN apk add --update nodejs npm

RUN npm install
EXPOSE 8080 9090 8081 3000

CMD ["sh", "-c", "redis-server & prometheus --config.file=/etc/prometheus/prometheus.yml"]


docker-compose.yml

version: "3"
services:
  webapp:
    build:
      context: .
    ports:
      - "8080:3000"
    volumes:
      - ./src:/app/data

  redis:
    image: redis:latest
    ports:
      - "6379:6379"

  prometheus:
    image: prom/prometheus
    volumes:
      - ./:/etc/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - "9090:9090"

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "8081:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro


Description: Enabling Logging and Log Aggregation with Grafana Loki for a Go Echo Web Application

The task involves setting up logging for a web application developed in the Go Echo framework, containerizing the application, and enabling log aggregation using Grafana Loki.


Dockerfile

FROM golang:1.21-alpine AS build-stage

WORKDIR /app
COPY ./ ./
RUN go mod download && go build -o webapp

FROM alpine

WORKDIR /app
COPY --from=build-stage /app /app
EXPOSE 3000

ENTRYPOINT ["/app/webapp"]


docker-compose.yml

version: '3.8'

services:
  webapp:
    build: .
    labels:
      logging: "promtail"
      logging_jobname: "webapp_varlogs"
    ports:
      - "8080:3000"
    volumes:
      - log-data:/app/log/
    environment:
      - PORT=3000
      - LOG_PATH=/app/log/app.log

  loki:
    image: grafana/loki
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml

  grafana:
    image: grafana/grafana
    ports:
      - "9090:3000"
    volumes:
      - ./loki-config/grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true

  promtail:
    image: grafana/promtail
    volumes:
      - ./loki-config/promtail.yaml:/etc/promtail/webapp-config.yaml
      - log-data:/var/log/webapp
    command: -config.file=/etc/promtail/webapp-config.yaml
    depends_on:
      - loki


promtail.yaml

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: webapp
    static_configs:
      - targets:
          - localhost
        labels:
          job: webapp_varlogs
          __path__: /var/log/webapp/*.log
    pipeline_stages:
      - json:
          expressions:
            level: level
            ts: timestamp
            msg: message
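
docker-compose.yml also mounts ./loki-config/grafana-datasources.yaml, which is not shown above; a minimal provisioning file pointing Grafana at the loki service might look like this (all values are assumptions based on the compose file):

grafana-datasources.yaml (sketch)

apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
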
Terraform: AWS

Description:

In this task, the goal was to create a Terraform configuration file, main.tf, that deploys infrastructure on AWS. The infrastructure includes a Virtual Private Cloud (VPC) with a public subnet, an internet gateway, and an EC2 instance with a security group. Additionally, variables were defined in the variables.tf file to make the deployment flexible and customizable.

The specific variables defined in the variables.tf and their descriptions are as follows:

  • region: Specifies the AWS region where the infrastructure is deployed. The default value is "eu-central-1".
  • availability_zone: Defines the availability zone where the infrastructure is deployed. The default value is "eu-central-1a".
  • cidr: Sets the CIDR block for the VPC. The default value is "10.0.0.0/16".
  • publicCIDR: A list of CIDR blocks for the public subnets. The default value is ["10.0.1.0/24"].
  • environment: Specifies the environment where the infrastructure is deployed. The default value is "dev".
  • instance_type: Defines the instance type used for the EC2 instance. The default value is "t2.micro".
  • instance_AMI: Specifies the AMI ID of the instance to be launched. The default value is "ami-05d34d340fb1d89e5".
  • allowed_ports: A list of allowed ports for the security group. The default value is ["80", "443", "22", "8080"].

A corresponding outputs.tf file was created to define output variables to retrieve information about the created resources.

Solution:

The Terraform code is structured to provision AWS resources efficiently. It follows the principles of infrastructure as code (IaC) and leverages Terraform's capabilities. The code provides flexibility by allowing customization of variables, making it adaptable to different deployment scenarios.

The defined infrastructure resources include:

  1. A VPC: The VPC is created with the specified CIDR block and associated with the provided region and availability zone. It includes tags for better organization.
  2. Public Subnet: A public subnet is defined within the VPC. It is associated with the provided CIDR block and availability zone.
  3. Internet Gateway: An internet gateway is created and associated with the VPC to enable internet access.
  4. Route Table: A public route table is defined to ensure that traffic flows to the internet gateway.
  5. Route Table Association: This associates the public subnet with the public route table.
  6. EC2 Instance: An EC2 instance is launched with the specified instance type and AMI. User data is provided for configuration. The instance is placed in the public subnet.
  7. Security Group: A security group is created with dynamic ingress rules that allow incoming traffic on the specified ports. A default egress rule allows all outgoing traffic.

The outputs.tf file defines output variables that provide information about the created resources, such as the public IP of the EC2 instance, its AMI, type, VPC ID, subnet ID, availability zone, and region.

This Terraform configuration provides a structured and flexible way to deploy infrastructure on AWS, offering control over various aspects of the deployment while adhering to best practices in IaC. Once applied, it results in a functional environment that includes a VPC, public subnet, internet gateway, EC2 instance, and a security group with custom rules.


main.tf

provider "aws" {
region = var.region
}

resource "aws_vpc" "main" {
cidr_block = var.cidr
enable_dns_support = true
enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
count = length(var.publicCIDR)
cidr_block = var.publicCIDR[count.index]
vpc_id = https://aws_vpc.main.id/
availability_zone = var.availability_zone
map_public_ip_on_launch = true
}

resource "aws_internet_gateway" "gw" {
vpc_id = https://aws_vpc.main.id/
}

resource "aws_route_table" "public" {
vpc_id = https://aws_vpc.main.id/

route {
cidr_block = "0.0.0.0/0"
gateway_id = https://aws_internet_gateway.gw.id/
}
}

resource "aws_route_table_association" "public" {
count = length(aws_subnet.public)
subnet_id = aws_subnet.public[count.index].id
route_table_id = https://aws_route_table.public.id/
}

resource "aws_security_group" "instance" {
dynamic "ingress" {
for_each = var.allowed_ports
content {
from_port = ingress.value
to_port = ingress.value
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_instance" "ec2" {
ami = var.instance_AMI
instance_type = var.instance_type
subnet_id = aws_subnet.public[0].id
}


outputs.tf

output "ec2_public_ip" {
value = https://aws_instance.instance.public_ip/
}

output "ec2_ami" {
value = var.instance_AMI
}

output "ec2_type" {
value = var.instance_type
}

output "public_vpc_id" {
value = https://aws_vpc.main.id/
}

output "ec2_subnet_id" {
value = https://aws_subnet.public.id/
}

output "public_subnet_AZ" {
value = https://aws_subnet.public.availability_zone/
}

output "ec2_region" {
value = var.region
}


variables.tf

svariable "region" {
  type = string
  default = "eu-central-1"
}

variable "availability_zone" {
  type = string
  default = "eu-central-1a"
}

variable "cidr" {
  type = string
  default = "10.0.0.0/16"
}

variable "publicCIDR" {
  type = list(string)
  default = ["10.0.1.0/24"]
}

variable "environment" {
  type = string
  default = "dev"
}

variable "instance_type" {
  type = string
  default = "t2.micro"
}

variable "instance_AMI" {
  default = "ami-05d34d340fb1d89e5"
}

variable "allowed_ports" {
  type = list(string)
  default = ["80", "443", "22", "8080"]
}

After self-studying and practicing in my homelab, I finally decided to take on some real-world tasks under the supervision of a mentor and a student community. Here is the bulk of the solutions I wrote during the practical course by SoftServe Academy.


Here I show only the parts of the code that I wrote myself; the code predefined by the tasks is omitted from the snippets.