<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The OpsVerse with Apurv]]></title><description><![CDATA[Sharing hands-on DevOps, AWS, and Cloud tutorials with real-world projects, tips, and automation guides for students and professionals.]]></description><link>https://apurv-gujjar.me</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1751103708690/9436d994-006c-417d-82f6-9cfb8e270b06.png</url><title>The OpsVerse with Apurv</title><link>https://apurv-gujjar.me</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 19:45:08 GMT</lastBuildDate><atom:link href="https://apurv-gujjar.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[🚀 Terraform Series – Day 8]]></title><description><![CDATA[Automating AWS EC2 Setup with Terraform and user_data
Welcome back to our Terraform journey. In infrastructure as code, setting up a server is just the beginning. After your EC2 instance is running, y]]></description><link>https://apurv-gujjar.me/terraform-series-day-8</link><guid isPermaLink="true">https://apurv-gujjar.me/terraform-series-day-8</guid><category><![CDATA[Terraform]]></category><category><![CDATA[Script]]></category><category><![CDATA[automation]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><category><![CDATA[k8s]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Mon, 13 Apr 2026 03:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/76ff913c-44c3-4825-aecc-f7c0a309e516.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Automating AWS EC2 Setup with Terraform and <code>user_data</code></p>
<p>Welcome back to our Terraform journey. In infrastructure as code, setting up a server is just the beginning. After your EC2 instance is running, you need to set it up, install what it needs, and start your apps. Doing this by hand goes against the idea of automation.</p>
<p>In this post, we will demonstrate how to completely automate your server bootstrapping process using Terraform and the AWS user_data feature.</p>
<h3>🎯 Objective</h3>
<p>By the end of this guide, you will learn how to automatically install and configure an Nginx web server on a newly provisioned AWS EC2 instance using a Terraform <code>user_data</code> script.</p>
<h3>🧩 Step 1: Understanding the Power of <code>user_data</code></h3>
<p><strong>The Problem with Manual Configuration</strong></p>
<p>Imagine you just used Terraform to spin up a fresh EC2 instance. Without an automation script, your next steps would look like this:</p>
<ol>
<li><p>SSH into the instance.</p>
</li>
<li><p>Manually run package updates.</p>
</li>
<li><p>Install Nginx.</p>
</li>
<li><p>Start the service.</p>
</li>
<li><p>Create a custom HTML page.</p>
</li>
</ol>
<p>This approach is <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">time-consuming</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">, </mark> <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">prone to human error</mark></strong>, and most importantly, <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">not scalable</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">.</mark> If you need to spin up 100 web servers behind a load balancer, logging into each one manually is impossible.</p>
<h3>The Solution: Bootstrap Scripts</h3>
<p>AWS provides a feature called <code>user_data</code> that allows you to pass a script to your instance at launch.</p>
<ul>
<li><p>✔️ <strong>Runs automatically</strong> the very first time the instance boots.</p>
</li>
<li><p>✔️ <strong>Fully automates</strong> software installation and configuration.</p>
</li>
<li><p>✔️ <strong>Scales effortlessly</strong> across as many instances as you deploy.</p>
</li>
</ul>
<p><strong>In short:</strong> <code>user_data</code> is your EC2 bootstrapping engine.</p>
<h3>📄 Step 2: Create the Bootstrapping Script (<code>nginx.sh</code>)</h3>
<p>First, we need to define the commands we want our server to run on startup. We will create a simple bash script that installs Nginx and creates a custom landing page.</p>
<p><strong>Create a new file named</strong> <code>nginx.sh</code>:</p>
<pre><code class="language-shell">touch nginx.sh
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/0670068c-a5ae-4fac-9025-ae97babb83a4.png" alt="" style="display:block;margin:0 auto" />

<p>Add the following content to the file:</p>
<pre><code class="language-shell">#!/bin/bash

# Update package lists
sudo apt-get update

# Install Nginx silently (-y prevents the prompt)
sudo apt-get install nginx -y

# Start the Nginx service
sudo systemctl start nginx

# Enable Nginx to start automatically if the server reboots
sudo systemctl enable nginx

# Create a custom HTML landing page
# (tee is used so the write also works if you run the script manually without root)
echo "&lt;h1&gt; Terraform testing with scripting &lt;/h1&gt;" | sudo tee /var/www/html/index.html
</code></pre>
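<p>Before handing the script to Terraform, it is worth a quick local syntax check. This is an optional extra step (not part of the original workflow): <code>bash -n</code> parses the script without executing any of its commands, so a typo is caught before your instance ever boots.</p>
<pre><code class="language-shell"># Recreate the bootstrap script (custom index.html line omitted here),
# then parse it with bash -n, which checks syntax without running anything.
printf '%s\n' '#!/bin/bash' \
  'sudo apt-get update' \
  'sudo apt-get install nginx -y' \
  'sudo systemctl start nginx' \
  'sudo systemctl enable nginx' | tee nginx.sh
bash -n nginx.sh
echo "syntax OK"
</code></pre>
<p>If <code>bash -n</code> prints an error, fix the script before running <code>terraform apply</code> — a broken <code>user_data</code> script fails silently inside the instance, which is much harder to debug.</p>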
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/b4762218-23fb-4e3e-8a04-b44cfc7d2e7e.png" alt="" style="display:block;margin:0 auto" />

<h3>🔗 Step 3: Attach the Script in Terraform</h3>
<p>Now, we need to tell Terraform to pass this script to our EC2 instance during creation. We do this by utilizing the <code>file()</code> function within the <code>user_data</code> argument of our <code>aws_instance</code> resource.</p>
<p>O<strong>pen your</strong> <a href="http://ec2.tf"><code>ec2.tf</code></a> file and configure your instance block:-</p>
<pre><code class="language-shell">resource "aws_instance" "my_instance" {
  ami           = var.ec2_ami_id
  instance_type = var.ec2_instance_type

  # Attach your SSH key pair
  key_name = aws_key_pair.my_key.key_name

  # Attach the security group (make sure port 80 is open!)
  vpc_security_group_ids = [aws_security_group.my_groups.id]

  # Inject the bootstrap script here
  user_data = file("nginx.sh")

  # Define root storage
  root_block_device {
    volume_size = var.ec2_root_storage_size
    volume_type = "gp3"
  }

  tags = {
    Name = "terraform-ec2-nginx"
  }
}
</code></pre>
<div>
<div>💡</div>
<div>Note: Using <code>file("nginx.sh")</code> keeps your Terraform code clean by separating the bash logic from the HCL infrastructure definitions.</div>
</div>
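<p>A related option worth knowing: if the script ever needs values from your Terraform code (an app name, a port), the built-in <code>templatefile()</code> function renders a template before passing it to the instance. A minimal sketch, assuming a hypothetical template file <code>nginx.sh.tpl</code> containing an <code>${app_name}</code> placeholder (these names are illustrative, not from this series):</p>
<pre><code class="language-shell"># user_data rendered from a template instead of a static file
# (nginx.sh.tpl and app_name are illustrative names)
user_data = templatefile("nginx.sh.tpl", {
  app_name = "terraform-demo"
})
</code></pre>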

<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/7634a350-209b-4297-a02a-08aadbd0ff75.png" alt="" style="display:block;margin:0 auto" />

<h3>⚙️ Step 4: Execute the Pipeline</h3>
<p>With the script created and Terraform configured, it's time to deploy. Run the following commands in your terminal:</p>
<pre><code class="language-plaintext">terraform init
terraform apply -auto-approve
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/e8c3cb46-5109-4f25-a62d-c03461ef3ed7.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/e8221ec0-5081-4de6-a233-8a42dc4a22f9.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/07eeff41-0c6b-42fd-acca-b2348385ce79.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/f6684dfd-0b58-48a1-89e4-5b441c90927e.png" alt="" style="display:block;margin:0 auto" />

<h3>What happens internally?</h3>
<p>Once you hit apply, an elegant automated workflow kicks off:</p>
<ol>
<li><p><strong>Infrastructure Provisioned:</strong> Terraform calls the AWS API to launch a new EC2 instance.</p>
</li>
<li><p><strong>Script Passed:</strong> The contents of <code>nginx.sh</code> are passed to the instance metadata.</p>
</li>
<li><p><strong>Bootstrapping Execution:</strong> As the EC2 instance boots up, the OS executes the script as the <code>root</code> user.</p>
</li>
<li><p><strong>App Deployed:</strong> Packages are updated, Nginx is installed, the service is started, and your custom HTML page is generated.</p>
</li>
</ol>
<p>Within minutes, you can grab the public IP of your new EC2 instance, paste it into your browser, and see your custom HTML page—zero SSH required.</p>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">🧪 Testing: Verify NGINX on EC2 Instance</mark></h3>
<p>After provisioning the EC2 instance using <strong>Terraform</strong>, we need to test whether <strong>NGINX is properly installed and running</strong>.</p>
<h3>🔹 Step 1: Connect to EC2 via SSH</h3>
<pre><code class="language-shell">ssh -i "your-key.pem" ubuntu@&lt;EC2-PUBLIC-IP&gt;
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/8c3eb8a8-a3c5-4bee-9ef7-0ffc66afec7b.png" alt="" style="display:block;margin:0 auto" />

<p>✔ Replace:</p>
<ul>
<li><p><code>your-key.pem</code> → your private key</p>
</li>
<li><p><code>&lt;EC2-PUBLIC-IP&gt;</code> → instance public IP</p>
</li>
</ul>
<h3>🔹 Step 2: Check NGINX Status</h3>
<pre><code class="language-shell">sudo systemctl status nginx
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/9f427d6c-204c-43e2-9efd-10dad93d2728.png" alt="" style="display:block;margin:0 auto" />

<p>✔ Expected Output:</p>
<ul>
<li><p><code>active (running)</code> → ✅ NGINX is working</p>
</li>
<li><p><code>inactive / failed</code> → ❌ issue needs fixing</p>
</li>
</ul>
<h3>🔹 Step 3: Test via Browser</h3>
<p>Open your browser and hit:</p>
<pre><code class="language-shell">http://&lt;EC2-PUBLIC-IP&gt;
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/619516b2-c1aa-48a3-9402-a3503de4e255.png" alt="" style="display:block;margin:0 auto" />

<p>✔ Expected:</p>
<ul>
<li>The custom page from the <code>user_data</code> script: <strong>Terraform testing with scripting</strong> (the script overwrote the default <strong>NGINX Welcome Page</strong>)</li>
</ul>
<h3>🔹 Step 4: Test via Curl (CLI Testing)</h3>
<pre><code class="language-plaintext">curl http://localhost
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/d93d1b5b-90a8-4868-96d5-2022836b4e65.png" alt="" style="display:block;margin:0 auto" />

<p>OR from your local system:</p>
<pre><code class="language-plaintext">curl http://&lt;EC2-PUBLIC-IP&gt;
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/012ae3eb-d89e-4c62-b350-e0105478f032.png" alt="" style="display:block;margin:0 auto" />

<p>✔ If an HTML response comes back → ✅ the server is working</p>
<h3>🔹 Step 5: Check Port 80 (Important for DevOps)</h3>
<pre><code class="language-plaintext">sudo netstat -tulpn | grep :80
</code></pre>
<p>OR</p>
<pre><code class="language-plaintext">sudo ss -tulpn | grep :80
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/cbf8c0a4-7034-4ca3-991d-02050306b8fc.png" alt="" style="display:block;margin:0 auto" />

<p>✔ Confirms:</p>
<ul>
<li>NGINX is listening on port 80</li>
</ul>
<h3>💡 Key Takeaways</h3>
<ul>
<li><p><strong>No More Manual SSH:</strong> Bootstrapping completely eliminates the need to manually configure infrastructure after it is provisioned.</p>
</li>
<li><p><strong>Separation of Concerns:</strong> By using the <code>file()</code> function, you keep your shell scripts separate from your Terraform code, making both easier to maintain.</p>
</li>
<li><p><strong>Consistency at Scale:</strong> A bootstrap script guarantees that every server you provision is configured exactly the same way, every single time.</p>
</li>
</ul>
<h3><strong>👨‍💻 About the Author</strong></h3>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt="" style="display:block;margin:0 auto" />

<p>“A complete Terraform series covering everything from fundamentals to advanced real-world infrastructure automation in a DevOps environment.”</p>
<h3><strong>📬 Let's Stay Connected</strong></h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Terraform Series – Day 7]]></title><description><![CDATA[Variables & Outputs (Make Your Code Smart 🔥)
In real-world DevOps, writing flexible and reusable code is very important.Today, we will learn how to use Variables and Outputs in Terraform to make our ]]></description><link>https://apurv-gujjar.me/terraform-series-day-7</link><guid isPermaLink="true">https://apurv-gujjar.me/terraform-series-day-7</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><category><![CDATA[k8s]]></category><category><![CDATA[ec2]]></category><category><![CDATA[vpc]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Sun, 12 Apr 2026 03:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/619fdeca-813e-4244-896f-b0ad5b210c03.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Variables &amp; Outputs (Make Your Code Smart 🔥)</p>
<p>In real-world DevOps, writing flexible and reusable code is very important.<br />Today, we will learn how to use <strong>Variables</strong> and <strong>Outputs</strong> in Terraform to make our infrastructure clean, dynamic, and production-ready.</p>
<p>Till now, we were writing Terraform code…<br />but there was one problem 👇</p>
<p>👉 <strong>Everything was hardcoded</strong></p>
<p>And in real DevOps, <mark class="bg-yellow-200 dark:bg-yellow-500/30">hardcoding = BIG mistake</mark> ❌</p>
<p>So today we fix that 💡</p>
<h3>🧩 Step 1: The Real Problem</h3>
<p>Imagine this:</p>
<pre><code class="language-shell">instance_type = "t2.micro"
</code></pre>
<p>Looks simple… but 👇</p>
<p>❌ Want to upgrade instance? Change everywhere</p>
<p>❌ Want reuse? Not possible</p>
<p>❌ Working in team? Becomes messy</p>
<p>👉 Basically: Not scalable</p>
<h3>💡 Step 2: The Smart Solution → Variables</h3>
<p>Instead of fixing values in code, we use <strong>variables</strong></p>
<p>Think like this:<br />👉 “Keep values separate, keep code clean”</p>
<h3>✔ Why Variables?</h3>
<ul>
<li><p>Change once → Apply everywhere</p>
</li>
<li><p>Clean &amp; readable code</p>
</li>
<li><p>Reusable infrastructure</p>
</li>
<li><p>Industry-level practice</p>
</li>
</ul>
<p>📌 <strong>In short:</strong><br />Variables = Flexibility + Clean Code + DevOps Standard</p>
<h3>📄 Step 3: Create <code>variables.tf</code></h3>
<pre><code class="language-shell">variable "ec2_instance_type" {
  default = "t2.micro"
  type    = string
}

variable "ec2_root_storage_size" {
  default = 10
  type    = number
}

variable "ec2_ami_id" {
  default = "ami-0cb91c7de36eed2cb"
  type    = string
}
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/6051ea1a-7882-46e1-93af-4f09f089883f.png" alt="" style="display:block;margin:0 auto" />

<p>🧠 Simple Understanding:</p>
<ul>
<li><p><code>variable</code> → variable name</p>
</li>
<li><p><code>default</code> → default value</p>
</li>
<li><p><code>type</code> → data type</p>
</li>
</ul>
<p>👉 Values are now separated from the main code ✅</p>
<h3>🔗 Step 4: Use Variables in <code>ec2.tf</code></h3>
<p>Now the real magic 🔥</p>
<pre><code class="language-shell"># Create Key Pair
resource "aws_key_pair" "my_key" {
  key_name   = "terra-key-aws"
  public_key = file("terra-key-aws.pub")
}


# Default VPC
resource "aws_default_vpc" "default" {}

# Security Group
resource "aws_security_group" "my_groups" {
  name        = "my-group"
  description = "Security group for EC2"
  vpc_id      = aws_default_vpc.default.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow SSH"
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow HTTP"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound"
  }

  tags = {
    Name = "automate-sg"
  }
}

# EC2 Instance
resource "aws_instance" "my_instance" {
  ami                    = var.ec2_ami_id
  instance_type          = var.ec2_instance_type
  key_name               = aws_key_pair.my_key.key_name
  vpc_security_group_ids = [aws_security_group.my_groups.id]

  root_block_device {
    volume_size = var.ec2_root_storage_size
    volume_type = "gp3"
  }

  tags = {
    Name = "terra-ec2"
  }
}
</code></pre>
<p>⚡ Important Line:</p>
<pre><code class="language-shell">var.&lt;variable_name&gt;
</code></pre>
<p>👉 Example:</p>
<pre><code class="language-shell">var.ec2_instance_type
</code></pre>
<h3>🔄 Step 5: Real Power of Variables</h3>
<p>Before:</p>
<pre><code class="language-plaintext">instance_type = "t2.micro"
</code></pre>
<p>After:</p>
<pre><code class="language-plaintext">default = "t3.micro"
</code></pre>
<p>✔ Change in one place</p>
<p>✔ Applied everywhere automatically</p>
<p>🔥 That’s the power</p>
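<p>Editing the <code>default</code> is not the only way. Terraform also lets you override variables at apply time, without touching <code>variables.tf</code> at all — a quick sketch of the standard options:</p>
<pre><code class="language-shell"># Override a single variable on the command line
terraform apply -var="ec2_instance_type=t3.micro"

# Or keep per-environment values in a terraform.tfvars file,
# which Terraform loads automatically, e.g.:
#   ec2_instance_type = "t3.micro"
terraform apply
</code></pre>
<p>The <code>default</code> then acts as a fallback, while real values live outside the code — exactly the separation this post is about.</p>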
<h3>📤 Step 6: Now Let’s Talk About Outputs</h3>
<p>Deployment is done… but now 👇</p>
<p>👉 How do you get EC2 Public IP?</p>
<h3>😓 Problem</h3>
<p>❌ You have to manually check AWS Console</p>
<h3>💡 Solution → Outputs</h3>
<p>Terraform will show it directly 🔥</p>
<h3>📄 Step 7: Create <code>outputs.tf</code></h3>
<pre><code class="language-shell">output "ec2_public_ip" {
  value = aws_instance.my_instance.public_ip
}

output "ec2_public_dns" {
  value = aws_instance.my_instance.public_dns
}
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/5ea76f21-8d21-40ba-8169-6d35579afa17.png" alt="" style="display:block;margin:0 auto" />

<h3>🧠 What Happens Now?</h3>
<p>When you run:</p>
<pre><code class="language-plaintext">terraform apply
</code></pre>
<p>👉 At the end, Terraform shows:</p>
<p>✔ Public IP</p>
<p>✔ Public DNS</p>
<p>Directly in terminal 🎯</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/804365ec-f053-449e-95fc-2dbdb35ca982.png" alt="" style="display:block;margin:0 auto" />
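<p>You can also read outputs later, without re-running <code>apply</code>: <code>terraform output</code> prints every output stored in the state, and the <code>-raw</code> flag prints a single bare value, which is handy in scripts:</p>
<pre><code class="language-shell"># Show all outputs from the current state
terraform output

# Print just the public IP, without quotes
terraform output -raw ec2_public_ip

# Example: SSH straight to the instance
ssh -i terra-key-aws ubuntu@$(terraform output -raw ec2_public_ip)
</code></pre>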

<h2><strong>👨‍💻 About the Author</strong></h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt="" style="display:block;margin:0 auto" />

<p>“A complete Terraform series covering everything from fundamentals to advanced real-world infrastructure automation in a DevOps environment.”</p>
<h3><strong>📬 Let's Stay Connected</strong></h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Terraform Series – Day 6]]></title><description><![CDATA[🎯 Objective
In this hands-on, we will:

Generate SSH key

Create key pair using Terraform

Configure VPC & Security Group

Launch EC2 instance

Connect via SSH

Clean up resources


👉 This is your f]]></description><link>https://apurv-gujjar.me/terraform-series-day-6</link><guid isPermaLink="true">https://apurv-gujjar.me/terraform-series-day-6</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[k8s]]></category><category><![CDATA[GCP]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Sat, 11 Apr 2026 05:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/02c34af2-6425-4d0c-807a-f65d028e4ed8.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>🎯 Objective</h3>
<p>In this hands-on, we will:</p>
<ul>
<li><p>Generate SSH key</p>
</li>
<li><p>Create key pair using Terraform</p>
</li>
<li><p>Configure VPC &amp; Security Group</p>
</li>
<li><p>Launch EC2 instance</p>
</li>
<li><p>Connect via SSH</p>
</li>
<li><p>Clean up resources</p>
</li>
</ul>
<p>👉 This is your <strong>first real-world Terraform task</strong></p>
<h3>🧩 Step 1: Generate SSH Key</h3>
<pre><code class="language-shell">ssh-keygen
</code></pre>
<p>✔ After you enter <code>terra-key-aws</code> at the filename prompt, this creates:</p>
<ul>
<li><p><code>terra-key-aws</code> → Private key</p>
</li>
<li><p><code>terra-key-aws.pub</code> → Public key</p>
</li>
</ul>
<p>👉 We will use this to access EC2</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/54b6780c-ca45-4254-8db3-9424d6f991fd.png" alt="" style="display:block;margin:0 auto" />
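<p>Tip (optional, not required for this hands-on): <code>ssh-keygen</code> can also be run non-interactively so the key lands in the current directory with exactly the name the Terraform code expects:</p>
<pre><code class="language-shell"># Generate a key pair named terra-key-aws / terra-key-aws.pub
# -t ed25519 : modern key type    -N "" : empty passphrase (demo only)
ssh-keygen -t ed25519 -f terra-key-aws -N ""
ls terra-key-aws terra-key-aws.pub
</code></pre>
<p>An empty passphrase is fine for a throwaway lab key, but protect real keys with a passphrase.</p>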

<h3>📄 Step 2: Create Terraform File</h3>
<pre><code class="language-shell">touch ec2.tf
</code></pre>
<h3>🧱 Step 3: Add Terraform Code</h3>
<pre><code class="language-shell"># Create Key Pair
resource "aws_key_pair" "my_key" {
  key_name   = "terra-key-aws"
  public_key = file("terra-key-aws.pub")
}


# Default VPC
resource "aws_default_vpc" "default" {}

# Security Group
resource "aws_security_group" "my_groups" {
  name        = "my-group"
  description = "Security group for EC2"
  vpc_id      = aws_default_vpc.default.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow SSH"
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow HTTP"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound"
  }

  tags = {
    Name = "automate-sg"
  }
}

# EC2 Instance
resource "aws_instance" "my_instance" {
  ami                    = "ami-0cb91c7de36eed2cb"
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.my_key.key_name
  vpc_security_group_ids = [aws_security_group.my_groups.id]

  root_block_device {
    volume_size = 10
    volume_type = "gp3"
  }

  tags = {
    Name = "terra-ec2"
  }
}
</code></pre>
<h3>⚙️ Step 4: Initialize Terraform</h3>
<pre><code class="language-shell">terraform init
</code></pre>
<p>✔ Downloads AWS provider</p>
<p>✔ Prepares working directory</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/c17141c1-2782-45b6-bfdc-062e48ac3ff8.png" alt="" style="display:block;margin:0 auto" />

<h3>✅ Step 5: Validate Configuration</h3>
<pre><code class="language-shell">terraform validate
</code></pre>
<p>✔ Ensures syntax is correct</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/76034d58-2af8-459e-8219-6435ba6f1611.png" alt="" style="display:block;margin:0 auto" />

<h3>📊 Step 6: Plan Execution</h3>
<pre><code class="language-shell">terraform plan
</code></pre>
<p>✔ Shows resources to be created:</p>
<ul>
<li><p>Key Pair</p>
</li>
<li><p>VPC</p>
</li>
<li><p>Security Group</p>
</li>
<li><p>EC2 Instance</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/60ce0fec-7f70-4ae4-83bf-d5a7fb4977d2.png" alt="" style="display:block;margin:0 auto" />

<h3>🚀 Step 7: Apply (Create Infrastructure)</h3>
<pre><code class="language-shell">terraform apply
</code></pre>
<p>👉 Type <code>yes</code> to confirm</p>
<h3>❌ Common Error: Not Authorized</h3>
<p>👉 Reason:</p>
<ul>
<li>IAM user does not have required permissions</li>
</ul>
<p>✔ Fix:</p>
<p>Go to <strong>AWS IAM → Attach Policy</strong></p>
<ul>
<li><p><code>AdministratorAccess</code> (easy way)</p>
<p>OR</p>
</li>
<li><p><code>AmazonEC2FullAccess</code></p>
</li>
<li><p><code>AmazonVPCFullAccess</code></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/eefe5fda-8649-433a-840b-d4f56916dfe9.png" alt="" style="display:block;margin:0 auto" />

<h3>🖥 Step 8: Verify in AWS Console</h3>
<p>Go to EC2 Dashboard:</p>
<p>✔ Instance running<br />✔ Security group attached<br />✔ Key pair created</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/e2d2d40a-1d81-4172-b533-06b6a6c02d16.png" alt="" style="display:block;margin:0 auto" />

<h3>🔐 Step 9: Fix Key Permission</h3>
<pre><code class="language-shell">chmod 400 terra-key-aws
</code></pre>
<p>👉 Required before SSH</p>
<h3>🔗 Step 10: Connect to EC2</h3>
<pre><code class="language-shell">ssh -i terra-key-aws ubuntu@&lt;your-public-ip&gt;
</code></pre>
<p>👉 Now your server is live 🚀</p>
<h3>🧹 Step 11: Destroy Resources (IMPORTANT)</h3>
<pre><code class="language-shell">terraform destroy
</code></pre>
<p>👉 Prevent unnecessary AWS charges 💸</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/269447f8-0cf7-4332-a1f7-36eeb4b924ea.png" alt="" style="display:block;margin:0 auto" />

<h2><strong>👨‍💻 About the Author</strong></h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt="" style="display:block;margin:0 auto" />

<p>“A complete Terraform series covering everything from fundamentals to advanced real-world infrastructure automation in a DevOps environment.”</p>
<h3><strong>📬 Let's Stay Connected</strong></h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Terraform Series – Day 5]]></title><description><![CDATA[Terraform Providers, Resource Types & Naming
In today’s Terraform journey, I explored one of the most fundamental concepts that every DevOps engineer must understand Providers and Resource Naming Stru]]></description><link>https://apurv-gujjar.me/terraform-series-day-5</link><guid isPermaLink="true">https://apurv-gujjar.me/terraform-series-day-5</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[k8s]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Fri, 10 Apr 2026 03:40:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/d9ea1363-bd2e-407b-9fd5-f6917ec0316b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Terraform Providers, Resource Types &amp; Naming</h3>
<p>In today’s Terraform journey, I explored one of the most fundamental concepts that every DevOps engineer must understand: <strong>Providers and the Resource Naming Structure</strong>.</p>
<p>These concepts are the backbone of Terraform because they define <strong>how Terraform communicates with real-world infrastructure</strong>.</p>
<h3>🧠 1. Understanding Terraform Resource Structure</h3>
<p>Every infrastructure component in Terraform is defined using a <strong>resource block</strong>.</p>
<p>📌 Syntax:</p>
<pre><code class="language-shell">resource "&lt;provider&gt;_&lt;resource_type&gt;" "&lt;name&gt;" {
  arguments
}
</code></pre>
<h3>📌 Example:</h3>
<pre><code class="language-shell">resource "aws_instance" "my_vm" {
  instance_type = "t2.micro"
}
</code></pre>
<h3>🔍 Deep Breakdown:</h3>
<ul>
<li><p><strong>provider (aws)</strong><br />→ Defines which platform you are using (AWS, GCP, Azure, etc.)</p>
</li>
<li><p><strong>resource_type (instance)</strong><br />→ Specifies what you want to create (VM, bucket, network, etc.)</p>
</li>
<li><p><strong>name (my_vm)</strong><br />→ A local identifier inside Terraform (you can name it anything)</p>
</li>
<li><p><strong>arguments</strong><br />→ Configuration details (size, region, OS, etc.)</p>
</li>
</ul>
<h3>⚡ Important Insight:</h3>
<p>👉 Terraform does <strong>not identify resources by name alone</strong>, but by:</p>
<pre><code class="language-plaintext">provider + resource_type + name
</code></pre>
<p>This combination must always be unique.</p>
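<p>For example, two resources of the same type can live side by side as long as their local names differ, and each one is addressed by the full combination (an illustrative sketch, not from this series):</p>
<pre><code class="language-shell">resource "aws_instance" "web" {
  instance_type = "t2.micro"
}

resource "aws_instance" "db" {
  instance_type = "t3.small"
}

# Referenced elsewhere as:
#   aws_instance.web.id
#   aws_instance.db.id
</code></pre>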
<h3>🌐 2. What is a Provider in Terraform?</h3>
<p>A <strong>provider</strong> is a plugin that allows Terraform to interact with external APIs.</p>
<p>👉 In simple words:</p>
<blockquote>
<p>Provider = Bridge between Terraform and Cloud/Service</p>
</blockquote>
<h3>🔧 Why Providers are Needed?</h3>
<p>Without providers:</p>
<ul>
<li><p>Terraform cannot talk to AWS, GCP, or any service</p>
</li>
<li><p>No infrastructure can be created</p>
</li>
</ul>
<h3>📌 Popular Providers:</h3>
<ul>
<li><p>AWS (Amazon Web Services)</p>
</li>
<li><p>Google Cloud Platform (GCP)</p>
</li>
<li><p>Azure</p>
</li>
<li><p>Local (for files, local operations)</p>
</li>
</ul>
<h3>⚡ Real-Life Analogy:</h3>
<p>Think of Terraform as a <strong>remote control</strong><br />and providers as the <strong>signal system</strong> that connects it to devices.</p>
<p>Without signals → remote is useless ❌</p>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">⚙️ 3. Ways to Use Providers in Terraform</mark></h3>
<p>Terraform gives flexibility in how you define providers.</p>
<h3>🔹 Method 1: Implicit Provider (Automatic Way) ✅</h3>
<p>👉 The easiest and most beginner-friendly method.</p>
<p>You don’t explicitly define the provider — Terraform automatically detects it.</p>
<p>📌 Example:</p>
<pre><code class="language-shell">
resource "aws_instance" "my_vm" {
  ami           = "ami-0ec10929233384c7f"   # Example Ubuntu AMI (Mumbai)
  instance_type = "t2.micro"

  tags = {
    Name = "Terraform-VM"
  }
}
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/a4f8dd2a-95cb-47ae-9c63-eea84a90e482.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/d8fbddde-394a-4009-8e2d-efe6f0539043.png" alt="" style="display:block;margin:0 auto" />

<h3>🔍 What Happens Behind the Scenes?</h3>
<ol>
<li><p>Terraform sees <code>aws_instance</code></p>
</li>
<li><p>It understands provider = <strong>aws</strong></p>
</li>
<li><p>During initialization, it automatically downloads the provider</p>
</li>
</ol>
<h3>▶️ Steps:</h3>
<pre><code class="language-plaintext">terraform init
</code></pre>
<p>✔ Provider gets installed automatically  </p>
<p>✔ No manual configuration needed</p>
<h3>👍 When to Use:</h3>
<ul>
<li><p>Learning phase</p>
</li>
<li><p>Small projects</p>
</li>
<li><p>Quick testing</p>
</li>
</ul>
<h3>🔹 Method 2: Explicit Provider (Declarative Way) ✅</h3>
<p>👉 This is the <strong>recommended approach for <mark class="bg-yellow-200 dark:bg-yellow-500/30"> real-world projects</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">.</mark></p>
<p>You explicitly define:</p>
<ul>
<li><p>Provider source</p>
</li>
<li><p>Version</p>
</li>
</ul>
<p>📌 Step 1: Define Providers</p>
<pre><code class="language-shell">terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 6.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}
</code></pre>
<h3>📌 Step 2: Initialize</h3>
<pre><code class="language-plaintext">terraform init
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/6baa905b-c832-4c08-82df-9ddb4ce18523.png" alt="" style="display:block;margin:0 auto" />

<h3>🔍 What Happens:</h3>
<ul>
<li><p>Terraform downloads exact versions</p>
</li>
<li><p>Ensures consistency across systems</p>
</li>
<li><p>Prevents unexpected breaking changes</p>
</li>
</ul>
<h3>👍 When to Use:</h3>
<ul>
<li><p>Production environments</p>
</li>
<li><p>Team projects</p>
</li>
<li><p>Version-controlled infrastructure</p>
</li>
</ul>
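<p>💡 Bonus: real projects sometimes need the <strong>same provider twice</strong>, for example two AWS regions. Terraform handles this with a provider <code>alias</code>. A minimal sketch (the alias name <code>mumbai</code> and the AMI ID are just examples):</p>
<pre><code class="language-shell">provider "aws" {
  region = "us-east-1"          # default provider configuration
}

provider "aws" {
  alias  = "mumbai"             # second configuration of the same provider
  region = "ap-south-1"
}

resource "aws_instance" "backup_vm" {
  provider      = aws.mumbai    # use the aliased provider
  ami           = "ami-0ec10929233384c7f"
  instance_type = "t2.micro"
}
</code></pre>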
<h3>⚡ 4. Implicit vs Explicit (Quick Understanding)</h3>
<h3>Implicit:</h3>
<ul>
<li><p>Automatic</p>
</li>
<li><p>Less control</p>
</li>
<li><p>Beginner-friendly</p>
</li>
</ul>
<h3>Explicit:</h3>
<ul>
<li><p>Manual definition</p>
</li>
<li><p>Full control</p>
</li>
<li><p>Production-ready</p>
</li>
</ul>
<h3>🚀 5. Pro Tips (Important for DevOps)</h3>
<p>✔ Always use <strong>explicit providers in real projects</strong><br />✔ Lock provider versions to avoid errors<br />✔ Run <code>terraform init</code> after any provider change<br />✔ Keep provider configuration in a separate file (best practice)</p>
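<p>📁 For the last tip, here is one common (illustrative) layout that keeps provider settings in their own file:</p>
<pre><code class="language-plaintext">project/
├── providers.tf   # terraform {} and provider "aws" {} blocks
├── main.tf        # resources
├── variables.tf   # input variables
└── outputs.tf     # outputs
</code></pre>
<p>Terraform loads every <code>.tf</code> file in the directory, so this split is purely for readability.</p>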
<h3>📌 Final Summary</h3>
<ul>
<li><p>Terraform uses <strong>providers</strong> to connect with cloud/services</p>
</li>
<li><p>Resource naming follows:</p>
<pre><code class="language-plaintext">&lt;provider&gt;_&lt;resource_type&gt;
</code></pre>
</li>
<li><p>Providers can be:</p>
<ul>
<li><p>Automatically detected (Implicit)</p>
</li>
<li><p>Manually defined (Explicit)</p>
</li>
</ul>
</li>
<li><p><code>terraform init</code> is required to install providers</p>
</li>
</ul>
<h3>🔥 Conclusion</h3>
<p>Understanding providers is a <strong>game-changer in Terraform</strong>.</p>
<p>Once you master this concept, you unlock the ability to:</p>
<ul>
<li><p>Work with multiple cloud platforms</p>
</li>
<li><p>Write scalable infrastructure code</p>
</li>
<li><p>Build real-world DevOps projects</p>
</li>
</ul>
<h2><strong>👨‍💻 About the Author</strong></h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt="" style="display:block;margin:0 auto" />

<p>“A complete Terraform series covering everything from fundamentals to advanced real-world infrastructure automation in a DevOps environment.”</p>
<h3><strong>📬 Let's Stay Connected</strong></h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Terraform Series – Day 4]]></title><description><![CDATA[Terraform Workflow: init, validate, plan, apply & destroy
🧠 Before Starting (AWS Setup)
Before using Terraform with AWS, we first need to configure AWS access on our local machine.
👉 Steps:

Install]]></description><link>https://apurv-gujjar.me/terraform-series-day-4</link><guid isPermaLink="true">https://apurv-gujjar.me/terraform-series-day-4</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><category><![CDATA[vscode extensions]]></category><category><![CDATA[k8s]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Fri, 10 Apr 2026 03:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/5f66a303-da04-464c-b45d-df7898ba9e70.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Terraform Workflow: init, validate, plan, apply &amp; destroy</h3>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">🧠 Before Starting (AWS Setup)</mark></h3>
<p>Before using Terraform with AWS, we first need to configure AWS access on our local machine.</p>
<p>👉 Steps:</p>
<ul>
<li>Install <strong>AWS CLI</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/b7913dd3-3f8e-4c4f-ab1e-5c88ee2896ce.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li>Configure AWS using:</li>
</ul>
<pre><code class="language-plaintext">aws configure
</code></pre>
<p>👉 It will ask for:</p>
<ul>
<li><p>AWS Access Key</p>
</li>
<li><p>Secret Key</p>
</li>
<li><p>Region</p>
</li>
<li><p>Output format</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/227acca6-c6fd-41e3-81d4-68cfbbfa7d19.png" alt="" style="display:block;margin:0 auto" />

<p>✅ After this setup, Terraform can interact with your AWS account.</p>
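<p>✅ To confirm the credentials actually work before touching Terraform, you can ask AWS who you are (a standard AWS CLI command):</p>
<pre><code class="language-plaintext">aws sts get-caller-identity
</code></pre>
<p>If it prints your account ID and ARN, Terraform will be able to authenticate with the same credentials.</p>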
<h3>🎯 Objective</h3>
<ul>
<li><p>Understand Terraform workflow</p>
</li>
<li><p>Learn core commands</p>
</li>
<li><p>Perform hands-on execution</p>
</li>
</ul>
<h3>🧱 Step 0: Create Terraform Configuration File</h3>
<p>👉 Create a file:</p>
<pre><code class="language-plaintext">main.tf
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/833d3314-bbbf-471d-a374-3e3e54180ccb.png" alt="" style="display:block;margin:0 auto" />

<p>👉 This file contains your Terraform infrastructure code</p>
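<p>📌 A minimal <code>main.tf</code> to follow along with (the AMI ID below is only an example and varies by region):</p>
<pre><code class="language-shell">provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "demo" {
  ami           = "ami-0ec10929233384c7f"   # example Ubuntu AMI
  instance_type = "t2.micro"

  tags = {
    Name = "Day4-Demo"
  }
}
</code></pre>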
<h3>⚙️ Step 1: Initialize Terraform</h3>
<p>🔹 Command:</p>
<pre><code class="language-plaintext">terraform init
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/6c1e4c93-6752-4a11-8486-550ed16328d9.png" alt="" style="display:block;margin:0 auto" />

<p>🔹 Purpose:</p>
<ul>
<li><p>Initializes working directory</p>
</li>
<li><p>Downloads required providers</p>
</li>
<li><p>Prepares environment</p>
</li>
</ul>
<h3>✅ Step 2: Validate Configuration</h3>
<p>🔹 Command:</p>
<pre><code class="language-plaintext">terraform validate
</code></pre>
<p>🔹 Purpose:</p>
<ul>
<li><p>Checks syntax of <code>.tf</code> files</p>
</li>
<li><p>Ensures configuration is valid</p>
</li>
</ul>
<p>👉 “Check if your code is correct”</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/ac81bc20-bf48-4de3-a6ba-48d0ab43242d.png" alt="" style="display:block;margin:0 auto" />

<h3>📊 Step 3: Review Execution Plan</h3>
<p>🔹 Command:</p>
<pre><code class="language-plaintext">terraform plan
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/25c9b0de-4dad-46d9-b7bb-6a9eeac196ad.png" alt="" style="display:block;margin:0 auto" />

<p>🔹 Purpose:</p>
<ul>
<li><p>Shows what Terraform will do</p>
</li>
<li><p>Lists resources to create/change/destroy</p>
</li>
<li><p>Works as a dry run</p>
</li>
</ul>
<p>👉 “Preview before execution”</p>
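<p>💡 You can also save the plan to a file and apply exactly that plan later, which guarantees the apply does only what the preview showed (both are standard Terraform options):</p>
<pre><code class="language-plaintext">terraform plan -out=tfplan
terraform apply tfplan
</code></pre>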
<h3>🚀 Step 4: Apply Configuration</h3>
<p>🔹 Command:</p>
<pre><code class="language-plaintext">terraform apply
</code></pre>
<p>🔹 Purpose:</p>
<ul>
<li><p>Executes the plan</p>
</li>
<li><p>Creates real infrastructure</p>
</li>
</ul>
<p>👉 Type <code>yes</code> to confirm</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/522bdc22-eeb6-4cb8-ad73-8a315cc9e218.png" alt="" style="display:block;margin:0 auto" />

<h3>🧨 Step 5: Destroy Infrastructure</h3>
<p>🔹 Command:</p>
<pre><code class="language-plaintext">terraform destroy
</code></pre>
<p>🔹 Purpose:</p>
<ul>
<li><p>Deletes all resources</p>
</li>
<li><p>Avoids unnecessary cloud cost</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/89dd273c-f629-4eb3-936e-de1527a54758.png" alt="" style="display:block;margin:0 auto" />

<h3>⚡ Auto-Approve Option</h3>
<pre><code class="language-plaintext">terraform apply -auto-approve
terraform destroy -auto-approve
</code></pre>
<p>👉 Skips confirmation</p>
<p>👉 Useful in automation (CI/CD)</p>
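<p>A typical non-interactive sequence in a CI/CD job looks like this (<code>-input=false</code> stops Terraform from prompting for missing values):</p>
<pre><code class="language-plaintext">terraform init -input=false
terraform plan -input=false -out=tfplan
terraform apply -input=false tfplan
</code></pre>
<p>Applying a saved plan file does not ask for confirmation, so no <code>-auto-approve</code> is needed here.</p>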
<h2><strong>👨‍💻 About the Author</strong></h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt="" style="display:block;margin:0 auto" />

<p>“A complete Terraform series covering everything from fundamentals to advanced real-world infrastructure automation in a DevOps environment.”</p>
<h3><strong>📬 Let's Stay Connected</strong></h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Terraform Series – Day 3]]></title><description><![CDATA[Terraform Blocks, Labels, and Arguments
In Day 2, we installed Terraform. Now, before writing real infrastructure code, we must understand how Terraform actually reads and executes configurations.
👉 E]]></description><link>https://apurv-gujjar.me/terraform-series-day-3</link><guid isPermaLink="true">https://apurv-gujjar.me/terraform-series-day-3</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><category><![CDATA[k8s]]></category><category><![CDATA[vscode extensions]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Fri, 10 Apr 2026 03:20:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/c9aa8dc9-d9dc-42da-82d3-d4d335b889ec.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Terraform Blocks, Labels, and Arguments</h3>
<p>In Day 2, we installed Terraform.<br />Now, before writing real infrastructure code, we must understand <strong>how Terraform actually reads and executes configurations</strong>.</p>
<p>👉 Every Terraform file is built using 3 core concepts:</p>
<ul>
<li><p>Blocks</p>
</li>
<li><p>Labels</p>
</li>
<li><p>Arguments</p>
</li>
</ul>
<h3>🔹 1. What is a Block?</h3>
<p>👉 A block is the <strong>main building unit</strong> in Terraform used to define infrastructure or configuration.</p>
<p>📌 Example:</p>
<pre><code class="language-shell">resource "aws_instance" "my_vm" {
}
</code></pre>
<p>👉 Meaning:</p>
<ul>
<li>Block tells Terraform <strong>what you want to create</strong></li>
</ul>
<p>📌 Common blocks:</p>
<ul>
<li><p><code>resource</code></p>
</li>
<li><p><code>provider</code></p>
</li>
<li><p><code>variable</code></p>
</li>
<li><p><code>output</code></p>
</li>
</ul>
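<p>📌 For comparison, here is what a <code>provider</code> block and a <code>variable</code> block look like (the values are illustrative):</p>
<pre><code class="language-shell">provider "aws" {
  region = "us-east-1"
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}
</code></pre>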
<h3>🔹 2. What are Labels?</h3>
<p>👉 Labels are <strong>identifiers of a block</strong> that define resource type and name.</p>
<p>📌 Example:</p>
<pre><code class="language-shell">resource "aws_instance" "my_vm" {
}
</code></pre>
<p>👉 Meaning:</p>
<ul>
<li><p><code>aws_instance</code> → resource type</p>
</li>
<li><p><code>my_vm</code> → resource name</p>
</li>
</ul>
<p>👉 Used to uniquely identify resources</p>
<h3>🔹 3. What are Arguments?</h3>
<p>👉 Arguments are <strong>key-value pairs inside a block</strong> used to configure the resource.</p>
<p>📌 Example:</p>
<pre><code class="language-shell">instance_type = "t2.micro"
</code></pre>
<p>👉 Meaning:</p>
<ul>
<li><p>Defines how the resource should be created</p>
</li>
<li><p>Controls behavior and properties</p>
</li>
</ul>
<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">🔗 Combined Example</mark></p>
<pre><code class="language-shell">resource "aws_instance" "my_vm" {
  instance_type = "t2.micro"
}
</code></pre>
<p>👉 Breakdown:</p>
<ul>
<li><p>Block → <code>resource</code></p>
</li>
<li><p>Labels → <code>aws_instance</code>, <code>my_vm</code></p>
</li>
<li><p>Argument → <code>instance_type</code></p>
</li>
</ul>
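<p>👉 The labels are also how other code refers to this resource. For example, an <code>output</code> block can read an attribute of <code>my_vm</code> after it is created:</p>
<pre><code class="language-shell">output "vm_public_ip" {
  value = aws_instance.my_vm.public_ip   # &lt;type&gt;.&lt;name&gt;.&lt;attribute&gt;
}
</code></pre>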
<h2><strong>👨‍💻 About the Author</strong></h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt="" style="display:block;margin:0 auto" />

<p>“A complete Terraform series covering everything from fundamentals to advanced real-world infrastructure automation in a DevOps environment.”</p>
<h3><strong>📬 Let's Stay Connected</strong></h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Terraform Series – Day 2]]></title><description><![CDATA[Terraform Setup on AWS EC2 & Local (Ubuntu & Windows)
In Day 1, we understood the fundamentals of Terraform and Infrastructure as Code (IaC). Now, in Day 2, we will set up Terraform in real environment]]></description><link>https://apurv-gujjar.me/terraform-series-day-2</link><guid isPermaLink="true">https://apurv-gujjar.me/terraform-series-day-2</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><category><![CDATA[Devops]]></category><category><![CDATA[ec2]]></category><category><![CDATA[k8s]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Fri, 10 Apr 2026 03:10:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/de599b1b-0056-4109-a6a1-fc82a585558c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Terraform Setup on AWS EC2 &amp; Local (Ubuntu &amp; Windows)</h3>
<p>In Day 1, we understood the fundamentals of Terraform and Infrastructure as Code (IaC).<br />Now, in Day 2, we will set up Terraform in real environments.</p>
<p>This guide covers installation on:</p>
<ul>
<li><p>AWS EC2 (Ubuntu)</p>
</li>
<li><p>Local Ubuntu Machine</p>
</li>
<li><p>Windows (using Chocolatey)</p>
</li>
</ul>
<h3>🎯 Objective</h3>
<ul>
<li><p>Install Terraform in different environments</p>
</li>
<li><p>Follow secure installation practices</p>
</li>
<li><p>Verify installation properly</p>
</li>
</ul>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">☁️ Part 1: Setup on AWS EC2 (Ubuntu)</mark></h3>
<h3>🧱 Step 1: Create EC2 Instance</h3>
<ul>
<li><p>Go to AWS EC2 Dashboard</p>
</li>
<li><p>Click <strong>Launch Instance</strong></p>
</li>
<li><p>Select:</p>
<ul>
<li><p>OS → Ubuntu</p>
</li>
<li><p>Instance Type → t2.micro (Free Tier)</p>
</li>
</ul>
</li>
<li><p>Create key pair</p>
</li>
<li><p>Launch instance</p>
</li>
</ul>
<h3>🔐 Step 2: Connect via SSH</h3>
<pre><code class="language-plaintext">ssh -i your-key.pem ubuntu@your-ec2-public-ip
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/6fc32909-6fcd-4f15-9950-c0047ddce544.png" alt="" style="display:block;margin:0 auto" />

<h3>📦 Step 3: Update System</h3>
<pre><code class="language-plaintext">sudo apt-get update &amp;&amp; sudo apt-get install -y gnupg software-properties-common
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/53f9989b-0108-4539-948d-f8b6a0584ead.png" alt="" style="display:block;margin:0 auto" />

<h3>🔑 Step 4: Add HashiCorp GPG Key</h3>
<pre><code class="language-plaintext">wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg &gt; /dev/null
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/74851202-a799-4aa3-a932-420e96edc12d.png" alt="" style="display:block;margin:0 auto" />

<h3>✅ Step 5: Verify GPG Key</h3>
<pre><code class="language-plaintext">gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/367597fb-f49b-46a7-8a2e-f95e40bc41e3.png" alt="" style="display:block;margin:0 auto" />

<p>👉 Ensures authenticity and security</p>
<h3>📁 Step 6: Add Repository</h3>
<pre><code class="language-plaintext">echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/9fed7f68-a778-4a1a-9a9b-d4942afdffa7.png" alt="" style="display:block;margin:0 auto" />

<h3>🔄 Step 7: Update Packages</h3>
<pre><code class="language-plaintext">sudo apt-get update
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/dd524921-cce0-4198-b4a6-13e9c7aa6ffd.png" alt="" style="display:block;margin:0 auto" />

<h3>⚙️ Step 8: Install Terraform</h3>
<pre><code class="language-plaintext">sudo apt-get install terraform
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/d8dedd16-9507-4a19-9486-b6029474a30c.png" alt="" style="display:block;margin:0 auto" />

<h3>🔍 Step 9: Verify Installation</h3>
<pre><code class="language-plaintext">terraform --version
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/fa254ee3-fb6a-40a0-a8d8-4d78f6416512.png" alt="" style="display:block;margin:0 auto" />

<h3>💻 Part 2: Local Installation (Ubuntu)</h3>
<p>👉 Follow the <strong>same steps as EC2</strong></p>
<p>No changes required — works exactly the same.</p>
<h3>🪟 Part 3: Terraform Installation on Windows</h3>
<h3>📦 Step 1: Install using Chocolatey</h3>
<pre><code class="language-plaintext">choco install terraform
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/64341917-39e1-4b4d-b2c2-76e83a44fb1f.png" alt="" style="display:block;margin:0 auto" />

<p>👉 Make sure <strong>Chocolatey is installed</strong> on your system</p>
<h3>🔍 Step 2: Verify Installation</h3>
<p>Open a new terminal (CMD/PowerShell):</p>
<pre><code class="language-plaintext">terraform -help
</code></pre>
<p>Expected output:</p>
<pre><code class="language-plaintext">Usage: terraform [global options] &lt;subcommand&gt; [args]
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/a5bbdf15-e299-49ff-8b38-aa625b9985fd.png" alt="" style="display:block;margin:0 auto" />

<p>👉 Shows all available Terraform commands</p>
<h3>⚡ Enable Autocomplete (Optional but Recommended)</h3>
<h3>🐧 Bash</h3>
<pre><code class="language-plaintext">touch ~/.bashrc
terraform -install-autocomplete
</code></pre>
<h3>🐚 Zsh</h3>
<pre><code class="language-plaintext">touch ~/.zshrc
terraform -install-autocomplete
</code></pre>
<p>👉 Restart terminal after this step</p>
<img src="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/70cab01c-dd28-4ad3-95e7-ae402a5a215a.png" alt="" style="display:block;margin:0 auto" />

<h3>📘 Alternative Installation Method</h3>
<p>You can also install Terraform using the official HashiCorp documentation depending on your OS and requirements.</p>
<p><a href="https://developer.hashicorp.com/terraform">Terraform official Documentation</a></p>
<h2><strong>👨‍💻 About the Author</strong></h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt="" style="display:block;margin:0 auto" />

<p>“A complete Terraform series covering everything from fundamentals to advanced real-world infrastructure automation in a DevOps environment.”</p>
<h3><strong>📬 Let's Stay Connected</strong></h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Terraform Series – Day 1]]></title><description><![CDATA[Introduction to Terraform & Infrastructure as Code (IaC)
In modern DevOps practices, managing infrastructure manually is no longer scalable or efficient. Organizations are rapidly shifting towards aut]]></description><link>https://apurv-gujjar.me/terraform-series-day-1</link><guid isPermaLink="true">https://apurv-gujjar.me/terraform-series-day-1</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><category><![CDATA[vscode extensions]]></category><category><![CDATA[ec2]]></category><category><![CDATA[k8s]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Fri, 10 Apr 2026 03:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/685cdc0d5ca95e55fac3ab09/f984cc72-6bbe-413b-8cba-76064fda703c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Introduction to Terraform &amp; Infrastructure as Code (IaC)</h3>
<p>In modern DevOps practices, managing infrastructure manually is no longer scalable or efficient. Organizations are rapidly shifting towards <strong>automation and Infrastructure as Code (IaC)</strong> to ensure consistency, speed, and reliability.</p>
<p>This is where <strong>Terraform</strong> plays a crucial role.</p>
<h3>📌 What is Terraform?</h3>
<p>Terraform is an <strong>Infrastructure as Code (IaC)</strong> tool developed by HashiCorp that enables you to define, provision, and manage infrastructure using code.</p>
<ul>
<li><p>Uses <strong>HCL (HashiCorp Configuration Language)</strong></p>
</li>
<li><p>Follows a <strong>declarative approach</strong></p>
</li>
<li><p>Automates infrastructure lifecycle</p>
</li>
</ul>
<p>👉 Instead of manually creating resources, you define them in code and Terraform handles the execution.</p>
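<p>For example, instead of clicking through the AWS console, a server is declared in a few lines of HCL (the AMI ID below is only a placeholder):</p>
<pre><code class="language-shell"># "What I want", not "how to create it"
resource "aws_instance" "web" {
  ami           = "ami-xxxxxxxx"   # placeholder AMI ID
  instance_type = "t2.micro"
}
</code></pre>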
<h3>🏢 About HashiCorp</h3>
<p>HashiCorp is a technology company focused on building tools for:</p>
<ul>
<li><p>Infrastructure automation</p>
</li>
<li><p>Security management</p>
</li>
<li><p>Application deployment</p>
</li>
</ul>
<p>👉 Founded in <strong>2014</strong><br />👉 By <strong>Mitchell Hashimoto</strong> and <strong>Armon Dadgar</strong></p>
<p>Some popular tools by HashiCorp:</p>
<ul>
<li><p>Terraform</p>
</li>
<li><p>Vault</p>
</li>
<li><p>Consul</p>
</li>
<li><p>Nomad</p>
</li>
</ul>
<h3>⚡ Why Terraform Matters in DevOps</h3>
<p>In real-world environments, infrastructure needs to be:</p>
<ul>
<li><p><strong>Consistent</strong> across environments</p>
</li>
<li><p><strong>Scalable</strong> based on demand</p>
</li>
<li><p><strong>Repeatable</strong> without errors</p>
</li>
<li><p><strong>Automated</strong> to reduce manual effort</p>
</li>
</ul>
<p>Terraform enables all of this by:</p>
<ul>
<li><p>Eliminating manual provisioning</p>
</li>
<li><p>Enforcing infrastructure consistency</p>
</li>
<li><p>Supporting version control (Git-based workflows)</p>
</li>
<li><p>Enabling multi-cloud deployments</p>
</li>
</ul>
<h3>🌍 Real-World DevOps Scenario</h3>
<p>Consider a company managing multiple environments:</p>
<ul>
<li><p><strong>Development</strong></p>
</li>
<li><p><strong>Testing</strong></p>
</li>
<li><p><strong>Production</strong></p>
</li>
</ul>
<p>Each environment requires:</p>
<ul>
<li><p>Virtual Machines</p>
</li>
<li><p>Networking setup</p>
</li>
<li><p>Load balancing</p>
</li>
</ul>
<h3>❌ Without Terraform</h3>
<ul>
<li><p>Manual configuration</p>
</li>
<li><p>Time-consuming process</p>
</li>
<li><p>High probability of human errors</p>
</li>
<li><p>Difficult to maintain consistency</p>
</li>
</ul>
<h3>✅ With Terraform</h3>
<ul>
<li><p>Define infrastructure once</p>
</li>
<li><p>Reuse configurations across environments</p>
</li>
<li><p>Deploy with a single command</p>
</li>
<li><p>Modify using variables and version control</p>
</li>
</ul>
<p>👉 Result: <strong>Faster, reliable, and scalable infrastructure management</strong></p>
<h3>⚔️ Terraform vs Other Tools</h3>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">🔹 Terraform vs Ansible</mark></h3>
<p><strong>Terraform</strong></p>
<ul>
<li><p>Focus: Infrastructure provisioning</p>
</li>
<li><p>Creates resources such as:</p>
<ul>
<li><p>Virtual Machines</p>
</li>
<li><p>Networks</p>
</li>
<li><p>Load Balancers</p>
</li>
</ul>
</li>
</ul>
<p><strong>Ansible</strong></p>
<ul>
<li><p>Focus: Configuration management</p>
</li>
<li><p>Handles:</p>
<ul>
<li><p>Software installation</p>
</li>
<li><p>System updates</p>
</li>
<li><p>Application setup</p>
</li>
</ul>
</li>
</ul>
<p>👉 In practice:<br />Terraform → <em>Creates infrastructure</em><br />Ansible → <em>Configures infrastructure</em></p>
<h3><mark class="bg-yellow-200 dark:bg-yellow-500/30">🔹 Terraform vs AWS CloudFormation</mark></h3>
<p><strong>Terraform</strong></p>
<ul>
<li><p>Supports multiple cloud providers:</p>
<ul>
<li><p>AWS</p>
</li>
<li><p>Azure</p>
</li>
<li><p>GCP</p>
</li>
</ul>
</li>
<li><p>Enables <strong>multi-cloud strategy</strong></p>
</li>
</ul>
<p><strong>AWS CloudFormation</strong></p>
<ul>
<li>Limited to <strong>AWS ecosystem only</strong></li>
</ul>
<p>👉 Terraform provides <strong>flexibility</strong>, while CloudFormation is <strong>AWS-specific</strong></p>
<h3>🧠 Key Takeaways</h3>
<ul>
<li><p>Terraform is a core DevOps tool for <strong>Infrastructure as Code</strong></p>
</li>
<li><p>It replaces manual infrastructure setup with <strong>automated workflows</strong></p>
</li>
<li><p>Supports <strong>multi-cloud environments</strong></p>
</li>
<li><p>Ensures <strong>consistency, scalability, and efficiency</strong></p>
</li>
</ul>
<h2><strong>👨‍💻 About the Author</strong></h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt="" style="display:block;margin:0 auto" />

<p>“A complete Terraform series covering everything from fundamentals to advanced real-world infrastructure automation in a DevOps environment.”</p>
<h3><strong>📬 Let's Stay Connected</strong></h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[“My DevOps Interview Experience & Questions (2025)”]]></title><description><![CDATA[DevOps is one of the fastest-growing and most in-demand skills in the IT world 🌍, especially for beginners and cloud engineers 🚀. After learning DevOps tools ⚙️ and working on practical projects 💡, I recently attended two DevOps job interviews 🎯....]]></description><link>https://apurv-gujjar.me/my-devops-interview-experience-and-questions-2025</link><guid isPermaLink="true">https://apurv-gujjar.me/my-devops-interview-experience-and-questions-2025</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[cicd]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Sun, 21 Dec 2025 14:41:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766327936838/aa82218c-219b-4817-9669-ba174a05d957.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>DevOps is one of the fastest-growing and most in-demand skills in the IT world 🌍, especially for beginners and cloud engineers 🚀. After learning DevOps tools ⚙️ and working on practical projects 💡, I recently attended two DevOps job interviews 🎯. In this blog, I’m excited to share my real interview experience 📘, including the questions asked ❓, the topics covered 📌, and the key areas interviewers focused on 👀. My goal is to help beginners understand the real interview environment 💼, build confidence 💪, and get a clear idea of what companies truly expect from DevOps candidates 🌟.</p>
<details><summary>📝 Note:</summary><div data-type="detailsContent">All the interview answers shared in this blog are based on my own experience and the responses I gave during my interviews. If you feel any answer needs correction or improvement, please update it as per your understanding. The purpose of this blog is to share real DevOps interview-learning experiences, not to claim that every answer is perfect. 😊</div></details>

<p>🟢 <strong><mark>Interview Question 1: “Tell me about yourself.”</mark></strong></p>
<p><strong>Answer I Gave :-</strong></p>
<p>“Good morning/afternoon, and <strong>thank you for giving me this opportunity</strong> to introduce myself.</p>
<p>My name is <strong>Gujjar Apurv</strong>, and I am a final-year BE student in Electronics and Communication Engineering. Over the past two years, I have been actively learning and practicing <mark>DevOps and Cloud technologies</mark> with strong hands-on experience. To validate my technical skills, I appeared for and successfully cleared the <mark>AWS Certified Developer – Associate exam</mark>, which has strengthened my foundation in cloud concepts and AWS services.</p>
<p>Recently, I completed a Cloud Internship at Corextech IT Services, where I gained practical experience with <mark>AWS services such as EC2, VPC, IAM, RDS, Route 53, AWS Lambda, S3,</mark> and many more. During this internship, I assisted in the <mark>deployment and monitoring</mark> of cloud-based systems. Based on my performance, the company assigned me an additional one-month responsibility to work on their live projects.</p>
<p>I have worked on <mark>multiple hands-on projects</mark>, although I have highlighted only two major ones in my resume.</p>
<p>The first project is the <strong>Netflix Clone Project</strong>, where I built a complete CI/CD pipeline using Jenkins, Docker, GitHub Actions, monitoring tools, and security integrations. I also implemented DevSecOps practices with live monitoring.</p>
<p>The second project is an <strong>AWS Three-Tier Secure Application</strong>, where I designed a scalable architecture using EC2, VPC, Subnets, RDS, Route Tables, ALB, and various other AWS components.</p>
<p>Along with technical work, I have also written <mark>technical blogs </mark> to explain my projects and tools, which helped me demonstrate my <mark>technical understanding and documentation skills.</mark></p>
<p>These experiences have helped me build confidence in automation, deployment, cloud monitoring, Linux server handling, and scripting skills that are essential in the DevOps industry.</p>
<p>Thank you.”</p>
<p>🟢 <strong><mark>Interview Question 2: “Why did you choose DevOps, and what is DevOps?”</mark></strong></p>
<p>“I chose DevOps because I enjoy automation, cloud technologies, and improving system efficiency. DevOps matches my problem-solving mindset and gives me the opportunity to work end-to-end, building fast, scalable, and reliable environments.”</p>
<p>“DevOps is a set of practices that helps teams build and deploy software faster using automation. It reduces errors, improves delivery speed, and enhances collaboration between development and operations teams.”</p>
<p>🟢 <strong><mark>Interview Question 3: “Give one DevOps example which is related to the industry.”</mark></strong></p>
<p>“For example, when a developer pushes code to GitHub, a CI/CD pipeline automatically builds, tests, and deploys the application to the server. This removes manual steps, reduces errors, and speeds up software delivery.”</p>
<p>🟢 <strong><mark>Interview Question 4: “What is your strength and weakness?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />✅ <strong>Strength:</strong><br />“My strength is that I am a quick learner and I adapt to new technologies quickly.”<br />⚠️ <strong>Weakness:</strong><br />“My weakness is that I am detail-oriented, and sometimes I over-polish my work.”</p>
<p><strong>🟢 <mark>Interview Question 5: “What is Cloud vs Cloud Computing?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p><strong>☁️ Cloud:</strong></p>
<p>“The cloud is a platform that <mark>stores and analyzes data</mark> over the internet instead of using a local system.”</p>
<p><strong>💻 Cloud Computing:</strong></p>
<p>“Cloud computing provides the <mark>delivery of computing resources</mark> like servers, storage, databases, and networking over the internet, without managing physical hardware.”</p>
<p><strong>🟢 <mark>Interview Question 6: “Types of Cloud?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<ul>
<li><p><strong>Public Cloud:</strong> Open for all users publicly over the internet.<br />  <strong>Example:</strong> AWS, Azure, Google Cloud</p>
</li>
<li><p><strong>Private Cloud:</strong> Not publicly accessible; used by a specific organization.<br />  <strong>Example:</strong> VMware, OpenStack</p>
</li>
<li><p><strong>Hybrid Cloud:</strong> A combination of public and private cloud models.<br />  <strong>Example:</strong> AWS + VMware integration</p>
</li>
</ul>
<p><strong><mark>🟢 Interview Question 7: (common to both interviews)</mark></strong></p>
<p><strong><mark>“What are Cloud Service Models, and how do they compare with AWS services?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>Cloud services are divided into <strong>three main models</strong>:</p>
<p><strong>1️⃣ IaaS – Infrastructure as a Service</strong></p>
<p>Provides virtual infrastructure like servers, storage, and networking resources.</p>
<p><strong>AWS Examples:</strong></p>
<ul>
<li><p>EC2 (Compute)</p>
</li>
<li><p>EBS (Storage)</p>
</li>
<li><p>S3 (Object Storage)</p>
</li>
<li><p>VPC (Networking)</p>
</li>
<li><p>ELB (Load Balancing)</p>
</li>
</ul>
<p><strong>2️⃣ PaaS – Platform as a Service</strong></p>
<p>Provides a platform to build, run, and manage applications without managing infrastructure.</p>
<p><strong>AWS Examples:</strong></p>
<ul>
<li><p>RDS (Database Platform)</p>
</li>
<li><p>Lambda (Serverless Platform)</p>
</li>
<li><p>ECS / EKS (Container Platforms)</p>
</li>
</ul>
<p><strong>3️⃣ SaaS – Software as a Service</strong></p>
<p>Provides ready-to-use software or tools on the cloud.</p>
<p><strong>AWS Examples:</strong></p>
<ul>
<li><p>CloudWatch (Monitoring Tool)</p>
</li>
<li><p>Amazon WorkMail (Email Service)</p>
</li>
<li><p>AWS Chime (Communication Service)</p>
</li>
</ul>
<p>🟢 <strong><mark>Interview Question 8: “What protocols do you know in cloud networking?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>I mentioned the basic cloud networking protocols and explained each in one line with examples:</p>
<p><strong>1️⃣ HTTP / HTTPS</strong></p>
<p>Used for web communication between client and server.<br /><strong>Example:</strong> Accessing websites or APIs through browsers.</p>
<p><strong>2️⃣ SMTP</strong></p>
<p>Used for sending emails over the internet.<br /><strong>Example:</strong> Email delivery systems and notification services.</p>
<p><strong>3️⃣ DNS</strong></p>
<p>Used to translate domain names into IP addresses.<br /><strong>Example:</strong> Creating DNS records on Route 53.</p>
<p><strong>4️⃣ TCP / UDP</strong></p>
<p>Transport layer protocols used for sending data packets between devices.<br /><strong>Example:</strong> SSH uses TCP; streaming uses UDP.</p>
<p><strong>5️⃣ POP3 (</strong><code>Post Office Protocol version 3</code>)</p>
<p>Used to receive emails from a mail server.<br /><strong>Example:</strong> Inbox downloading in email clients.</p>
<p><strong><mark>🟢 Interview Question 9: “How will you host a website on AWS? Explain for both static and dynamic websites.”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>I asked whether the website is static or dynamic. The interviewer said both, so I explained each case separately:</p>
<p><strong>1️⃣ Static Website Hosting on AWS</strong></p>
<p>For a static website, I would:</p>
<ul>
<li><p>Use <strong>S3</strong> to store static files like HTML, CSS, JS.</p>
</li>
<li><p>Enable <strong>Static Website Hosting</strong> in S3.</p>
</li>
<li><p>Use <strong>CloudFront</strong> CDN for caching and global delivery.</p>
</li>
<li><p>Use <strong>Route 53</strong> for DNS mapping to a domain name.</p>
</li>
</ul>
<p>👉 Result: Fast, secure, scalable static website hosting.</p>
<p><strong>2️⃣ Dynamic Website Hosting on AWS</strong></p>
<p>For a dynamic website, I would:</p>
<ul>
<li><p>Use <strong>EC2</strong> instance to run the backend application.</p>
</li>
<li><p>Use <strong>Load Balancer (ALB/ELB)</strong> to distribute traffic.</p>
</li>
<li><p>Use <strong>Auto Scaling Group</strong> to handle traffic load.</p>
</li>
<li><p>Use <strong>RDS</strong> database for storage.</p>
</li>
<li><p>Use <strong>VPC</strong> for secure networking.</p>
</li>
<li><p>Use <strong>Route 53</strong> to map the domain.</p>
</li>
</ul>
<p>👉 Result: Highly available, secure, and scalable dynamic website architecture.</p>
<p><strong><mark>🟢 Interview Question 10: (IMP)</mark></strong></p>
<p><mark>“Difference Between static and dynamic hosting, why is it needed, and how does it work?”</mark></p>
<p><strong>Answer I Gave:</strong></p>
<p>I explained that DNS is used in <strong>both static and dynamic hosting</strong>, because in both cases users access the website through a domain name, and DNS maps that domain to the correct server or endpoint.</p>
<p><strong><mark>Static website hosting</mark></strong> serves fixed content directly from <strong>S3 via CloudFront</strong>, and DNS points to the CloudFront distribution.<br /><strong><mark>Dynamic website hosting</mark></strong> generates real-time content using <strong>EC2 (behind ALB) with a database like RDS</strong>, and DNS points to the ALB.</p>
<p><strong>📌 Why DNS is needed?</strong></p>
<p>DNS <mark>converts a domain name into an IP address or endpoint URL </mark> so <mark>browsers can locate the server or hosting resource.</mark></p>
<p><strong>📌 Type of DNS Records I Mentioned:</strong></p>
<h4 id="heading-1-a-record-address-record"><strong>1️⃣ A Record (Address Record)</strong></h4>
<p>Maps a domain to an IP address.</p>
<ul>
<li><p>In static hosting → usually an Alias A record pointing to the CloudFront distribution, since CloudFront does not expose a fixed IP.</p>
</li>
<li><p>In dynamic hosting → points to EC2 or Load Balancer IP.</p>
</li>
</ul>
<p><strong>Example:</strong><br /><a target="_blank" href="http://example.com"><code>example.com</code></a> <code>→ 54.xx.xx.xx (EC2 public IP / LB IP)</code></p>
<h4 id="heading-2-cname-canonical-name-record"><strong>2️⃣ CNAME (Canonical Name Record)</strong></h4>
<p>Maps one domain name to another domain name.<br />Used for endpoints instead of IP.</p>
<p><strong>Example:</strong><br /><a target="_blank" href="http://www.example.com"><code>www.example.com</code></a> <code>→</code> <a target="_blank" href="http://xyz.cloudfront.net"><code>xyz.cloudfront.net</code></a><br /><a target="_blank" href="http://app.example.com"><code>app.example.com</code></a> <code>→</code> <a target="_blank" href="http://loadbalancer.aws.com"><code>loadbalancer.aws.com</code></a></p>
<p><strong>📌 Step-by-Step DNS Workflow I Explained:</strong></p>
<p>1️⃣ User types domain name in browser<br />2️⃣ DNS checks domain record<br />3️⃣ If <strong>A Record</strong> → returns server IP<br />4️⃣ If <strong>CNAME</strong> → returns mapped domain → then endpoint resolves to IP<br />5️⃣ Browser connects to that IP<br />6️⃣ Website loads from static bucket / LB / EC2</p>
<p><strong>🟢 <mark>Interview Question 11:</mark></strong></p>
<p><mark>“If you have one EC2 instance running a website, can you host it using S3 or Route 53? If yes, how? What IP and record type will you assign?”</mark></p>
<p><strong>Answer I Gave:</strong></p>
<p>“Yes, a website running on a single EC2 instance can be hosted using <mark>Route 53. </mark> Since it is a <mark>dynamic website</mark>, I will assign an <mark>Elastic IP</mark> to the EC2 instance and create an <mark>A record in Route 53 </mark> pointing the <mark>domain to that Elastic IP</mark>. S3 is not required because EC2 is directly serving the application.”</p>
<p><strong><mark>🟢 Interview Question 12:</mark></strong></p>
<p><mark>“How do you create an EC2 instance? What are inbound/outbound rules? Can you launch multiple EC2s at once?”</mark></p>
<p><strong>Answer I Gave:</strong></p>
<p>✔️ <strong>How to create EC2:</strong></p>
<p>“Select AMI → select instance type → configure storage &amp; network → add security group rules → launch → SSH connect.”</p>
<p>✔️ <strong>Inbound Rules:</strong></p>
<p>“Allow traffic coming into EC2, like HTTP (80), HTTPS (443), SSH (22).”</p>
<p>✔️ <strong>Outbound Rules:</strong></p>
<p>“Allow traffic going out from EC2, usually open to all by default.”</p>
<p>✔️ <strong>Multiple EC2 instances:</strong></p>
<p>“Yes, we can launch multiple instances together by increasing instance count during launch or using Auto Scaling.”</p>
<p><strong><mark>🟢 Interview Question 13:</mark></strong></p>
<p><mark>“What is VPC?”</mark></p>
<p><strong>Short Answer I Gave:-</strong></p>
<p>“VPC stands for Virtual Private Cloud, and it is used as a secure and isolated network in AWS.”</p>
<p><strong><mark>🟢 Interview Question 14:</mark></strong></p>
<p><mark>“How did you implement your 3-tier architecture project?”</mark></p>
<p><strong>Short Answer I Gave:</strong></p>
<p>“First, I created a custom VPC. Then I created one public and two private subnets. After that, I attached an Internet Gateway for public access and configured a NAT Gateway for private subnet outbound access. Next, I configured route tables for public and private subnets. Then I launched EC2 instances for the web tier in the public subnet and app and database tiers in private subnets. Finally, I configured security groups to allow only required communication between the tiers.”</p>
<p><strong><mark>🟢 Interview Question 15:</mark></strong></p>
<p><mark>“If your server gets millions of traffic, how will you handle it?”</mark></p>
<p><strong>Short Answer I Gave:</strong></p>
<p>“To handle high traffic, I will use a Load Balancer to distribute requests across multiple servers. If the application needs more scalability and fault tolerance, I can migrate to Kubernetes, which automatically manages scaling, load distribution, and high availability.”</p>
<p><strong><mark>🟢 Interview Question 16:</mark></strong></p>
<p><mark>“Which subnets use inbound and outbound rules, and what are inbound and outbound rules?”</mark></p>
<p><strong>Short Answer I Gave:-</strong></p>
<p><strong>Inbound Rules:</strong><br />“Allow traffic coming into the instance like HTTP (80), HTTPS (443), SSH (22).”</p>
<p><strong>Outbound Rules:</strong><br />“Allow traffic going out from the instance usually open for internet access or updates.”</p>
<p><strong>Subnet Preference:</strong><br />“Public subnets allow inbound internet traffic; private subnets allow inbound only from internal networks.”</p>
<p><strong><mark>🟢 Interview Question 17:</mark></strong></p>
<p><mark>“You have 1 VPC with 3 subnets (Subnet-1 Public, Subnet-2 Private, Subnet-3 Private). Subnet-1 has 3 EC2 instances and other subnets have 1 EC2 each. If you want EC2s inside the public subnet to communicate with each other, will you use Internet Gateway? If yes/no, why? And how will subnet communication happen?”</mark></p>
<p><strong>Answer I Gave:</strong></p>
<p>✔️ Communication inside same subnet:</p>
<p>“No, Internet Gateway is not required because instances in the same subnet can communicate with each other privately using internal IPs.”</p>
<p>✔️ Communication between different subnets:</p>
<p>“To communicate across Subnet-1, Subnet-2, and Subnet-3, I will use the same VPC routing. All subnets inside a VPC are already connected by default, so instances can communicate using private IPs without Internet Gateway.”</p>
<p>✔️ Why Internet Gateway is not needed:</p>
<p>“Internet Gateway is only needed for public internet access, not for internal VPC communication.”</p>
<p>✔️ How Subnet-1 will communicate with Subnet-2 &amp; Subnet-3:</p>
<p>“Since all subnets are in the same VPC, VPC routing table already allows internal communication. So EC2s can talk to each other using private IPs. No VPC peering or Internet Gateway is required.”</p>
<p><strong><mark>🟢 Interview Question 18:</mark></strong></p>
<p><mark>“If each subnet is in a different VPC, how will you make EC2 instances communicate with each other?”</mark></p>
<p><strong>Short Answer I Gave:</strong></p>
<p>“If each subnet is in a different VPC, I will use VPC Peering or Transit Gateway, then update route tables and security groups so EC2 instances can communicate privately without using the Internet Gateway.”</p>
<p><strong><mark>🟢 Interview Question 19:</mark></strong></p>
<p><mark>“In VPC communication, do we use Internet Gateway or NAT Gateway for communication? Why not? And why do we use VPC Peering or Transit Gateway instead?”</mark></p>
<p><strong>Short Answer I Gave:</strong></p>
<ul>
<li><p><strong>We don’t use Internet Gateway or NAT Gateway for VPC-to-VPC communication</strong>, because they are used for internet access, not internal private communication.</p>
</li>
<li><p>For connecting VPCs, we use <strong>VPC Peering</strong> or <strong>Transit Gateway</strong> because they allow <strong>private, secure communication</strong> between VPC networks.</p>
</li>
</ul>
<p><strong><mark>🟢 Interview Question 20:</mark></strong></p>
<p><mark>“What is internet access in EC2, why do we assign a public subnet, is EC2 public by default, and how do you access it?”</mark></p>
<p><strong>Answer I Gave:</strong></p>
<p>✔️ <strong>What is internet access in EC2?</strong></p>
<p>“Internet access means the EC2 instance can send and receive traffic from the internet using a public IP and Internet Gateway.”</p>
<p>✔️ <strong>Why do we assign a public subnet?</strong></p>
<p>“To give EC2 internet access through the Internet Gateway. Public subnet allows the instance to be reachable from the outside network.”</p>
<p>✔️ <strong>Is EC2 public by default?</strong></p>
<p>“No. EC2 becomes public only if:</p>
<ul>
<li><p>Public IP is enabled</p>
</li>
<li><p>Subnet route points to Internet Gateway</p>
</li>
<li><p>Security group allows traffic”</p>
</li>
</ul>
<p>✔️ <strong>How do we access EC2?</strong></p>
<p>“We can access EC2 using SSH with a public IP and key pair, or through EC2 Instance Connect from the AWS console.”</p>
<p><strong><mark>🟢 Interview Question 21:</mark></strong></p>
<p><strong><mark>“What is a Region in AWS? Explain with an example. What is an Availability Zone?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>✔️ <strong>What is Region in AWS?</strong></p>
<p>“A Region is a physical geographical location where AWS data centers are hosted.”</p>
<p><strong>Example:</strong></p>
<ul>
<li><p>Mumbai Region → <code>ap-south-1</code></p>
</li>
<li><p>North Virginia Region → <code>us-east-1</code></p>
</li>
</ul>
<p>✔️ <strong>What is Availability Zone (AZ)?</strong></p>
<p>“AZ is a group of isolated data centers inside a region that work together to provide high availability.”</p>
<p><strong>Example:</strong></p>
<ul>
<li><p>Mumbai Region has AZs like:</p>
<ul>
<li><p><code>ap-south-1a</code></p>
</li>
<li><p><code>ap-south-1b</code></p>
</li>
<li><p><code>ap-south-1c</code></p>
</li>
</ul>
</li>
</ul>
<p><strong><mark>🟢 Interview Question 22:</mark></strong></p>
<p><strong><mark>“What is an IAM Role and how do you use it?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p><strong>What is IAM Role?</strong><br />“An IAM Role gives permission to AWS services or users without needing access keys.”</p>
<p><strong>How to use it?</strong><br />“Attach the role to EC2, Lambda, or any service to allow secure resource access.”</p>
<p><strong><mark>🟢 Interview Question 23:</mark></strong></p>
<p><strong><mark>“Which AWS service works in a serverless model?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>“AWS Lambda is a serverless compute service that runs code without managing servers. You only upload code, and Lambda handles scaling automatically.”</p>
<p><strong><mark>✅ Linux Commands for DevOps – Quick Reference Table</mark></strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Command</td><td>Use Case (Short Description)</td></tr>
</thead>
<tbody>
<tr>
<td><strong>ls</strong></td><td>Show list of files &amp; folders</td></tr>
<tr>
<td><strong>cd</strong></td><td>Change directory</td></tr>
<tr>
<td><strong>pwd</strong></td><td>Show current directory path</td></tr>
<tr>
<td><strong>cp</strong></td><td>Copy files/folders</td></tr>
<tr>
<td><strong>mv</strong></td><td>Move or rename files/folders</td></tr>
<tr>
<td><strong>rm -rf</strong></td><td>Delete files or folders</td></tr>
<tr>
<td><strong>find</strong></td><td>Search files by name</td></tr>
<tr>
<td><strong>grep</strong></td><td>Search text inside files</td></tr>
<tr>
<td><strong>chmod</strong></td><td>Change file permissions</td></tr>
<tr>
<td><strong>chown</strong></td><td>Change file ownership</td></tr>
<tr>
<td><strong>df -h</strong></td><td>Check disk storage usage</td></tr>
<tr>
<td><strong>du -sh</strong></td><td>Check folder size</td></tr>
<tr>
<td><strong>top</strong></td><td>View live CPU + memory usage</td></tr>
<tr>
<td><strong>free -h</strong></td><td>Check RAM usage</td></tr>
<tr>
<td><strong>ps -ef</strong></td><td>Check running processes</td></tr>
<tr>
<td><strong>kill -9 PID</strong></td><td>Stop/kill stuck process</td></tr>
<tr>
<td><strong>tail -f</strong></td><td>Read live logs</td></tr>
<tr>
<td><strong>ping IP/Domain</strong></td><td>Test network connectivity</td></tr>
<tr>
<td><strong>netstat -tulnp</strong></td><td>Check open ports</td></tr>
<tr>
<td><strong>ss -tulnp</strong></td><td>Check listening ports</td></tr>
<tr>
<td><strong>yum install pkg</strong></td><td>Install package (RHEL/CentOS)</td></tr>
<tr>
<td><strong>apt install pkg</strong></td><td>Install package (Ubuntu/Debian)</td></tr>
<tr>
<td><strong>scp</strong></td><td>Secure file transfer</td></tr>
<tr>
<td><strong>curl / wget</strong></td><td>Download files</td></tr>
<tr>
<td><strong>tar -cvf</strong></td><td>Create archive (add <strong>-z</strong> for gzip compression)</td></tr>
<tr>
<td><strong>tar -xvf</strong></td><td>Extract archive</td></tr>
<tr>
<td><strong>systemctl start/stop/restart</strong></td><td>Manage services</td></tr>
<tr>
<td><strong>ssh -i key.pem user@ip</strong></td><td>Connect to remote server</td></tr>
<tr>
<td><strong>uname -a</strong></td><td>Show kernel details</td></tr>
<tr>
<td><strong>hostnamectl</strong></td><td>Show hostname info</td></tr>
<tr>
<td><strong>ip a / ifconfig</strong></td><td>Show IP details</td></tr>
<tr>
<td><strong>history</strong></td><td>Show executed commands</td></tr>
<tr>
<td><strong>whoami</strong></td><td>Show current user</td></tr>
</tbody>
</table>
</div><p><strong><mark>🟢 Interview Question 24:</mark></strong></p>
<p><strong><mark>“What is the difference between Virtual Machines and Docker?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><mark>VM</mark></td><td><mark>Docker</mark></td></tr>
</thead>
<tbody>
<tr>
<td>Heavy</td><td>Lightweight</td></tr>
<tr>
<td>Full OS required</td><td>Shares host OS</td></tr>
<tr>
<td>Slow startup</td><td>Fast startup</td></tr>
<tr>
<td>High resources used</td><td>Low resources used</td></tr>
<tr>
<td>Strong isolation</td><td>Process-level isolation</td></tr>
</tbody>
</table>
</div><p><strong>Short Answer:</strong><br />“VMs are heavy and slow to start because each one runs a full OS; Docker is fast and lightweight because containers share the host OS kernel.”</p>
<p><strong><mark>🟢 Interview Question 25:</mark></strong></p>
<p><strong><mark>“What is Docker and why do we use it?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“Docker is a tool used to create, run, and manage containers for applications. It helps applications run faster, improves portability, and makes deployment easy.”</p>
<p><strong><mark>🟢 Interview Question 26:</mark></strong></p>
<p><strong><mark>“What is a Docker Image and what is a Docker Container?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Docker Image</td><td>Docker Container</td></tr>
</thead>
<tbody>
<tr>
<td>Blueprint/template of application</td><td>Running instance of the image</td></tr>
<tr>
<td>Read-only</td><td>Live, executable environment</td></tr>
<tr>
<td>Stored in registry</td><td>Runs on host machine</td></tr>
</tbody>
</table>
</div><p><strong><mark>Short Answer:</mark></strong><br />“A Docker image is a blueprint, and a container is the running instance created from that image.”</p>
<p><strong><mark>🟢 Interview Question 27:</mark></strong></p>
<p><strong><mark>“If Docker is available, why do we use Kubernetes? What is the difference?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p><strong>Docker:</strong><br />“Docker is used to create and run containers.”</p>
<p><strong>Kubernetes:</strong><br />“Kubernetes is used to manage and scale containers automatically across multiple servers.”</p>
<p><strong>Short Difference:</strong><br />“Docker runs containers. Kubernetes manages many containers.”</p>
<p><strong><mark>🟢 Interview Question 28:</mark></strong></p>
<p><strong><mark>“What is a Dockerfile? Give example and build command.”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“A Dockerfile is a script that contains instructions to build a Docker image automatically.”</p>
<p><strong>Example Dockerfile:</strong></p>
<pre><code class="lang-dockerfile">FROM python:3.9
COPY app.py .
CMD ["python", "app.py"]
</code></pre>
<p><strong>Build Command:</strong></p>
<pre><code class="lang-bash">docker build -t app .
</code></pre>
<blockquote>
<p><strong><mark>Here <code>app</code> is the image name, set by the <code>-t</code> (tag) flag</mark></strong></p>
<h3 id="heading-one-line-logic"><strong>🟢 One Line Logic:</strong></h3>
</blockquote>
<p><strong><mark>“Dockerfile creates the image, and Docker Compose runs the image.”</mark></strong></p>
<p><strong>🟢 Interview Question 29:</strong></p>
<p><strong>“What is Docker Compose and how does it work? Give example.”</strong></p>
<p><strong>Answer I Gave:</strong><br />“Docker Compose is a tool used to run multiple containers together using a single YAML file. It helps manage multi-container applications easily.”</p>
<p><strong>📌 How it works:</strong></p>
<ul>
<li><p>We write all container details inside <code>docker-compose.yml</code></p>
</li>
<li><p>Then run all containers with one command</p>
</li>
</ul>
<p><strong>📌 Example:</strong></p>
<p><strong>docker-compose.yml:</strong></p>
<pre><code class="lang-yaml">version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
</code></pre>
<p><strong>📌 Command:</strong></p>
<pre><code class="lang-bash">docker-compose up -d
</code></pre>
<p><strong><mark>🟢 Interview Question 30:</mark></strong></p>
<p><strong><mark>“If your system gets millions of traffic and you don’t want to manage EC2 manually, what will you use?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“I will use Kubernetes because it automatically handles scaling, deployment, traffic distribution, self-healing, and node management. It creates and manages containers without manually launching EC2 instances.”</p>
<p><strong><mark>🟢 Interview Question 31:</mark></strong></p>
<p><strong><mark>“What is Kubernetes?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“Kubernetes is a container orchestration tool that automates deployment, scaling, and management of containerized applications.”</p>
<p><strong><mark>🟢 Interview Question 32:</mark></strong></p>
<p><strong><mark>“What are Pod, Deployment, and Service in Kubernetes?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>✔️ <strong>Pod:</strong></p>
<p>“Pod is the smallest unit in Kubernetes that runs one or more containers.”</p>
<p>✔️ <strong>Deployment:</strong></p>
<p>“Deployment manages multiple Pods and enables easy updates, scaling, and rollback.”</p>
<p>✔️ <strong>Service:</strong></p>
<p>“Service exposes Pods to the network so applications can communicate internally or externally.”</p>
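<p>The three objects above fit together in a single manifest. Below is a minimal sketch (the names, the <code>nginx</code> image, and the replica count are placeholder choices) that could be applied with <code>kubectl apply -f web.yaml</code>:</p>

```yaml
# A Deployment that keeps 2 identical Pods running,
# plus a Service that exposes them inside the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web        # Pods get this label
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web            # routes traffic to Pods with this label
  ports:
    - port: 80
      targetPort: 80
```

<p>The Service finds its Pods through the <code>app: web</code> label, so scaling the Deployment up or down needs no change to the Service.</p>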
<p><strong><mark>🟢 Interview Question 33:</mark></strong></p>
<p><strong><mark>“Explain the CI/CD pipeline flow of your project.”</mark></strong></p>
<blockquote>
<p>“First, when a developer pushes code to GitHub, it triggers the CI pipeline.<br />The pipeline pulls the code and automatically builds the application using GitHub Actions or Jenkins.<br />Next, SonarQube performs static code analysis to check code quality and security issues.<br />After that, a Docker image is built and pushed to the container registry.<br />Then, Trivy scans the Docker image to identify security vulnerabilities.<br />For monitoring, Prometheus collects metrics and Grafana provides visual dashboards.<br />Finally, the container image is deployed to the production environment.”</p>
</blockquote>
<p>✔️ <strong>Summary:</strong></p>
<p>“In my Netflix clone CI/CD project, code went from GitHub → Jenkins/GitHub Actions → SonarQube → Docker → Trivy → Monitoring → Deployment.”</p>
<p><strong><mark>🟢 Interview Question 34:</mark></strong></p>
<p><strong><mark>“If you use a public IP to access a server, how is it secure? Anyone can access it, right?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>“Public IP is safe because access depends on security group rules + SSH keys, not on IP alone.”</p>
<p><strong><mark>🟢 Interview Question 35:</mark></strong></p>
<p><strong><mark>“Give one combined real scenario where monitoring and security tools work together.”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>“In my Netflix clone deployment, I used Prometheus and Grafana to monitor container performance like CPU, RAM, latency, and traffic load. SonarQube was used for code scanning, and Docker images were scanned with Trivy for vulnerabilities. If any issue occurs, Prometheus alerts trigger notifications. This setup helped maintain application health, performance, and security continuously all in one workflow.”</p>
<p><strong><mark>🟢 Interview Question 36:</mark></strong></p>
<p><strong><mark>“What is CI/CD, and how does Jenkins help? Also, what is the difference between Jenkins and GitHub Actions?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>✔️ <strong>What is CI?</strong></p>
<p>“CI (Continuous Integration) means automatically building and testing code whenever developers push changes.”</p>
<p>✔️ <strong>What is CD?</strong></p>
<p>“CD (Continuous Deployment/Delivery) means automatically deploying applications to servers or containers after testing.”</p>
<p>✔️ <strong>How does Jenkins help?</strong></p>
<p><strong>“CI/CD automates build, test, and deployment. Jenkins is the tool to do it. GitHub Actions is a simpler, cloud-based CI/CD alternative.”</strong></p>
<p><strong><mark>🟢 Interview Question 37:</mark></strong></p>
<p><strong><mark>“What is Git and what is GitHub?”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>✔️ <strong>Git:</strong></p>
<p>“Git is a version control system used to track and manage source code changes.”</p>
<p>✔️ <strong>GitHub:</strong></p>
<p>“GitHub is a cloud platform used to store Git repositories and collaborate with others online.”</p>
<p><strong><mark>🟢 Interview Question 38:</mark></strong></p>
<p><strong><mark>“Explain the Git workflow from </mark></strong> <code>git init</code> <strong><mark>to pull request.”</mark></strong></p>
<p><strong>Answer I Gave:</strong></p>
<p>✔️ <strong>1️⃣ git init</strong></p>
<p>Start a new local repository.</p>
<p>✔️ <strong>2️⃣ git add .</strong></p>
<p>Add files to staging.</p>
<p>✔️ <strong>3️⃣ git commit -m “msg”</strong></p>
<p>Save changes in local repo.</p>
<p>✔️ <strong>4️⃣ git branch -M main</strong></p>
<p>Set main branch name.</p>
<p>✔️ <strong>5️⃣ git remote add origin &lt;URL&gt;</strong></p>
<p>Connect local repo to GitHub.</p>
<p>✔️ <strong>6️⃣ git push -u origin main</strong></p>
<p>Upload code to GitHub.</p>
<p>✔️ <strong>7️⃣ Create new branch</strong></p>
<p>Work on new feature.</p>
<p>✔️ <strong>8️⃣ git push (new branch)</strong></p>
<p>Push branch to GitHub.</p>
<p>✔️ <strong>9️⃣ Create Pull Request</strong></p>
<p>Compare branch → merge → review.</p>
<p><strong>Short path :-</strong><br /><mark>“init → add → commit → push → branch → pull request → merge.”</mark></p>
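<p>The short path above can be replayed end to end in a throwaway directory. A hedged sketch (the file, commit messages, and demo identity are made up; the push steps are only shown as comments):</p>

```shell
#!/usr/bin/env bash
set -e
cd "$(mktemp -d)"                                 # throwaway directory for the demo

git init .                                        # 1. start a new local repository
echo "hello" > app.txt
git add .                                         # 2. stage files
git -c user.name=Demo -c user.email=demo@example.com \
    commit -m "initial commit"                    # 3. save changes in the local repo
git branch -M main                                # 4. set the main branch name

# 5-6. connect and upload to GitHub (skipped in this offline demo):
#   git remote add origin YOUR_REPO_URL
#   git push -u origin main

git checkout -b feature-1                         # 7. create a feature branch
echo "new feature" >> app.txt
git add .
git -c user.name=Demo -c user.email=demo@example.com \
    commit -m "add feature"                       # 8. would be pushed for review

git checkout main                                 # 9. a Pull Request reviews, then merges
git merge feature-1
git log --oneline
```

<p>The final <code>git merge feature-1</code> is what GitHub performs for you when a Pull Request is approved and merged.</p>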
<h3 id="heading-interview-question-39"><strong><mark>Interview Question 39:</mark></strong></h3>
<p><strong><mark>“What is Shell Scripting, and why is it used in DevOps?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“Shell scripting automates Linux tasks using command sequences. In DevOps, it is used for automation, deployments, monitoring, log management, and server tasks.”</p>
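<p>As a small illustration of the monitoring use case, here is a minimal disk-usage alert sketch (the 80% threshold and the messages are arbitrary example choices):</p>

```shell
#!/usr/bin/env bash
# Alert when root filesystem usage crosses a threshold (example value).
THRESHOLD=80

# On Linux, `df /` prints "Use%" in the 5th column of the 2nd line; strip the % sign.
usage=$(df / | awk 'NR==2 {gsub("%", "", $5); print $5}')

if [ "$usage" -gt "$THRESHOLD" ]; then
  echo "ALERT: disk usage is ${usage}%"
else
  echo "OK: disk usage is ${usage}%"
fi
```

<p>Scheduled via cron, a script like this becomes a basic self-made monitor for a server.</p>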
<p><strong><mark>Interview Question 40:</mark></strong></p>
<p><strong><mark>“What is Shebang (#!) in shell script?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“The shebang tells Linux which interpreter should run the script, for example <code>#!/bin/bash</code>.”</p>
<p><strong><mark>Interview Question 41:</mark></strong></p>
<p><strong><mark>“How do you make a script executable?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“By using: <code>chmod +x script.sh</code>”</p>
<p><strong><mark>Interview Question 42:</mark></strong></p>
<p><strong><mark>“How do you run a shell script?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“By executing: <code>./script.sh</code>”</p>
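<p>Both steps together, in a throwaway directory (the script name and its output are just for the demo):</p>

```shell
#!/usr/bin/env bash
set -e
cd "$(mktemp -d)"          # demo area

# Create a small shell script
cat > script.sh <<'EOF'
#!/bin/bash
echo "hello from script"
EOF

chmod +x script.sh         # make it executable
./script.sh                # run it; prints: hello from script
```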
<p><strong><mark>Interview Question 43:</mark></strong></p>
<p><strong><mark>“Real scenario: If a service stops working, how will you restart it using shell script?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“I will automate it using systemctl commands inside a shell script to restart the service automatically if it fails.”</p>
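<p>A minimal sketch of such a script, assuming a systemd host and using <code>nginx</code> only as a placeholder service name; in practice it would be run on a schedule by cron or a systemd timer:</p>

```shell
#!/usr/bin/env bash
# Restart a service if it is no longer active.
# SERVICE is a placeholder; override it via the environment.
SERVICE="${SERVICE:-nginx}"

check_and_restart() {
  if systemctl is-active --quiet "$SERVICE"; then
    echo "$SERVICE is running"
  else
    echo "$SERVICE is down, restarting"
    systemctl restart "$SERVICE"
  fi
}

# Typically invoked every minute from cron
check_and_restart
```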
<p><strong><mark>Interview Question 44:</mark></strong></p>
<p><strong><mark>“How do you search for errors inside a log file using shell script?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“I will use grep inside a shell script to filter error lines, e.g., <code>grep 'ERROR' app.log</code>.”</p>
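<p>A runnable sketch of that answer; <code>app.log</code> is a demo file created on the spot:</p>

```shell
# Create a small demo log, then filter and count its ERROR lines.
printf 'INFO boot ok\nERROR db timeout\nINFO request served\nERROR disk full\n' > app.log
grep 'ERROR' app.log        # print only the error lines
grep -c 'ERROR' app.log     # -c prints the match count (2 here)
```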
<p><strong><mark>Interview Question 45:</mark></strong></p>
<p><strong><mark>“How do you schedule a shell script automatically?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“I will use cron jobs by running: crontab -e”</p>
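<p>A crontab entry (opened with <code>crontab -e</code>) has five time fields followed by the command. For example, a hypothetical nightly run at 2:30 AM (the script and log paths are assumptions):</p>

```
# m  h  dom mon dow  command
30 2 * * * /home/ubuntu/backup.sh >> /var/log/backup.log 2>&1
```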
<p><strong><mark>Interview Question 46:</mark></strong></p>
<p><strong><mark>“Give one real DevOps example of Shell Script.”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“I used a shell script to automate backups using tar, which removed manual effort.”</p>
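<p>A minimal sketch of a tar-based backup; the directory and archive names are example assumptions:</p>

```shell
# Create demo data, archive it, then list the archive to verify it.
mkdir -p demo_data && echo "hello" > demo_data/file.txt
tar -czf backup.tar.gz demo_data      # -c create, -z gzip, -f archive name
tar -tzf backup.tar.gz                # -t lists contents without extracting
```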
<p><strong><mark>Interview Question 47:</mark></strong></p>
<p><strong><mark>“Why is Python used in DevOps?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“Python is used for automation, cloud scripting, API integration, data parsing, monitoring, and CI/CD workflows.”</p>
<p><strong><mark>Interview Question 48:</mark></strong></p>
<p><strong><mark>“Do you know Python scripting?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“Yes, I know basic Python and can write automation scripts if needed.”</p>
<p><strong><mark>Interview Question 49:</mark></strong></p>
<p><strong><mark>“Give one real example of Python automation in DevOps.”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“I used Python to collect CPU and memory usage metrics automatically and send alerts.”</p>
<p><strong><mark>Interview Question 50:</mark></strong></p>
<p><strong><mark>“What is pip?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“pip is a Python package installer used to install libraries and modules.”</p>
<p><strong><mark>Interview Question 51:</mark></strong></p>
<p><strong><mark>“How do you automate AWS services using Python?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“I will use Python with boto3 library to create, manage, and automate AWS cloud operations.”</p>
<p><strong><mark>Interview Question 52:</mark></strong></p>
<p><strong><mark>“Shell scripting vs Python scripting in DevOps?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“Shell scripting is best for OS tasks, while Python is best for cloud automation, APIs, and data handling.”</p>
<p><strong><mark>Interview Question 53:</mark></strong></p>
<p><strong><mark>“How will you scan logs and extract errors using Python?”</mark></strong></p>
<p><strong>Answer I Gave:</strong><br />“I will loop through log file lines in Python and print the lines containing the keyword ‘ERROR’.”</p>
<h3 id="heading-extra-advanced-devops-interview-questions"><strong><em><mark>🟢 EXTRA ADVANCED DEVOPS INTERVIEW QUESTIONS</mark></em></strong></h3>
<p><strong>Q1: Hard link vs Soft link?</strong></p>
<p>A hard link is a second directory entry for the same inode, so the data survives if the original name is deleted; a soft (symbolic) link is a shortcut that points to a path and breaks if the target is removed.</p>
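<p>The difference is easy to verify on a scratch file:</p>

```shell
# Hard link vs soft link behaviour after the original file is removed.
echo "data" > original.txt
ln original.txt hard.txt          # hard link: second name for the same inode
ln -s original.txt soft.txt       # soft link: a pointer to the path "original.txt"
rm original.txt                   # delete the original name
cat hard.txt                      # still prints "data": the inode survives
cat soft.txt 2>/dev/null || echo "soft link is dangling"
```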
<p><strong>Q2: Find top 10 biggest files?</strong></p>
<p><code>du -ah / | sort -rh | head -n 10</code></p>
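<p>Note that this <code>du</code> one-liner ranks directories as well as files. To rank only regular files, a sketch using a throwaway demo directory:</p>

```shell
# Create two files of different sizes, then list the largest regular files.
mkdir -p demo
head -c 1048576 /dev/zero > demo/big.bin     # 1 MiB
head -c 10 /dev/zero > demo/small.bin        # 10 bytes
find demo -type f -exec du -ah {} + | sort -rh | head -n 10
```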
<p><strong>Q3: Check which port is in use?</strong></p>
<p><code>lsof -i :80</code></p>
<p><strong>Q4: Check live disk I/O?</strong></p>
<p><code>iotop</code></p>
<p><strong>Q5: Check failed login attempts?</strong></p>
<p><code>cat /var/log/secure</code> (RHEL/CentOS) or <code>cat /var/log/auth.log</code> (Debian/Ubuntu).</p>
<p><strong>Q6: SSH not working — how to access server?</strong></p>
<p>Use AWS Console or Session Manager.</p>
<p><strong>Q7: High server load — first check?</strong></p>
<p>Check CPU with <code>top</code>.</p>
<p><strong>Q8: Find zombie processes?</strong></p>
<p><code>ps aux | awk '$8 ~ /^Z/'</code> (the STAT column shows <code>Z</code> for zombies; a plain <code>grep Z</code> also matches unrelated text).</p>
<p><strong>Q9: Check highest CPU usage process?</strong></p>
<p><code>top -o %CPU</code></p>
<p><strong>Q10: Trace network packets?</strong></p>
<p><code>tcpdump</code></p>
<h2 id="heading-advanced-docker-k8s"><strong><mark>🟢 ADVANCED DOCKER / K8s</mark></strong></h2>
<p><strong>Q11: What is overlay network?</strong></p>
<p>Allows containers to talk across multiple hosts.</p>
<p><strong>Q12: How Kubernetes auto-scales?</strong></p>
<p>Using Horizontal Pod Autoscaler.</p>
<p><strong>Q13: Liveness vs Readiness probe?</strong></p>
<p>Liveness = container running; Readiness = ready for traffic.</p>
<p><strong>Q14: What if Kubernetes master fails?</strong></p>
<p>Cluster control breaks — need HA setup.</p>
<p><strong>Q15: Why etcd used?</strong></p>
<p>Stores Kubernetes cluster data.</p>
<h2 id="heading-advanced-cicd"><strong><mark>🟢 ADVANCED CI/CD</mark></strong></h2>
<p><strong>Q16: Canary deployment?</strong></p>
<p>Release to a small percentage of users first, then roll out to everyone.</p>
<p><strong>Q17: Handle secrets in pipeline?</strong></p>
<p>Use Secrets Manager or Vault.</p>
<p><strong>Q18: What is an artifact?</strong></p>
<p>Build output like Docker image or JAR.</p>
<p><strong>Q19: How to rollback deployment?</strong></p>
<p>Deploy the previous version again.</p>
<p><strong>Q20: Trigger pipeline automatically?</strong></p>
<p>Using webhook.</p>
<h3 id="heading-aws-scenario"><strong><mark>🟢 AWS SCENARIO</mark></strong></h3>
<p><strong>Q21: Public IP changes after reboot fix?</strong></p>
<p>Use Elastic IP.</p>
<p><strong>Q22: How is S3 secure?</strong></p>
<p>Bucket policy + IAM + encryption.</p>
<p><strong>Q23: RDS high load fix?</strong></p>
<p>Add read replicas.</p>
<p><strong>Q24: AWS cost increasing check how?</strong></p>
<p>Use Cost Explorer.</p>
<p><strong><mark>🟢 NETWORKING</mark></strong></p>
<p><strong>Q25: Website slow first check?</strong></p>
<p>Use <code>ping</code> to check latency and packet loss.</p>
<p><strong><mark>🟢 HR STYLE DEVOPS</mark></strong></p>
<p><strong>Q26: Deployment fails at night what will you do?</strong></p>
<p>Check logs, rollback safely, fix later.</p>
<p><strong>Q27: Why DevOps?</strong></p>
<p>For fast delivery and less manual work.</p>
<p><strong>Q28: Your weakness?</strong></p>
<p>I sometimes pay too much attention to details, which I am learning to balance with deadlines.</p>
<p><strong>Q29: Deadline is close what will you do?</strong></p>
<p>Focus on priority tasks.</p>
<p><strong><mark>🟢 ADVANCED LINUX COMMANDS</mark></strong></p>
<p><strong>Q30: htop use?</strong></p>
<p>Live process view.</p>
<p><strong>Q31: nmap use?</strong></p>
<p>Network scan.</p>
<p><strong>Q32: ncdu use?</strong></p>
<p>Disk usage view.</p>
<p><strong>Q33: strace use?</strong></p>
<p>System call trace.</p>
<p><strong>Q34: journalctl use?</strong></p>
<p>System logs.</p>
<p><strong>Q35: dig use?</strong></p>
<p>DNS check.</p>
<p><strong>Q36: rsync use?</strong></p>
<p>File sync.</p>
<p><strong>Q37: uptime use?</strong></p>
<p>System load.</p>
<p><strong>Q38: crontab -l use?</strong></p>
<p>List cron jobs.</p>
<p><strong><mark>🟢 ADVANCED DEVOPS / CLOUD CORE</mark></strong></p>
<p><strong>Q39: What is Kernel Panic?</strong></p>
<p>Fatal OS crash.</p>
<p><strong>Q40: What is inode?</strong></p>
<p>File metadata record.</p>
<p><strong>Q41: /etc/passwd vs /etc/shadow?</strong></p>
<p>passwd = users; shadow = passwords.</p>
<p><strong>Q42: What is SELinux?</strong></p>
<p>Security policy layer.</p>
<p><strong>Q43: What is race condition?</strong></p>
<p>Two processes access shared data at the same time, so the result depends on timing.</p>
<p><strong>Q44: Ephemeral storage?</strong></p>
<p>Temporary storage for pods.</p>
<p><strong>Q45: Pod Disruption Budget?</strong></p>
<p>Limits how many pods can be taken offline at once during voluntary disruptions.</p>
<p><strong>Q46: Container Registry?</strong></p>
<p>Stores container images.</p>
<p><strong>Q47: What is throttling?</strong></p>
<p>Performance reduced due to limits.</p>
<p><strong>Q48: Cache invalidation?</strong></p>
<p>Remove stale cache data.</p>
<p><strong>Q49: Sticky session LB?</strong></p>
<p>Same server for same user.</p>
<p><strong>Q50: Infrastructure drift?</strong></p>
<p>Infra state mismatch with IaC code.</p>
<p><mark>🟢 Conclusion ✨</mark></p>
<p>Preparing for a DevOps interview can feel challenging, especially for freshers, but the journey becomes easier when we learn from real experiences instead of memorizing theory. In this blog, I shared every question that was asked in my two DevOps interviews, along with simple and practical answers, so that anyone preparing can understand what companies actually expect.</p>
<p>From AWS, Linux, Networking, Docker, Kubernetes, and CI/CD pipelines to scenario-based cloud solutions, these real interview learnings will save you time, build confidence, and help you prepare smarter.</p>
<p>💡 The purpose of this blog was not just to share answers,<br />but to show the <strong>process, mindset, and direction</strong> that a DevOps candidate should follow.</p>
<p>If you are a fresher like me, remember one thing:</p>
<p><strong>You do not need to know everything.</strong><br />You just need to:<br />✔️ Understand concepts<br />✔️ Build projects<br />✔️ Stay consistent<br />✔️ Keep learning<br />✔️ Believe in yourself</p>
<p>DevOps is not just about tools.<br />It is about problem-solving, automation, teamwork, and continuous improvement.</p>
<p>Thank you for reading this blog.<br />I hope it helps you prepare better, feel confident, and move one step closer to your DevOps career.</p>
<p>🚀 If I can do it, you can do it too.<br />All the best, future DevOps engineers! 🔥</p>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt /></p>
<p><mark>“This series isn’t just about tools it’s about mastering real DevOps concepts and interview-ready knowledge.”</mark></p>
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[ERPNext Setup on Ubuntu 22.04 – From Zero to Hero!]]></title><description><![CDATA[📑 Table of Contents

🌟 Introduction: Why ERPNext?

🎯 Aim of this Blog

🛠️ Prerequisites Before You Begin

⚙️ Step 1: Server Setup – Preparing Ubuntu for ERPNext

📦 Step 2: Install Required Packages

🗄️ Step 3: Configure MySQL Server

🔧 Step 4:...]]></description><link>https://apurv-gujjar.me/erpnext</link><guid isPermaLink="true">https://apurv-gujjar.me/erpnext</guid><category><![CDATA[AWS]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[MariaDB]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[Databases]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Fri, 22 Aug 2025 04:25:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755836531022/ac61a852-e222-4e6e-9132-1271e2526ec2.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-table-of-contents">📑 Table of Contents</h2>
<ol>
<li><p>🌟 <strong>Introduction: Why ERPNext?</strong></p>
</li>
<li><p>🎯 <strong>Aim of this Blog</strong></p>
</li>
<li><p>🛠️ <strong>Prerequisites Before You Begin</strong></p>
</li>
<li><p>⚙️ <strong>Step 1: Server Setup – Preparing Ubuntu for ERPNext</strong></p>
</li>
<li><p>📦 <strong>Step 2: Install Required Packages</strong></p>
</li>
<li><p>🗄️ <strong>Step 3: Configure MySQL Server</strong></p>
</li>
<li><p>🔧 <strong>Step 4: Install CURL, Node.js, NPM, and Yarn</strong></p>
</li>
<li><p>🏗️ <strong>Step 5: Install Frappe Bench Framework</strong></p>
</li>
<li><p>🚀 <strong>Step 6: Install ERPNext and Other Applications</strong></p>
</li>
<li><p>✅ <strong>Final Verification and Testing Your Setup</strong></p>
</li>
<li><p>📌 <strong>Conclusion: Running ERPNext Smoothly</strong></p>
</li>
</ol>
<h2 id="heading-introduction-why-erpnext">🌟 Introduction: Why ERPNext?</h2>
<p>Running a business requires managing multiple operations: finance, sales, HR, payroll, CRM, projects, and more. Instead of juggling multiple tools, wouldn’t it be better to have one platform for everything? That’s exactly what <strong><mark>ERPNext</mark></strong> delivers.</p>
<p>ERPNext is an <strong><mark>open-source ERP solution</mark></strong> designed to simplify and <mark>automate business</mark> processes. From startups to enterprises, it’s trusted worldwide for being <strong>cost-effective, flexible, and scalable</strong>. Unlike other ERP tools that demand heavy licensing costs, ERPNext offers a <strong>free and community-driven platform</strong> that anyone can install and use. This blog is your <strong>complete, beginner-friendly guide</strong> to installing ERPNext on Ubuntu 22.04 LTS.</p>
<h2 id="heading-aim-of-this-blog">🎯 Aim of this Blog</h2>
<p>The main objective of this guide is to help you:</p>
<ul>
<li><p>✅ Understand <mark>ERPNext</mark> installation requirements.</p>
</li>
<li><p>✅ Learn server setup and dependency installation.</p>
</li>
<li><p>✅ Configure and install ERPNext with Frappe framework.</p>
</li>
<li><p>✅ Explore add-on modules like <strong><mark>HR, Payroll, Sales, and CRM</mark></strong><mark>.</mark></p>
</li>
</ul>
<p>By the end, you’ll have a <strong>fully functional ERPNext system</strong> running on your local machine.</p>
<h3 id="heading-prerequisites-before-you-begin">🛠️ <mark>Prerequisites Before You Begin</mark></h3>
<p>Before we start the installation, make sure your system meets the minimum requirements:</p>
<h3 id="heading-operating-system">🔹 Operating System</h3>
<ul>
<li>Ubuntu <strong>22.04 LTS (64-bit)</strong></li>
</ul>
<h3 id="heading-minimum-hardware-requirements">🔹 Minimum Hardware Requirements</h3>
<ul>
<li><p><strong>CPU</strong>: 2 cores</p>
</li>
<li><p><strong>RAM</strong>: 2 GB</p>
</li>
<li><p><strong>Storage</strong>: 15 GB free disk space</p>
</li>
</ul>
<p><em>(Tip: For smoother performance, 4 GB RAM and 2+ CPUs are recommended.)</em></p>
<h3 id="heading-setup-environment">🔹 Setup Environment</h3>
<ul>
<li><p>Perform the installation on a <strong>local machine</strong> for practice/testing.</p>
</li>
<li><p>Ensure you have <strong>basic Linux command knowledge</strong> (updating system, installing packages, using terminal).</p>
</li>
</ul>
<h3 id="heading-step-1-server-setup-preparing-ubuntu">⚙️ Step 1: Server Setup – Preparing Ubuntu</h3>
<p>Before we start installing ERPNext, let’s make sure our Ubuntu 22.04 server is ready.</p>
<p><mark>🔹 Update &amp; Upgrade Packages</mark></p>
<p>Keep the system up to date:</p>
<pre><code class="lang-powershell">sudo apt update &amp;&amp; sudo apt upgrade <span class="hljs-literal">-y</span>
</code></pre>
<h3 id="heading-set-timezone">🔹 Set Timezone</h3>
<p>Correct timezone ensures accuracy in reports, HR, and payroll:</p>
<pre><code class="lang-powershell">sudo timedatectl <span class="hljs-built_in">set-timezone</span> Asia/Kolkata
</code></pre>
<h3 id="heading-install-basic-tools">🔹 Install Basic Tools</h3>
<p>Essential utilities for smooth installation:</p>
<pre><code class="lang-powershell">sudo apt install git <span class="hljs-built_in">curl</span> <span class="hljs-built_in">wget</span> htop nano unzip <span class="hljs-literal">-y</span>
</code></pre>
<p>✅ That’s it! Your server is now ready to proceed with ERPNext installation.</p>
<h3 id="heading-step-2-install-required-packages">📦 Step 2: Install Required Packages</h3>
<p>In this step, we install the essential tools for Python, create swap memory for smooth performance, and set up MariaDB (MySQL alternative) for database support.</p>
<hr />
<h3 id="heading-install-python-build-tools">🐍 Install Python + Build Tools</h3>
<pre><code class="lang-powershell">sudo apt <span class="hljs-literal">-y</span> install python3 python3<span class="hljs-literal">-pip</span> python3<span class="hljs-literal">-venv</span> python3<span class="hljs-literal">-dev</span> build<span class="hljs-literal">-essential</span> libffi<span class="hljs-literal">-dev</span> libssl<span class="hljs-literal">-dev</span>
</code></pre>
<p>🔎 <strong>Explanation:</strong></p>
<ul>
<li><p><strong>python3</strong> → Installs the Python 3 interpreter.</p>
</li>
<li><p><strong>python3-pip</strong> → Python package manager (used to install libraries).</p>
</li>
<li><p><strong>python3-venv</strong> → For creating isolated virtual environments.</p>
</li>
<li><p><strong>python3-dev</strong> → Development headers required for compiling Python extensions.</p>
</li>
<li><p><strong>build-essential</strong> → Provides compiler tools like <code>gcc</code> and <code>make</code>.</p>
</li>
<li><p><strong>libffi-dev</strong> &amp; <strong>libssl-dev</strong> → Required for cryptography and secure connections.</p>
</li>
</ul>
<p>👉 These dependencies are necessary for ERPNext/Bench to work properly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755831289646/926572d3-1450-481d-a6ec-a1e9e89beb4c.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-create-amp-enable-swap-memory">🧠 Create &amp; Enable Swap Memory</h3>
<pre><code class="lang-powershell">sudo fallocate <span class="hljs-literal">-l</span> <span class="hljs-number">2</span>G /swapfile    <span class="hljs-comment"># Create a 2GB swap file</span>
sudo chmod <span class="hljs-number">600</span> /swapfile          <span class="hljs-comment"># Set correct permissions</span>
sudo mkswap /swapfile             <span class="hljs-comment"># Format the swap file</span>
sudo swapon /swapfile             <span class="hljs-comment"># Enable swap</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">'/swapfile none swap sw 0 0'</span> | sudo <span class="hljs-built_in">tee</span> <span class="hljs-literal">-a</span> /etc/fstab   <span class="hljs-comment"># Make swap permanent after reboot</span>
</code></pre>
<p>🔎 <strong>Explanation:</strong></p>
<ul>
<li><p><strong>Swap memory</strong> acts as virtual RAM using disk space.</p>
</li>
<li><p>If the system runs out of physical RAM, swap ensures processes don’t crash.</p>
</li>
<li><p>Here, we created a <strong>2GB swap file</strong> to handle heavy package installations and Python compilations smoothly.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755831346414/5f3a4e30-07ef-4545-a0e5-ba697dad9c6d.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-install-mariadb-mysql-server">🗄️ Install MariaDB (MySQL Server)</h3>
<pre><code class="lang-powershell">sudo apt <span class="hljs-literal">-y</span> install mariadb<span class="hljs-literal">-server</span> mariadb<span class="hljs-literal">-client</span>
</code></pre>
<p>🔎 <strong>Explanation:</strong></p>
<ul>
<li><p><strong>mariadb-server</strong> → Installs the database engine (where ERPNext stores data).</p>
</li>
<li><p><strong>mariadb-client</strong> → Provides tools to interact with the database.</p>
</li>
</ul>
<p>👉 MariaDB is a <strong>drop-in replacement for MySQL</strong>: it uses the same commands and protocol and is fully open-source.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755831418991/d5b8c944-5a9a-4805-ba5a-e2c0b11ab402.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-step-3-configure-mysql-server">🗄️ Step 3: Configure MySQL Server</h3>
<p>Now, let’s secure the MariaDB server.</p>
<pre><code class="lang-powershell">sudo mysql_secure_installation
</code></pre>
<p>🔎 <strong>Explanation:</strong><br />This is an interactive script that asks security-related questions:</p>
<ol>
<li><p><strong>Set root password</strong> → Protects the database root user with a password.</p>
</li>
<li><p><strong>Remove anonymous users</strong> → Deletes default anonymous accounts.</p>
</li>
<li><p><strong>Disallow remote root login</strong> → Ensures root can only log in from <code>localhost</code>.</p>
</li>
<li><p><strong>Remove test database</strong> → Deletes the unnecessary test DB.</p>
</li>
</ol>
<p>👉 These steps make MariaDB secure and production-ready.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755831442965/2484157e-1aa4-409a-8ae2-4a8e21e4d1d0.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-understanding-the-bind-address-in-mariadb-for-erpnext">🔑 Understanding the <code>bind-address</code> in MariaDB for ERPNext</h3>
<p>When setting up <strong>ERPNext</strong> on Ubuntu with <strong>MariaDB</strong>, one important configuration line you might have noticed in the <code>erpnext.cnf</code> file is:</p>
<pre><code class="lang-powershell">bind<span class="hljs-literal">-address</span> = <span class="hljs-number">127.0</span>.<span class="hljs-number">0.1</span>
</code></pre>
<p>But what does this mean, and why do we use it? 🤔</p>
<hr />
<h2 id="heading-what-is-bind-address">🌐 What is <code>bind-address</code>?</h2>
<p>The <code>bind-address</code> tells MariaDB <strong>which network interface (IP address)</strong> it should listen to for incoming connections.</p>
<ul>
<li><p><code>127.0.0.1</code> → This is the <strong>loopback address</strong> (localhost). It means MariaDB will <strong>only accept connections from the same machine</strong> where it is installed.</p>
</li>
<li><p>Any other IP (like <code>0.0.0.0</code> or server’s public IP) → MariaDB would listen to requests from outside machines too.</p>
</li>
</ul>
<hr />
<h2 id="heading-why-use-127001-for-erpnext">🔒 Why use <code>127.0.0.1</code> for ERPNext?</h2>
<p>For most ERPNext installations:<br />✅ ERPNext and MariaDB run on the <strong>same server</strong>.<br />✅ We don’t need external machines to connect directly to MariaDB.<br />✅ It provides <strong>better security</strong>, as no one from outside the server can attempt to connect to the database.</p>
<p>This is why:</p>
<pre><code class="lang-powershell">bind<span class="hljs-literal">-address</span> = <span class="hljs-number">127.0</span>.<span class="hljs-number">0.1</span>
</code></pre>
<p>is the <strong>safest choice</strong> for local or single-server ERPNext setups.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755831654213/e46446cb-f499-4e23-9d0d-2a3a7accd057.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4-install-redis-nodejs-yarn-amp-frappe-bench"><mark>⚡ Step 4: Install Redis, Node.js, Yarn &amp; Frappe Bench</mark></h3>
<p>In this step, we will install all the <strong>required tools</strong> to run ERPNext efficiently.</p>
<hr />
<h2 id="heading-41-install-redis-amp-wkhtmltopdf">🛠️ 4.1 Install Redis &amp; wkhtmltopdf</h2>
<p>1️⃣ Install Redis &amp; wkhtmltopdf:</p>
<pre><code class="lang-powershell">sudo apt <span class="hljs-literal">-y</span> install redis<span class="hljs-literal">-server</span> wkhtmltopdf
</code></pre>
<p>👉 <code>redis-server</code>: Used for caching and background task queues.<br />👉 <code>wkhtmltopdf</code>: Helps ERPNext generate PDF reports (Invoices, Quotations, etc.).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755832104999/af8af361-d3cb-4786-8384-295969c57369.png" alt class="image--center mx-auto" /></p>
<p>2️⃣ Enable Redis to start on boot:</p>
<pre><code class="lang-powershell">sudo systemctl enable redis<span class="hljs-literal">-server</span>
</code></pre>
<p>👉 Makes sure Redis automatically starts when the server reboots.</p>
<p>3️⃣ Start Redis immediately:</p>
<pre><code class="lang-powershell">sudo systemctl <span class="hljs-built_in">start</span> redis<span class="hljs-literal">-server</span>
</code></pre>
<p>👉 Runs the Redis service right now without waiting for a reboot.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755832136584/85053467-6f9f-49bc-8537-704f87573efc.png" alt class="image--center mx-auto" /></p>
<p>4️⃣ Test Redis is working:</p>
<pre><code class="lang-powershell">redis<span class="hljs-literal">-cli</span> ping
</code></pre>
<p>👉 If Redis is active, you will see <strong>PONG</strong> ✅.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755832176198/7e65bb41-7b78-49a2-a70d-666b2335ba39.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-42-install-nodejs">⚙️ 4.2 Install Node.js</h2>
<p>1️⃣ Add Node.js 18.x repository:</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">curl</span> <span class="hljs-literal">-fsSL</span> https://deb.nodesource.com/setup_18.x | sudo <span class="hljs-literal">-E</span> bash -
</code></pre>
<p>👉 Downloads &amp; configures Node.js LTS (18.x) setup for Ubuntu.</p>
<p>2️⃣ Install Node.js:</p>
<pre><code class="lang-powershell">sudo apt <span class="hljs-literal">-y</span> install nodejs
</code></pre>
<p>👉 Installs both <strong>Node.js</strong> (runtime) and <strong>npm</strong> (package manager).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755832213304/c7318694-8786-4569-a6c4-49b1e4f5a7ca.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-43-install-yarn">📦 4.3 Install Yarn</h2>
<p>1️⃣ Install Yarn globally:</p>
<pre><code class="lang-powershell">sudo npm install <span class="hljs-literal">-g</span> yarn
</code></pre>
<p>👉 Yarn is a <strong>faster package manager</strong> used by ERPNext frontend for handling JavaScript libraries.</p>
<p>2️⃣ Check Yarn version:</p>
<pre><code class="lang-powershell">yarn <span class="hljs-literal">-v</span>
</code></pre>
<p>👉 Confirms Yarn is installed successfully.</p>
<p>3️⃣ Check Node.js version:</p>
<pre><code class="lang-powershell">node <span class="hljs-literal">-v</span>
</code></pre>
<p>👉 Confirms Node.js installed properly.</p>
<p>4️⃣ Check npm version:</p>
<pre><code class="lang-powershell">npm <span class="hljs-literal">-v</span>
</code></pre>
<p>👉 Confirms npm (Node package manager) is working.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755832634642/5fb57a52-06a8-4aec-be7d-348f59b94e77.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-44-install-frappe-bench">🚀 4.4 Install Frappe Bench</h2>
<p>1️⃣ Install pipx (required for managing Python apps safely):</p>
<pre><code class="lang-powershell">sudo apt <span class="hljs-literal">-y</span> install pipx
</code></pre>
<p>👉 Installs pipx, which isolates Python-based applications.</p>
<p>2️⃣ Add pipx to system PATH:</p>
<pre><code class="lang-powershell">pipx ensurepath
</code></pre>
<p>👉 Ensures pipx commands can run globally from terminal.</p>
<p>3️⃣ Install Frappe Bench:</p>
<pre><code class="lang-powershell">pipx install frappe<span class="hljs-literal">-bench</span>
</code></pre>
<p>👉 Installs the <strong>bench tool</strong>, used to create and manage ERPNext projects/sites.</p>
<p>4️⃣ Verify Bench installation:</p>
<pre><code class="lang-powershell">bench -<span class="hljs-literal">-version</span>
</code></pre>
<p>👉 Confirms that Bench is installed correctly.</p>
<p>✨ ✅ At this point, Redis, Node.js, Yarn, and Frappe Bench are all set up!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755832738633/7dce6923-3efd-4fc3-aa3b-57fbbdf5c7db.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5-installing-frappe-bench-framework"><mark>🏗️ Step 5: Installing Frappe Bench Framework</mark></h3>
<p>The <strong>Frappe Bench</strong> is the backbone of ERPNext. It helps you create, manage, and run multiple ERPNext sites with ease 🚀. Let’s set it up!</p>
<hr />
<h3 id="heading-1-create-a-frappe-directory">⚡ 1. Create a Frappe Directory</h3>
<pre><code class="lang-powershell">mkdir <span class="hljs-literal">-p</span> ~/frappe &amp;&amp; <span class="hljs-built_in">cd</span> ~/frappe
</code></pre>
<p>🔹 <code>mkdir -p ~/frappe</code> → Creates a new folder called <strong>frappe</strong> inside your home directory.<br />🔹 <code>cd ~/frappe</code> → Immediately moves you inside that folder.</p>
<p>👉 This ensures we have a clean space where all ERPNext files will be stored 📂.</p>
<hr />
<h3 id="heading-2-initialize-bench-with-frappe-v15">⚡ 2. Initialize Bench with Frappe v15</h3>
<pre><code class="lang-powershell">bench init -<span class="hljs-literal">-frappe</span><span class="hljs-literal">-branch</span> version<span class="hljs-literal">-15</span> frappe<span class="hljs-literal">-bench</span>
</code></pre>
<p>🔹 <code>bench init</code> → Starts the setup of a new Bench environment.<br />🔹 <code>--frappe-branch version-15</code> → Installs the latest stable <strong>Frappe v15</strong>.<br />🔹 <code>frappe-bench</code> → Name of the folder where the new Bench environment will be created.</p>
<p>👉 After this, a new directory <strong>frappe-bench</strong> is created which contains all necessary files ⚙️.</p>
<hr />
<h3 id="heading-3-move-into-bench-directory">⚡ 3. Move into Bench Directory</h3>
<pre><code class="lang-powershell"><span class="hljs-built_in">cd</span> frappe<span class="hljs-literal">-bench</span>
</code></pre>
<p>🔹 Switches into the <strong>frappe-bench</strong> folder created in the previous step.<br />👉 From here, you can start creating ERPNext sites and apps 🎯.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755833079394/9be16f9c-2922-43a4-8541-c5aac06d4673.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755833096313/51bc6cc9-8793-46fd-ab6c-6e17ec75bfdf.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-6-installing-erpnext-and-applications"><mark>✨ Step 6: Installing ERPNext and Applications</mark></h3>
<p>Once your <strong>Frappe Framework</strong> is ready, the next move is to bring <strong>ERPNext</strong> into action. This step will set up ERPNext inside your local environment 🚀.</p>
<hr />
<h2 id="heading-1-create-a-new-site">🖥️ 1. Create a New Site</h2>
<pre><code class="lang-powershell">bench <span class="hljs-built_in">new-site</span> site1.local
</code></pre>
<p>🔹 Creates a fresh site named <strong>site1.local</strong>.<br />🔹 It will ask for:</p>
<ul>
<li><p><strong>MySQL root password</strong> 🔑</p>
</li>
<li><p><strong>Administrator password</strong> (used later for ERPNext login)</p>
</li>
</ul>
<p>👉 Think of this as preparing an <strong>empty house 🏠</strong> where ERPNext will live.</p>
<hr />
<h2 id="heading-2-download-erpnext-app">📥 2. Download ERPNext App</h2>
<pre><code class="lang-powershell">bench <span class="hljs-built_in">get-app</span> erpnext -<span class="hljs-literal">-branch</span> version<span class="hljs-literal">-15</span>
</code></pre>
<p>🔹 Fetches ERPNext source code from GitHub.<br />🔹 <code>--branch version-15</code> ensures you install the <strong>latest stable ERPNext (v15)</strong>.</p>
<p>👉 Like bringing <strong>ERPNext package 📦</strong> into your system.</p>
<hr />
<h2 id="heading-3-install-erpnext-on-your-site">🔗 3. Install ERPNext on Your Site</h2>
<pre><code class="lang-powershell">bench -<span class="hljs-literal">-site</span> site1.local <span class="hljs-built_in">install-app</span> erpnext
</code></pre>
<p>🔹 Installs ERPNext inside your site.<br />🔹 Now, <code>site1.local</code> is fully powered by ERPNext.</p>
<p>👉 Imagine you just <strong>plugged ERPNext’s engine 🚂</strong> into your site.</p>
<hr />
<h2 id="heading-4-build-erpnext-assets">🎨 4. Build ERPNext Assets</h2>
<pre><code class="lang-powershell">bench build -<span class="hljs-literal">-app</span> erpnext
</code></pre>
<p>🔹 Compiles <strong>JS, CSS, and frontend files</strong>.<br />🔹 Without this, your ERPNext dashboard may look broken.</p>
<p>👉 This step <strong>decorates the house 🏡✨</strong>, making ERPNext’s UI ready.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755833648674/b6d20764-5d5e-465e-9b83-2e78bfc2734a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755833686157/c69e96c8-d820-4bec-a58e-4673b247f712.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-5-install-honcho-process-manager">⚡ 5. Install Honcho (Process Manager)</h2>
<pre><code class="lang-powershell">pip install honcho
honcho -<span class="hljs-literal">-version</span>
</code></pre>
<p>🔹 <strong>Honcho</strong> helps run multiple services (Frappe, Redis, Workers) together.<br />🔹 First command installs it, second confirms the version.</p>
<p>👉 Think of Honcho as your <strong>site manager 👷</strong>, keeping everything running smoothly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755833387494/e94f90f3-5df2-4083-b752-fe99fb53bcb6.png" alt class="image--center mx-auto" /></p>
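<p>Under the hood, <code>bench start</code> uses Honcho to launch every process listed in the bench’s <code>Procfile</code>. Below is an illustrative sketch of what that file typically contains — the file generated by your own bench may differ slightly in entries and paths:</p>

```text
web: bench serve --port 8000
socketio: node apps/frappe/socketio.js
watch: bench watch
schedule: bench schedule
worker: bench worker 1>> logs/worker.log 2>> logs/worker.error.log
redis_cache: redis-server config/redis_cache.conf
redis_queue: redis-server config/redis_queue.conf
```

<p>Each line is one service; Honcho starts them all together and interleaves their logs, which is exactly what you see when running <code>bench start</code>.</p>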
<h2 id="heading-final-verification-and-testing-your-setup"><mark>✅ </mark> <strong><mark>Final Verification and Testing Your Setup</mark></strong></h2>
<h3 id="heading-start-erpnext-and-access-on-localhosthttplocalhost">▶️ Start ERPNext and Access It on Localhost</h3>
<p>Once ERPNext is installed, it’s time to bring it to life! 🚀</p>
<h3 id="heading-step-1-go-to-your-bench-folder">🔹 Step 1: Go to your bench folder</h3>
<pre><code class="lang-powershell"><span class="hljs-built_in">cd</span> ~/frappe<span class="hljs-literal">-bench</span>
</code></pre>
<h3 id="heading-step-2-start-erpnext-services">🔹 Step 2: Start ERPNext services</h3>
<pre><code class="lang-powershell">bench <span class="hljs-built_in">start</span>
</code></pre>
<p>This command will launch the web server, scheduler, workers, and all background services required for ERPNext.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755834226347/d5cacc6c-a8a8-4bc3-b3ae-f357bafb3322.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755834251735/bdea7c51-76d9-47da-8bd2-7a7b9a4f753f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3-open-erpnext-in-your-browser">🔹 Step 3: Open ERPNext in your browser</h3>
<p>Now visit:</p>
<pre><code class="lang-powershell">http://site1.local:<span class="hljs-number">8000</span>
</code></pre>
<p>(If <code>site1.local</code> doesn’t work, try <a target="_blank" href="http://localhost:8000"><code>http://localhost:8000</code></a>)</p>
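<p>If <code>site1.local</code> doesn’t resolve, it usually just needs an entry in your hosts file. The snippet below only <em>checks</em> and tells you what to add (writing to <code>/etc/hosts</code> itself requires sudo); the <code>HOSTS_FILE</code> variable is purely for illustration:</p>

```shell
# Check whether site1.local is mapped to the loopback address.
# Editing /etc/hosts requires sudo, so this helper only reports.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
if grep -q "site1.local" "$HOSTS_FILE" 2>/dev/null; then
  echo "site1.local is already mapped"
else
  echo "Add this line to $HOSTS_FILE (with sudo): 127.0.0.1 site1.local"
fi
```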
<p>✨ And boom 💥 — your ERPNext dashboard will appear, ready for you to explore modules like <strong>HR, Sales, CRM, Accounting, Inventory, and more!</strong> 🎯</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755834274794/710610eb-da13-4869-9845-e79f7c5b52ad.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755834292587/65cd697d-0886-4085-87f2-2a5f6dad4adc.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755834302719/760055e2-608b-45b0-a876-6278bde4b126.png" alt class="image--center mx-auto" /></p>
<p><mark>🎉 After installing ERPNext, complete the </mark> <strong><mark>Setup Wizard</mark></strong> <mark> by creating your admin account and entering your company details. Once done, you’ll be redirected straight to the </mark> <strong><mark>ERPNext dashboard</mark></strong><mark>, ready to explore all modules. 🚀</mark></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755834310345/8dcd92cc-f799-41e2-99f9-2cec2da7ea10.png" alt class="image--center mx-auto" /></p>
<p><mark>⚠️ </mark> <strong><mark>Notice</mark></strong><br />If you don’t see modules like <strong>HR, Sales, CRM, Accounting, or Tools</strong> on your ERPNext dashboard after installation, don’t worry!</p>
<p>👉 Simply use the <strong><mark>Search Bar (Ctrl + G)</mark></strong> at the top and type the module name (e.g., <em>HR</em> or <em>Sales</em>). Once opened, you can pin or add them to your sidebar/dashboard.</p>
<p>This happens because ERPNext hides some modules by default — but they are all available and just one search away. ✅</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755834322434/daedafbb-3ac2-457e-9c55-658e3ec69e90.png" alt class="image--center mx-auto" /></p>
<p>✅ <strong>Conclusion</strong><br />You’ve now completed the full journey from setting up the server to launching ERPNext in your browser. 🎯 With modules like <strong>HR, Sales, CRM, Accounting, and Inventory</strong>, your business is ready to streamline operations on a modern open-source ERP system. This is just the beginning: explore, customize, and make ERPNext truly yours! 🚀</p>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>This project is a deep dive into the <strong>ERPNext ecosystem</strong>, designed to strengthen my foundation in enterprise resource planning, business automation, and modular integration using open-source technology.</p>
<p>This series isn’t just about installing ERPNext; it’s about <strong>mastering the core modules that power modern business management</strong> from HR and Sales to Accounting, Inventory, and CRM. 🚀</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
<p>✨ <strong>Final Note</strong><br />This blog is a reflection of my personal journey and learning efforts. I have written and shaped it with my own understanding and dedication. Still, perfection is a continuous process so if you find any mistakes, missing points, or areas of improvement, feel free to share your suggestions in the comments. Your feedback will not only help me improve this work but will also add more value to everyone who reads it. 🚀</p>
<p>Thank you for taking the time to read! 🙌</p>
]]></content:encoded></item><item><title><![CDATA[☁️ Seamless Cloud Backup & Restore using Rclone: Pcloud ➡️ AWS S3 Automation]]></title><description><![CDATA[Aim 🎯
The aim of this task is to securely backup and restore data from another cloud (Pcloud) to AWS S3. 💾🔒To achieve this, I performed the entire task using Rclone, created an automated Bash script, and scheduled Cron jobs to ensure continuous, s...]]></description><link>https://apurv-gujjar.me/s3-task-backup-restore-part-2-using-rcone</link><guid isPermaLink="true">https://apurv-gujjar.me/s3-task-backup-restore-part-2-using-rcone</guid><category><![CDATA[AWS]]></category><category><![CDATA[clone script]]></category><category><![CDATA[S3-bucket]]></category><category><![CDATA[EC2 instance]]></category><category><![CDATA[S3]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[remote]]></category><category><![CDATA[synchronization]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Fri, 15 Aug 2025 04:36:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755226476710/aaeeb023-9cac-4ac3-a3c6-447325c14bab.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-aim"><strong>Aim 🎯</strong></h2>
<p>The aim of this task is to <strong>securely backup and restore data from another cloud (Pcloud) to AWS S3</strong>. 💾🔒<br />To achieve this, I performed the <strong>entire task using Rclone</strong>, created an <strong>automated Bash script</strong>, and scheduled <strong>Cron jobs</strong> to ensure <strong>continuous, safe, and hassle-free data synchronization</strong>. 🌐✅</p>
<h3 id="heading-introduction"><strong>Introduction 📖</strong></h3>
<p>Data is the most valuable asset 💻💾, and keeping it safe and accessible is crucial. In this task, I <strong>used Rclone with sync</strong> to <strong>securely backup and restore data from Pcloud to AWS S3 ☁️➡️☁️</strong>. With a <strong>Bash script 🖥️</strong> and <strong>Cron jobs ⏰</strong>, the process is <strong>fully automated, secure, and hassle-free ✅</strong>.</p>
<h2 id="heading-agenda">📁 <strong>Agenda</strong></h2>
<ul>
<li><p>🛠️ Prerequisites Setup</p>
</li>
<li><p>🖥️ EC2 Instance Configuration</p>
</li>
<li><p>👤 IAM User &amp; S3 Bucket Creation</p>
</li>
<li><p>📦 Rclone Installation &amp; Remote Configuration</p>
</li>
<li><p>💻 Backup &amp; Restore Script Development</p>
</li>
<li><p>⏱️ Cron Automation Setup</p>
</li>
<li><p>🧪 Testing &amp; Verification</p>
</li>
<li><p>🏁 Conclusion</p>
</li>
</ul>
<h2 id="heading-prerequisites">🛠️ <strong>Prerequisites</strong></h2>
<ul>
<li><p>✅ AWS Account with EC2 access</p>
</li>
<li><p>✅ pCloud account (or any other cloud) with data to back up</p>
</li>
<li><p>✅ Basic Linux command knowledge</p>
</li>
<li><p>✅ SSH access to Ubuntu server</p>
</li>
</ul>
<h2 id="heading-step-1-ec2-instance-setup">🖥️ <strong>Step 1: EC2 Instance Setup</strong></h2>
<h2 id="heading-launch-ubuntu-ec2-instance">Launch Ubuntu EC2 Instance</h2>
<pre><code class="lang-powershell"><span class="hljs-comment"># Instance Type: t3.micro (Free tier eligible)</span>
<span class="hljs-comment"># AMI: Ubuntu Server 22.04 LTS</span>
<span class="hljs-comment"># Security Group: Allow SSH (Port 22)</span>
<span class="hljs-comment"># Key Pair: Create or use existing</span>
</code></pre>
<h2 id="heading-connect-to-instance">Connect to Instance</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755227497547/b96706b4-ea88-4f51-8d39-866ef60a1474.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-2-iam-user-amp-s3-bucket-creation">👤 <strong>Step 2: IAM User &amp; S3 Bucket Creation</strong></h2>
<h2 id="heading-create-iam-user">🔏Create IAM User</h2>
<ol>
<li><p>Go to AWS IAM Console</p>
</li>
<li><p>Create new user: <code>rclone-backup-user</code></p>
</li>
<li><p>Attach policy: <code>AmazonS3FullAccess</code></p>
</li>
<li><p>Generate Access Keys (save securely)</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755227576393/14da3778-4450-4e73-90c9-9130d8478ea2.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755227593603/dd62e385-655e-4792-8ca8-ab2fe1d100f7.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h2 id="heading-create-s3-bucket">🪣Create S3 Bucket</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755227947603/6937f948-d1a5-4580-844a-e78e7ce7c2f4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-3-rclone-installation-amp-configuration">📦 <strong>Step 3: Rclone Installation &amp; Configuration</strong></h2>
<h2 id="heading-install-rclone">Install Rclone</h2>
<pre><code class="lang-powershell">sudo apt update
sudo apt install <span class="hljs-literal">-y</span> rclone
rclone version
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755228131488/dd07ff6b-55ca-448b-aa4f-9884ddcda37a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-configure-pcloud-remote">Configure PCloud Remote</h2>
<pre><code class="lang-powershell">rclone config
<span class="hljs-comment"># Choose: n (New remote)</span>
<span class="hljs-comment"># Name: pcloud</span>
<span class="hljs-comment"># Storage: pcloud</span>
<span class="hljs-comment"># Follow OAuth authentication process</span>
<span class="hljs-comment"># Add your pcloud token</span>
</code></pre>
<p>Use the command below to generate a token for pCloud and download it locally:</p>
<pre><code class="lang-powershell">rclone authorize <span class="hljs-string">"pcloud"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755228382430/768c126b-6a44-4a6d-b6f6-18d06e497a44.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755228252264/39d8c97e-afc1-4165-8c4d-15ef5572ad08.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-configure-aws-s3-remote">Configure AWS S3 Remote</h2>
<pre><code class="lang-powershell">rclone config
<span class="hljs-comment"># Choose: n (New remote)  </span>
<span class="hljs-comment"># Name: aws_s3 (this name is referenced by the backup script below)</span>
<span class="hljs-comment"># Storage: s3</span>
<span class="hljs-comment"># Provider: AWS</span>
<span class="hljs-comment"># Access Key ID: [Your IAM Access Key]</span>
<span class="hljs-comment"># Secret Access Key: [Your IAM Secret Key]</span>
<span class="hljs-comment"># Region: us-east-1</span>
<span class="hljs-comment"># Endpoint: [Leave blank]</span>
<span class="hljs-comment"># Location constraint: [Leave blank]</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755240085958/dd6913b2-4cdc-4805-84d7-351205fb877a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-verify-remotes">Verify Remotes</h2>
<pre><code class="lang-powershell">rclone listremotes
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755228301821/9f7629fc-52d2-4aee-b88f-cd2974f4be1d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-4-create-backup-amp-restore-script">💻 <strong>Step 4: Create Backup &amp; Restore Script</strong></h2>
<h2 id="heading-create-script-directory">Create the Script</h2>
<h3 id="heading-create-backuprestoresh">Create <mark>backup_restore.sh</mark></h3>
<pre><code class="lang-powershell">vim backup_restore.sh
</code></pre>
<h2 id="heading-script-content">Script Content</h2>
<pre><code class="lang-powershell"><span class="hljs-comment">#!/bin/bash</span>

<span class="hljs-comment"># Remotes</span>
PCLOUD_REMOTE=<span class="hljs-string">"pcloud:"</span>
S3_REMOTE=<span class="hljs-string">"aws_s3:mybucket1-apurv"</span>

<span class="hljs-comment"># Log files</span>
BACKUP_LOG=<span class="hljs-string">"/var/log/pcloud_to_s3_backup.log"</span>
RESTORE_LOG=<span class="hljs-string">"/var/log/s3_to_pcloud_restore.log"</span>

<span class="hljs-comment"># ----------------------------</span>
<span class="hljs-comment"># 1. Backup: pCloud → S3</span>
<span class="hljs-comment"># ----------------------------</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Starting backup from pCloud to S3..."</span>
rclone sync <span class="hljs-variable">$PCLOUD_REMOTE</span> <span class="hljs-variable">$S3_REMOTE</span> -<span class="hljs-literal">-log</span><span class="hljs-literal">-level</span>=ERROR &gt;&gt; <span class="hljs-variable">$BACKUP_LOG</span> <span class="hljs-number">2</span>&gt;&amp;<span class="hljs-number">1</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Backup completed. Log: <span class="hljs-variable">$BACKUP_LOG</span>"</span>

<span class="hljs-comment"># ----------------------------</span>
<span class="hljs-comment"># 2. Restore: S3 → pCloud</span>
<span class="hljs-comment"># ----------------------------</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Starting restore from S3 to pCloud..."</span>
rclone sync <span class="hljs-variable">$S3_REMOTE</span> <span class="hljs-variable">$PCLOUD_REMOTE</span> -<span class="hljs-literal">-log</span><span class="hljs-literal">-level</span>=ERROR &gt;&gt; <span class="hljs-variable">$RESTORE_LOG</span> <span class="hljs-number">2</span>&gt;&amp;<span class="hljs-number">1</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Restore completed. Log: <span class="hljs-variable">$RESTORE_LOG</span>"</span>
</code></pre>
<ul>
<li><p><strong>Remotes:</strong> <code>$PCLOUD_REMOTE</code> points at the pCloud account; <code>$S3_REMOTE</code> points at the AWS S3 bucket.</p>
</li>
<li><p><strong>Log files:</strong> <code>$BACKUP_LOG</code> stores backup output/errors; <code>$RESTORE_LOG</code> stores restore output/errors.</p>
</li>
<li><p><strong>Backup (pCloud → S3):</strong> <code>rclone sync</code> copies data from pCloud to S3, logging any errors.</p>
</li>
<li><p><strong>Restore (S3 → pCloud):</strong> <code>rclone sync</code> restores data back from S3 to pCloud; <code>--log-level=ERROR &gt;&gt; $RESTORE_LOG 2&gt;&amp;1</code> records errors only.</p>
</li>
<li><p><strong>Echo messages:</strong> print the start and completion of each phase to the terminal.</p>
</li>
</ul>
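<p>The <code>&gt;&gt; $LOG 2&gt;&amp;1</code> redirection is the part that makes the script quiet but auditable: it appends both stdout and stderr of the command to the log file. A safe, self-contained demonstration of the same pattern (a temp file stands in for the real <code>/var/log/...</code> paths):</p>

```shell
# Demonstrates the logging pattern the script relies on:
# ">> $LOG 2>&1" appends both stdout AND stderr to the log file.
LOG="$(mktemp)"
{ echo "simulated transfer note"; echo "simulated error" >&2; } >> "$LOG" 2>&1
wc -l < "$LOG"   # → 2 (both streams landed in the log)
```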
<h2 id="heading-make-script-executable">Make Script Executable</h2>
<pre><code class="lang-powershell">chmod +x backup_restore.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755230905535/18913b15-d347-45a3-ab0f-d401fe1072ad.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-5-manual-testing">🔄 <strong>Step 5: Manual Testing</strong></h2>
<h2 id="heading-test-backup">Test Backup</h2>
<pre><code class="lang-powershell">./backup_restore.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755230934271/55cdd965-1b8a-4401-add7-3d22a58b76f4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-6-automate-with-cron">⏱️ <strong>Step 6: Automate with Cron</strong></h2>
<h2 id="heading-open-crontab">Open Crontab</h2>
<pre><code class="lang-powershell">crontab <span class="hljs-literal">-e</span>
</code></pre>
<h2 id="heading-add-cron-job-every-1-minute">Add Cron Job (Every 1 minute)</h2>
<pre><code class="lang-powershell">*/<span class="hljs-number">1</span> * * * * /home/ubuntu/backup_restore.sh &gt;&gt; /var/log/backup_restore_cron.log <span class="hljs-number">2</span>&gt;&amp;<span class="hljs-number">1</span>
</code></pre>
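<p>For reference, the five schedule fields of that crontab entry break down as follows (standard crontab syntax, not specific to this setup):</p>

```text
# ┌──────── minute (*/1 → every minute)
# │ ┌────── hour (0-23)
# │ │ ┌──── day of month (1-31)
# │ │ │ ┌── month (1-12)
# │ │ │ │ ┌ day of week (0-7; 0 and 7 are both Sunday)
# │ │ │ │ │
*/1 * * * * /home/ubuntu/backup_restore.sh >> /var/log/backup_restore_cron.log 2>&1
```

<p>Every minute is aggressive for a real backup; once testing is done you may prefer something like <code>*/15 * * * *</code> or an hourly schedule.</p>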
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755231421527/546090b0-30ed-48a9-9bd3-e1b4c8392e2b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-a-list-files-in-homeubuntu-directory"><strong>A. List Files in</strong> <code>/home/ubuntu/</code> Directory</h3>
<pre><code class="lang-powershell"><span class="hljs-built_in">ls</span> <span class="hljs-literal">-l</span> /home/ubuntu/
</code></pre>
<p>Displays a detailed list of all files and directories inside <code>/home/ubuntu/</code>.</p>
<h3 id="heading-b-create-a-folder-named-myfiles"><strong>B. Create a Folder Named</strong> <code>myfiles</code></h3>
<pre><code class="lang-powershell">mkdir <span class="hljs-literal">-p</span> /home/ubuntu/myfiles
</code></pre>
<p>Creates a folder called <code>myfiles</code> inside <code>/home/ubuntu/</code>. The <code>-p</code> option ensures the command won’t give an error if the folder already exists.</p>
<h3 id="heading-c-create-test1txt-with-sample-content"><strong>C. Create</strong> <code>test1.txt</code> with Sample Content</h3>
<pre><code class="lang-powershell"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Test file"</span> &gt; /home/ubuntu/myfiles/test1.txt
</code></pre>
<p>Creates a file named <code>test1.txt</code> inside <code>myfiles</code> and writes <code>Test file</code> into it.</p>
<h3 id="heading-d-create-test2txt-with-sample-content"><strong>D. Create</strong> <code>test2.txt</code> with Sample Content</h3>
<pre><code class="lang-powershell"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Another file"</span> &gt; /home/ubuntu/myfiles/test2.txt
</code></pre>
<p>Creates a file named <code>test2.txt</code> inside <code>myfiles</code> and writes <code>Another file</code> into it.</p>
<h3 id="heading-e-copy-files-to-aws-s3-dry-run"><strong>E. Copy Files to AWS S3 (Dry Run)</strong></h3>
<pre><code class="lang-powershell">rclone <span class="hljs-built_in">copy</span> /home/ubuntu/myfiles aws_s3:mybucket1<span class="hljs-literal">-apurv</span> -<span class="hljs-literal">-dry</span><span class="hljs-literal">-run</span> -<span class="hljs-literal">-log</span><span class="hljs-literal">-level</span>=ERROR
</code></pre>
<p>Uses <strong>rclone</strong> to copy all files from <code>/home/ubuntu/myfiles</code> to the AWS S3 bucket named <code>mybucket1-apurv</code>.<br />The <code>--dry-run</code> option simulates the copy process without actually transferring the files.</p>
<h3 id="heading-f-sync-files-from-pcloud-to-aws-s3"><strong>F. Sync Files from pCloud to AWS S3</strong></h3>
<pre><code class="lang-powershell">rclone sync pcloud: aws_s3:mybucket1<span class="hljs-literal">-apurv</span> -<span class="hljs-literal">-log</span><span class="hljs-literal">-level</span>=ERROR
</code></pre>
<p>Uses <strong>rclone</strong> to sync files from the <strong>pCloud</strong> storage to the AWS S3 bucket <code>mybucket1-apurv</code>.<br /><code>--log-level=ERROR</code> shows only error messages during execution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755231592797/e3485b11-084a-4706-b2f0-28ea63d255b1.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-7-testing-amp-verification">🧪 <strong>Step 7: Testing &amp; Verification</strong></h2>
<h2 id="heading-case-study-1-existing-files">Case Study 1: Existing Files</h2>
<ul>
<li><p>✅ All existing PCloud files synced to S3</p>
</li>
<li><p>✅ Directory structure maintained</p>
</li>
<li><p>✅ File integrity preserved</p>
</li>
</ul>
<h2 id="heading-case-study-2-new-file-addition">Case Study 2: New File Addition</h2>
<ol>
<li><p>Add new file to PCloud</p>
</li>
<li><p>Wait 1 minute</p>
</li>
<li><p>Check S3 bucket → File automatically appears</p>
</li>
<li><p>Verify in logs</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755231806680/4310d27b-adf7-4bb3-b054-6604ef3d7398.png" alt class="image--center mx-auto" /></p>
<p>I uploaded <code>ad.jpg</code> to <strong>pCloud</strong> and removed a text file; the <strong>cron job automatically synced both changes</strong> to the <strong>S3 bucket</strong> within a minute. A screenshot of the automatic sync is attached below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755231837915/a58dd079-7a3f-4a52-b63e-3ef72c8613bf.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755231847653/3abae256-6ebf-412d-a140-74e57fe9626e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-challenges-amp-success-in-automating-secure-backup-from-pcloud-to-aws-s3">⚠️ Challenges &amp; ✅ Success in Automating Secure Backup from pCloud to AWS S3</h3>
<p><strong>Task Overview &amp; Challenges:</strong></p>
<ul>
<li><p>⏳ Faced <strong>major difficulties</strong> over 2 days while setting up backup and restore between pCloud and S3.</p>
</li>
<li><p>💾 <strong>Backup worked</strong>, but <strong>restore had issues</strong> initially.</p>
</li>
</ul>
<p><strong>Steps Taken &amp; Solutions:</strong></p>
<ul>
<li><p>📤 Uploaded data to <strong>pCloud</strong>, but cron job was not picking up <strong>latest changes</strong> immediately.</p>
</li>
<li><p>❌ Initially, <strong>latest files</strong> were missing in backup due to timing issues.</p>
</li>
<li><p>🔄 Switched to using <code>rclone sync</code>, which <strong>automatically updates S3</strong> whenever any change happens in pCloud.</p>
</li>
<li><p>⏱️ Cron job now <strong>syncs every minute</strong>, ensuring <strong>real-time backup</strong>.</p>
</li>
<li><p>✅ Tested with <code>rclone copy</code> to verify that the <strong>remote connection and permissions</strong> were correct.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755232281644/d1be1082-3d5d-4031-b260-f2aaea547af0.png" alt class="image--center mx-auto" /></p>
<p><strong>Results:</strong></p>
<ul>
<li><p>🛠️ After solving all errors, backup is now <strong>working perfectly</strong>.</p>
</li>
<li><p>🔒 All data is synced <strong>securely and automatically</strong> from pCloud → AWS S3.</p>
</li>
<li><p>📸 <strong>Screenshots attached</strong> show successful backup and automation in action.</p>
</li>
</ul>
<p><strong>Conclusion for the Challenges:</strong></p>
<ul>
<li><p>🤖 Automated backup with cron + rclone ensures <strong>highly secure, real-time sync</strong>.</p>
</li>
<li><p>✨ Reduces manual effort and <strong>guarantees up-to-date cloud storage</strong>.</p>
</li>
</ul>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<ul>
<li><p>✅ Learned how to <strong>automate secure cloud backup</strong> using <strong>pCloud, AWS S3, cron, and Rclone</strong>.</p>
</li>
<li><p>💡 Gained hands-on experience in <strong>troubleshooting errors, syncing data, and ensuring real-time updates</strong>.</p>
</li>
<li><p>🏆 Successfully set up a <strong>fully automated, secure, and reliable backup system</strong>.</p>
</li>
</ul>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀Automated Backup & Restore to AWS S3 using AWS CLI & Cron Jobs]]></title><description><![CDATA[📌 Introduction
Data loss can be costly whether due to accidental deletion, hardware failure, or system crashes . In this blog, we will set up an automated backup and restore system using AWS S3, AWS CLI, and cron jobs, starting completely from scrat...]]></description><link>https://apurv-gujjar.me/automated-backup-and-restore-to-aws-s3-using-aws-cli-and-cron-jobs</link><guid isPermaLink="true">https://apurv-gujjar.me/automated-backup-and-restore-to-aws-s3-using-aws-cli-and-cron-jobs</guid><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[cli]]></category><category><![CDATA[IAM]]></category><category><![CDATA[Linux]]></category><category><![CDATA[shell script]]></category><category><![CDATA[Backup]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Sun, 10 Aug 2025 10:24:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754821363554/a99d70f3-4986-4b88-bdc4-97f2a4a92669.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction"><strong>📌 Introduction</strong></h2>
<p>Data loss can be costly, whether due to accidental deletion, hardware failure, or a system crash. In this blog, we will <strong>set up an automated backup and restore system</strong> using <strong>AWS S3</strong>, <strong>AWS CLI</strong>, and <strong>cron jobs</strong>, starting completely from scratch.</p>
<p>You will learn:</p>
<ul>
<li><p>How to configure AWS CLI for S3</p>
</li>
<li><p>How to create a shell script for backup &amp; restore</p>
</li>
<li><p>How to automate backups using cron</p>
</li>
<li><p>How to verify backups in real-world scenarios</p>
</li>
</ul>
<h2 id="heading-agenda">🗂 <strong>Agenda</strong></h2>
<p>🛠 Prerequisites<br />📥 Install AWS CLI<br />👤 Create IAM User &amp; Configure CLI<br />📦 Create S3 Bucket<br />📂 Prepare Local Folder<br />🖥 Create Backup &amp; Restore Script<br />🔄 Backup &amp; Restore (Manual)<br />⏱ Automate with Cron<br />🧪 Case Studies (Existing Data + New File)<br />🏁 Conclusion</p>
<h2 id="heading-step-1-prerequisites"><strong>🛠 Step 1 – Prerequisites</strong></h2>
<p>You need:</p>
<ul>
<li><p>AWS Account</p>
</li>
<li><p>Ubuntu server / EC2 instance</p>
</li>
<li><p>AWS CLI installed</p>
</li>
<li><p>IAM User with S3 access</p>
</li>
<li><p>Basic Linux command knowledge</p>
</li>
</ul>
<h2 id="heading-step-2-install-aws-cli"><strong>📥 Step 2 – Install AWS CLI</strong></h2>
<pre><code class="lang-powershell">sudo apt update
sudo apt install <span class="hljs-literal">-y</span> unzip tar gzip
<span class="hljs-built_in">curl</span> <span class="hljs-string">"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"</span> <span class="hljs-literal">-o</span> <span class="hljs-string">"awscliv2.zip"</span>
unzip awscliv2.zip
sudo ./aws/install
aws -<span class="hljs-literal">-version</span>
</code></pre>
<h2 id="heading-step-3-create-iam-user-amp-configure-cli"><strong>👤 Step 3 – Create IAM User &amp; Configure CLI</strong></h2>
<ol>
<li><p>Go to <strong>AWS Console → IAM</strong></p>
</li>
<li><p>Create a user <code>backup-user</code> with <strong>Programmatic access</strong></p>
</li>
<li><p>Attach policy <code>AmazonS3FullAccess</code> (fine for testing; use a least-privilege policy in production)</p>
</li>
<li><p>Save <strong>Access Key</strong> &amp; <strong>Secret Key</strong></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754816114937/cacc0aa0-4b59-46e0-8641-0361ed37f1b3.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754816131822/e75657b6-e6dc-4387-a061-b3d7f6cf03db.png" alt class="image--center mx-auto" /></p>
<p>On Ubuntu, run:</p>
<pre><code class="lang-powershell">aws configure
</code></pre>
<p>Fill in:</p>
<ul>
<li><p>Access Key</p>
</li>
<li><p>Secret Key</p>
</li>
<li><p>Region (e.g. <code>us-east-1</code>)</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754818192609/73d1931e-557b-4dfd-9f4a-1d9f656df36a.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h2 id="heading-step-4-create-s3-bucket"><strong>📦 Step 4 – Create S3 Bucket</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754816305243/b9c0874d-7a2d-4cb5-b553-da2b53aa7646.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754816347267/b302e6ff-3a61-4adc-8bcc-8e0370008841.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-5-prepare-local-folder"><strong>📂 Step 5 – Prepare Local Folder</strong></h2>
<pre><code class="lang-powershell"><span class="hljs-built_in">cd</span> ~
mkdir <span class="hljs-literal">-p</span> mybackup
</code></pre>
<p>Add your files (HTML, CSS, JS, etc.) here.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754816371784/a6caa8fc-4116-43f0-b857-9e95c6a566cc.png" alt class="image--center mx-auto" /></p>
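<p>To seed the folder for testing, you can create a couple of sample files like this. <code>$HOME</code> is used so the snippet works for any user (on the EC2 instance in this post, <code>$HOME</code> is <code>/home/ubuntu</code>); the file names and contents are purely illustrative:</p>

```shell
# Create the backup folder and seed it with two sample files to sync.
BACKUP_DIR="$HOME/mybackup"
mkdir -p "$BACKUP_DIR"
echo "<h1>Hello from the S3 backup demo</h1>" > "$BACKUP_DIR/index.html"
echo "body { margin: 0; }" > "$BACKUP_DIR/style.css"
ls "$BACKUP_DIR"
```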
<h2 id="heading-step-6-create-backup-amp-restore-script"><strong>🖥 Step 6 – Create Backup &amp; Restore Script</strong></h2>
<pre><code class="lang-powershell">vim <span class="hljs-built_in">backup-to</span><span class="hljs-literal">-s3</span>.sh
</code></pre>
<pre><code class="lang-bash">#!/bin/bash

FOLDER_PATH="/home/ubuntu/mybackup"
BUCKET_NAME="s3-bucket-apurv-data"
LOG_FILE="/home/ubuntu/backup-log.txt"
DATE=$(date +"%Y-%m-%d %H:%M:%S")

if [ "$1" == "backup" ]; then
    echo "[$DATE] Backup started..." | tee -a "$LOG_FILE"
    aws s3 sync "$FOLDER_PATH" "s3://$BUCKET_NAME/" --exact-timestamps --sse AES256 &gt;&gt; "$LOG_FILE" 2&gt;&amp;1
    echo "[$DATE] Backup completed." | tee -a "$LOG_FILE"

elif [ "$1" == "restore" ]; then
    echo "[$DATE] Restore started..." | tee -a "$LOG_FILE"
    aws s3 sync "s3://$BUCKET_NAME/" "$FOLDER_PATH" --exact-timestamps &gt;&gt; "$LOG_FILE" 2&gt;&amp;1
    echo "[$DATE] Restore completed." | tee -a "$LOG_FILE"

else
    echo "Usage: $0 backup|restore"
fi
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754816553593/1c842408-0216-4fd8-9cc9-fb73632b7c0f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754816603761/513984db-3092-4132-8d97-654313fc26c6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-7-manual-backup-amp-restore"><strong>Step 7 – Manual Backup &amp; Restore</strong></h3>
<span class=&quot;hljs-comment&quot;># Make">
<pre><code class="lang-bash"># Make the script executable
chmod +x backup-to-s3.sh

# Take a backup
./backup-to-s3.sh backup

# Restore from backup
./backup-to-s3.sh restore
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754816711057/a6c2911a-5e79-4162-9a94-2e224daacc43.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-8-automate-with-cron"><strong>⏱ Step 8 – Automate with Cron</strong></h2>
crontab">
<pre><code class="lang-bash">crontab -e
</code></pre>
<p>Add:</p>
*/">
<pre><code class="lang-bash">*/2 * * * * /home/ubuntu/aws/backup-to-s3.sh backup
</code></pre>
<p>This cron job will automatically run the backup script every 2 minutes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754817159956/40054f71-54ec-43fd-8762-b3adb5473e80.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754817298204/a3870d4c-f4f1-498d-a2ba-a6d5204261f9.png" alt class="image--center mx-auto" /></p>
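<p>The two-minute interval above is handy for quick verification; production backups usually run on a calmer schedule. A few illustrative crontab entries (same script path as above, times are examples only):</p>

```
# minute hour day-of-month month day-of-week  command
0 2 * * *    /home/ubuntu/aws/backup-to-s3.sh backup   # daily at 02:00
0 * * * *    /home/ubuntu/aws/backup-to-s3.sh backup   # hourly, on the hour
30 1 * * 0   /home/ubuntu/aws/backup-to-s3.sh backup   # weekly, Sunday 01:30
```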
<h2 id="heading-step-9-case-studies"><strong>🧪 Step 9 – Case Studies</strong></h2>
<h3 id="heading-case-1-existing-data-in-bucket"><strong>📌 Case 1 – Existing Data in Bucket</strong></h3>
<p>When the bucket was created, it already contained:</p>
index.html">
<pre><code class="lang-plaintext">index.html
style.css
script.js
</code></pre>
<p>The backup script compared timestamps:</p>
<ul>
<li><p>Same files → Skipped</p>
</li>
<li><p>Updated files → Uploaded again</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754817434728/c6e80dce-dc06-4124-940b-e139cfc56864.png" alt class="image--center mx-auto" /></p>
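<p>The skip-vs-reupload decision above is a modification-time comparison, broadly similar to what GNU <code>cp -u</code> does locally. A self-contained sketch using throwaway temp paths (nothing AWS-specific is assumed):</p>

```shell
#!/usr/bin/env bash
# Demonstrate the "skip unchanged, copy updated" rule using file mtimes.
base=$(mktemp -d)
mkdir "$base/src" "$base/dst"

echo "v1" > "$base/src/index.html"
cp "$base/src/index.html" "$base/dst/"
touch -d '2010-01-01' "$base/dst/index.html"   # pin the destination mtime

# Source older than destination -> cp -u skips it, even though content differs.
echo "v2" > "$base/src/index.html"
touch -d '2000-01-01' "$base/src/index.html"
cp -u "$base/src/index.html" "$base/dst/"
cat "$base/dst/index.html"    # still v1

# Source newer than destination -> cp -u copies it.
touch "$base/src/index.html"
cp -u "$base/src/index.html" "$base/dst/"
cat "$base/dst/index.html"    # now v2
```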
<hr />
<h3 id="heading-case-2-adding-a-new-file"><strong>📌 Case 2 – Adding a New File</strong></h3>
<p>We added:</p>
<span class=&quot;hljs-built_in&quot;>echo</span>">
<pre><code class="lang-bash">echo "This is a test file" &gt; /home/ubuntu/mybackup/test.txt
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754817530900/5f10d49e-63d8-4848-9735-d95f62185f59.png" alt class="image--center mx-auto" /></p>
<p>✅ Check the total files and folders:</p>
<span class=&quot;hljs-built_in&quot;>ls</span>">
<pre><code class="lang-bash">ls
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754818405058/7ffd5d9b-08e3-46e2-b540-e43429d39be3.png" alt class="image--center mx-auto" /></p>
<p>Backup run:</p>
/home/ubuntu">
<pre><code class="lang-bash">/home/ubuntu/aws/backup-to-s3.sh backup
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754817574261/4fd33b2b-759c-482c-8df5-b925be989987.png" alt class="image--center mx-auto" /></p>
<p>Verification:</p>
aws s3">
<pre><code class="lang-bash">aws s3 ls s3://s3-bucket-apurv-data/ --recursive --human-readable --summarize
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754817614642/fdfa5099-1be2-40a9-8186-ab8803888e89.png" alt class="image--center mx-auto" /></p>
<p>Now <code>test.txt</code> appears in the bucket.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754817642424/7393ea9b-3a4f-4770-8d45-26644808cf7c.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Notice:</strong> The <strong>DIVP PROJECT/</strong> folder contained code files (HTML, CSS, Java).</div>
</div>

<h2 id="heading-conclusion"><strong>🏁 Conclusion</strong></h2>
<p>We successfully built an <strong>automated backup &amp; restore</strong> system with:</p>
<ul>
<li><p>AWS S3</p>
</li>
<li><p>AWS CLI</p>
</li>
<li><p>Cron automation</p>
</li>
<li><p>Logging &amp; encryption</p>
</li>
</ul>
<p>With this setup, your data is <strong>secure</strong>, <strong>versioned</strong>, and <strong>restorable</strong> anytime.</p>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt /></p>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[“🌐🌟Beginner’s Guide to Load Balancing on AWS Using NGINX and ALB”]]></title><description><![CDATA[🔰 Introduction
In this guide, I’ll walk you through how I deployed a basic NGINX web server setup using AWS services like EC2, ALB, Subnets, and VPC. Each server displays a simple custom message and the traffic is distributed using an Application Lo...]]></description><link>https://apurv-gujjar.me/alb-part-1</link><guid isPermaLink="true">https://apurv-gujjar.me/alb-part-1</guid><category><![CDATA[ec2]]></category><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[ELB]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Mon, 04 Aug 2025 10:53:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754304250774/d804600f-6c0f-4e98-ba9c-62ec5d0371f0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">🔰 Introduction</h3>
<p>In this guide, I’ll walk you through how I deployed a basic NGINX web server setup using AWS services like EC2, ALB, Subnets, and VPC. Each server displays a simple custom message, and traffic is distributed using an Application Load Balancer. This small project is a hands-on way to understand how load balancing works in AWS and how to connect multiple EC2 instances behind an ALB.</p>
<h2 id="heading-index-table-of-contents">📑 <strong>Index / Table of Contents</strong></h2>
<ol>
<li><p><strong>Introduction</strong><br />  – Brief about the project and objective</p>
</li>
<li><p><strong>Architecture Overview</strong><br />  – Visual diagram<br />  – Basic AWS services used</p>
</li>
<li><p><strong>VPC and Subnet Setup</strong><br />  – Create custom VPC<br />  – Create 3 public subnets (in 3 AZs)<br />  – Enable auto-assign public IP</p>
</li>
<li><p><strong>Internet Gateway and Routing</strong><br />  – Create and attach IGW<br />  – Route Table setup and association with subnets</p>
</li>
<li><p><strong>Security Groups Configuration</strong><br />  – ALB Security Group<br />  – EC2 Security Group</p>
</li>
<li><p><strong>Launching EC2 Instances</strong><br />  – 3 Ubuntu instances<br />  – Subnet &amp; AZ mapping<br />  – Key pair and public IPs</p>
</li>
<li><p><strong>Installing NGINX on EC2</strong><br />  – Commands to install &amp; start NGINX<br />  – Enable on boot</p>
</li>
<li><p><strong>Customizing Web Content</strong><br />  – Unique message per server (<code>Server 1</code>, <code>Server 2</code>, <code>Server 3</code>)</p>
</li>
<li><p><strong>Creating Application Load Balancer (ALB)</strong><br />  – ALB across all 3 subnets<br />  – Target group (port 80)<br />  – Health checks &amp; registration</p>
</li>
<li><p><strong>Testing the Setup</strong><br /> – Access via ALB DNS<br /> – Load balancing behavior</p>
</li>
<li><p><strong>Conclusion</strong></p>
</li>
</ol>
<h2 id="heading-1-introduction">🧩 <strong>1. Introduction</strong></h2>
<p>In this small project, I worked on deploying a simple web server setup using <strong>NGINX on EC2 instances</strong> behind an <strong>Application Load Balancer (ALB)</strong> in AWS. The goal was to understand how to:</p>
<ul>
<li><p>Launch EC2 instances across multiple subnets</p>
</li>
<li><p>Install and configure NGINX on Ubuntu servers</p>
</li>
<li><p>Set up an ALB to distribute incoming traffic</p>
</li>
<li><p>Serve custom responses from each EC2 to test load balancing</p>
</li>
</ul>
<p>Each server was placed in a different <strong>Availability Zone</strong> and returned a different message like “Server 1”, “Server 2”, and “Server 3” to help visualize ALB routing.</p>
<p>This project is ideal for beginners who want hands-on experience with core AWS services like <strong>EC2, VPC, Subnets, IGW, Security Groups</strong>, and <strong>ALB</strong>.</p>
<h2 id="heading-2-architecture-overview">🗺️ <strong>2. Architecture Overview</strong></h2>
<p>This project is built using basic yet important AWS services that work together to host a simple web application and distribute traffic across multiple servers.</p>
<hr />
<h3 id="heading-main-aws-services-used">🧱 <strong>Main AWS Services Used:</strong></h3>
<ul>
<li><p><strong>VPC</strong> – Custom Virtual Private Cloud to isolate the network</p>
</li>
<li><p><strong>Subnets</strong> – 3 public subnets across different Availability Zones (AZs)</p>
</li>
<li><p><strong>Internet Gateway (IGW)</strong> – Allows internet access to instances</p>
</li>
<li><p><strong>Route Table</strong> – Connects subnets to the IGW for public access</p>
</li>
<li><p><strong>EC2 Instances (Ubuntu)</strong> – Hosts NGINX and serves static content</p>
</li>
<li><p><strong>Security Groups</strong> – Control inbound/outbound traffic to ALB and EC2</p>
</li>
<li><p><strong>Application Load Balancer (ALB)</strong> – Distributes HTTP traffic to EC2s</p>
</li>
<li><p><strong>Target Group</strong> – Links ALB to backend EC2 instances</p>
</li>
</ul>
<hr />
<h3 id="heading-architecture-diagram">🖼️ <strong>Architecture Diagram:</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754301478647/9986f8ca-9273-4755-8ccb-5ea91f375ef4.png" alt class="image--center mx-auto" /></p>
<p>This setup ensures that any request to the ALB is routed to one of the EC2 instances in a round-robin fashion. You can test this by hitting the ALB DNS in a browser and seeing different responses.</p>
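<p>Stripped of the AWS machinery, round-robin selection is just an index cycling over the registered targets. A purely illustrative bash sketch:</p>

```shell
#!/usr/bin/env bash
# Round-robin target selection, reduced to its core: index modulo target count.
targets=("Server 1" "Server 2" "Server 3")
for request in 0 1 2 3 4 5; do
  echo "request $request -> ${targets[$(( request % ${#targets[@]} ))]}"
done
```

Six consecutive requests cycle through the three targets twice, which is exactly the alternating responses you see when refreshing the ALB DNS.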
<h2 id="heading-3-vpc-and-subnet-setup">🌐 <strong>3. VPC and Subnet Setup</strong></h2>
<p>The first step in deploying any AWS infrastructure is creating a <strong>VPC (Virtual Private Cloud)</strong>, which acts like your private network in the cloud.</p>
<p>In this setup, we create one VPC and divide it into 3 <strong>public subnets</strong>, each in a different <strong>Availability Zone (AZ)</strong> for better availability and distribution.</p>
<hr />
<h3 id="heading-31-create-custom-vpc">🧱 <strong>3.1 Create Custom VPC</strong></h3>
<ol>
<li><p>Go to the <strong>VPC Dashboard</strong> in AWS.</p>
</li>
<li><p>Click <strong>“Create VPC”</strong> and choose <strong>VPC only</strong>.</p>
</li>
<li><p>Enter:</p>
<ul>
<li><p><strong>Name:</strong> <code>server-vpc</code></p>
</li>
<li><p><strong>IPv4 CIDR block:</strong> <code>10.0.0.0/16</code></p>
</li>
<li><p>Leave IPv6 and other options as default.</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create VPC</strong>.</p>
</li>
</ol>
<hr />
<h3 id="heading-32-create-3-public-subnets">🌍 <strong>3.2 Create 3 Public Subnets</strong></h3>
<p>Now create three subnets in <strong>different AZs</strong>:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Subnet Name</td><td>AZ</td><td>CIDR Block</td></tr>
</thead>
<tbody>
<tr>
<td>Subnet-1</td><td>ap-south-1a</td><td><code>10.0.1.0/24</code></td></tr>
<tr>
<td>Subnet-2</td><td>ap-south-1b</td><td><code>10.0.2.0/24</code></td></tr>
<tr>
<td>Subnet-3</td><td>ap-south-1c</td><td><code>10.0.3.0/24</code></td></tr>
</tbody>
</table>
</div><p>For each subnet:</p>
<ol>
<li><p>Go to <strong>Subnets → Create subnet</strong></p>
</li>
<li><p>Select the VPC you created.</p>
</li>
<li><p>Assign <strong>AZ</strong>, <strong>name</strong>, and <strong>CIDR</strong>.</p>
</li>
<li><p>✅ <strong>Enable Auto-assign public IPv4</strong> for each subnet (important for public access).</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754301663158/ebbe32c9-4073-47b4-9fb8-61c58a213a99.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754301674475/29b7c6be-884c-417e-9529-d6f53e9884b3.png" alt class="image--center mx-auto" /></p>
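<p>As a quick sanity check on the sizing above: a <code>/24</code> provides 256 addresses, and AWS reserves five of them in every subnet (network address, VPC router, DNS, one reserved for future use, and broadcast), leaving 251 usable. The arithmetic in shell:</p>

```shell
# Address math for a subnet prefix; AWS reserves 5 IPs per subnet.
prefix=24
total=$(( 2 ** (32 - prefix) ))
usable=$(( total - 5 ))
echo "/$prefix -> $total addresses, $usable usable in AWS"
```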
<h2 id="heading-4-internet-gateway-and-routing">🌐 <strong>4. Internet Gateway and Routing</strong></h2>
<p>To allow EC2 instances in public subnets to connect to the internet (for updates, package installs, etc.), we need two things:</p>
<ul>
<li><p>An <strong>Internet Gateway (IGW)</strong></p>
</li>
<li><p>A <strong>Route Table</strong> that connects subnets to the IGW</p>
</li>
</ul>
<hr />
<h3 id="heading-41-create-and-attach-internet-gateway-igw">🔌 <strong>4.1 Create and Attach Internet Gateway (IGW)</strong></h3>
<ol>
<li><p>Go to <strong>VPC Dashboard → Internet Gateways</strong></p>
</li>
<li><p>Click <strong>Create Internet Gateway</strong></p>
<ul>
<li><strong>Name:</strong> <code>server-igw</code></li>
</ul>
</li>
<li><p>Once created, click <strong>Actions → Attach to VPC</strong></p>
<ul>
<li><p>Select your VPC (<code>server-vpc</code>)</p>
</li>
<li><p>Click <strong>Attach</strong></p>
</li>
</ul>
</li>
</ol>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754301822981/47c1aa9f-9e8f-474e-aba9-a7234d6f8ff9.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-42-create-route-table-for-public-access">🧭 <strong>4.2 Create Route Table for Public Access</strong></h3>
<ol>
<li><p>Go to <strong>Route Tables → Create route table</strong></p>
<ul>
<li><p><strong>Name:</strong> <code>Public-RT</code></p>
</li>
<li><p>Select your VPC (<code>server-vpc</code>)</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong></p>
</li>
</ol>
<hr />
<h3 id="heading-43-add-route-to-igw">➕ <strong>4.3 Add Route to IGW</strong></h3>
<ol>
<li><p>Open <code>Public-RT</code> → Go to <strong>Routes → Edit routes</strong></p>
</li>
<li><p>Click <strong>Add route</strong></p>
<ul>
<li><p><strong>Destination:</strong> <code>0.0.0.0/0</code></p>
</li>
<li><p><strong>Target:</strong> Select your <strong>Internet Gateway (server-igw)</strong></p>
</li>
</ul>
</li>
<li><p>Click <strong>Save changes</strong></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754301860701/bb54e52c-d8d4-4b2e-983f-3941b479c5c1.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-44-associate-route-table-to-public-subnets">🔗 <strong>4.4 Associate Route Table to Public Subnets</strong></h3>
<ol>
<li><p>Open <code>Public-RT</code> → Go to <strong>Subnet Associations</strong></p>
</li>
<li><p>Click <strong>Edit subnet associations</strong></p>
</li>
<li><p>Select all 3 public subnets:</p>
<ul>
<li><p><code>Subnet-1 (10.0.1.0/24)</code></p>
</li>
<li><p><code>Subnet-2 (10.0.2.0/24)</code></p>
</li>
<li><p><code>Subnet-3 (10.0.3.0/24)</code></p>
</li>
</ul>
</li>
<li><p>Save</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754301904545/00d56c89-8425-4b83-82d5-843076133cad.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-5-security-groups-configuration">🔐 <strong>5. Security Groups Configuration</strong></h2>
<p>Security Groups (SGs) in AWS act like <strong>virtual firewalls</strong> that control <strong>inbound and outbound traffic</strong> to your EC2 instances and Load Balancer.<br />We’ll create <strong>two separate SGs</strong> — one for the ALB and one for the EC2 instances.</p>
<hr />
<h3 id="heading-51-create-alb-security-group">🛡️ <strong>5.1 Create ALB Security Group</strong></h3>
<p>This SG allows the ALB to accept incoming HTTP requests from the internet.</p>
<ol>
<li><p>Go to <strong>EC2 → Security Groups → Create Security Group</strong></p>
</li>
<li><p>Name: <code>ALB-SG</code></p>
</li>
<li><p>Description: <code>Allows HTTP access from anywhere</code></p>
</li>
<li><p>VPC: Select <code>server-vpc</code></p>
</li>
<li><p>Add Inbound Rule:</p>
<ul>
<li><p><strong>Type:</strong> HTTP</p>
</li>
<li><p><strong>Port:</strong> 80</p>
</li>
<li><p><strong>Source:</strong> Anywhere (<code>0.0.0.0/0</code>)</p>
</li>
</ul>
</li>
<li><p>Leave outbound as default</p>
</li>
<li><p>Click <strong>Create Security Group</strong></p>
</li>
</ol>
<hr />
<h3 id="heading-52-create-ec2-security-group">🖥️ <strong>5.2 Create EC2 Security Group</strong></h3>
<p>This SG allows EC2 instances to accept traffic <strong>only from the ALB</strong>, not directly from the internet.</p>
<ol>
<li><p>Create another SG: <code>EC2-SG</code></p>
</li>
<li><p>Description: <code>Allows HTTP from ALB only</code></p>
</li>
<li><p>VPC: Select <code>server-vpc</code></p>
</li>
<li><p>Add Inbound Rule:</p>
<ul>
<li><p><strong>Type:</strong> HTTP</p>
</li>
<li><p><strong>Port:</strong> 80</p>
</li>
<li><p><strong>Source:</strong> Custom</p>
</li>
<li><p><strong>Select ALB-SG</strong> as source (it will show in the dropdown)</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create Security Group</strong></p>
</li>
</ol>
<hr />
<h3 id="heading-attach-sgs-to-resources">🔄 <strong>Attach SGs to Resources</strong></h3>
<ul>
<li><p>While creating the <strong>ALB</strong>, attach <code>ALB-SG</code></p>
</li>
<li><p>While launching each <strong>EC2 instance</strong>, attach <code>EC2-SG</code></p>
</li>
</ul>
<h2 id="heading-6-launching-ec2-instances">💻 <strong>6. Launching EC2 Instances</strong></h2>
<p>Now that our networking and security are in place, it's time to launch <strong>3 EC2 instances</strong>, one in each public subnet, to host our NGINX web servers.</p>
<p>We’ll use <strong>Ubuntu 22.04</strong> as the OS for simplicity and compatibility.</p>
<hr />
<h3 id="heading-61-launch-ec2-instances-repeat-3-times">🔸 <strong>6.1 Launch EC2 Instances (Repeat 3 Times)</strong></h3>
<ol>
<li><p>Go to <strong>EC2 → Instances → Launch Instance</strong></p>
</li>
<li><p>Enter:</p>
<ul>
<li><p><strong>Name:</strong> <code>Server-1</code> (change to Server-2 and Server-3 for others)</p>
</li>
<li><p><strong>AMI:</strong> Ubuntu Server 22.04 LTS (64-bit)</p>
</li>
<li><p><strong>Instance type:</strong> <code>t2.micro</code> (free tier eligible)</p>
</li>
</ul>
</li>
<li><p><strong>Key Pair:</strong> Select an existing one or create a new one<br /> <em>(Make sure to download and keep the <code>.pem</code> file safe.)</em></p>
</li>
</ol>
<hr />
<h3 id="heading-62-network-settings-per-instance">📍 <strong>6.2 Network Settings per Instance</strong></h3>
<p>For each instance:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Server Name</td><td>Subnet</td><td>Availability Zone</td></tr>
</thead>
<tbody>
<tr>
<td>Server-1</td><td><code>Subnet-1</code></td><td><code>ap-south-1a</code></td></tr>
<tr>
<td>Server-2</td><td><code>Subnet-2</code></td><td><code>ap-south-1b</code></td></tr>
<tr>
<td>Server-3</td><td><code>Subnet-3</code></td><td><code>ap-south-1c</code></td></tr>
</tbody>
</table>
</div><ul>
<li><p><strong>Auto-assign Public IP:</strong> Enabled ✅</p>
</li>
<li><p><strong>Security Group:</strong> Attach the <code>EC2-SG</code> you created earlier</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754302298520/23adbd07-edce-4968-a609-19c574dab3d6.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-63-launch-all-instances">🟢 <strong>6.3 Launch All Instances</strong></h3>
<ol>
<li><p>Launch each instance with its respective subnet</p>
</li>
<li><p>Wait for <strong>running status</strong></p>
</li>
<li><p>Copy the <strong>public IP</strong> of each instance (we’ll use them to SSH and install NGINX)</p>
</li>
</ol>
<h2 id="heading-7-installing-nginx-on-ec2">🔧 <strong>7. Installing NGINX on EC2</strong></h2>
<p>After your EC2 instances are running, SSH into each one and install NGINX:</p>
<span class=&quot;hljs-comment&quot;># Update">
<pre><code class="lang-bash"># Update and install NGINX
sudo apt update -y
sudo apt install nginx -y

# Start NGINX and enable on boot
sudo systemctl start nginx
sudo systemctl enable nginx
</code></pre>
<p>Repeat the above steps on all 3 EC2 instances.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754302494147/235a8b2a-3e46-4099-b43c-dc64e14bd10a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754302524717/9fdb2180-242d-4d4c-b251-92282bd0d6ee.png" alt class="image--center mx-auto" /></p>
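<p>Rather than repeating the install by hand on each server, you can generate the three SSH invocations in a loop. The key name and the <code>203.0.113.x</code> addresses below are placeholders; substitute your own key file and each instance's public IP:</p>

```shell
# Print the per-server install command for each instance's public IP.
# 203.0.113.x are documentation placeholders -- replace with your real IPs.
for ip in 203.0.113.10 203.0.113.11 203.0.113.12; do
  echo "ssh -i mykey.pem ubuntu@$ip 'sudo apt update -y && sudo apt install -y nginx'"
done
```

Pipe the output to a shell (or run each line) once you have verified the commands look right.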
<hr />
<h2 id="heading-8-customizing-web-content">🎨 <strong>8. Customizing Web Content</strong></h2>
<p>We want each EC2 instance to return a unique message (e.g., Server 1, Server 2, etc.) so we can test the ALB behavior.</p>
<h3 id="heading-on-server-1">On Server 1:</h3>
<span class=&quot;hljs-built_in&quot;>echo</span>">
<pre><code class="lang-bash">echo "Server 1" | sudo tee /var/www/html/index.nginx-debian.html
</code></pre>
<h3 id="heading-on-server-2">On Server 2:</h3>
<span class=&quot;hljs-built_in&quot;>echo</span>">
<pre><code class="lang-bash">echo "Server 2" | sudo tee /var/www/html/index.nginx-debian.html
</code></pre>
<h3 id="heading-on-server-3">On Server 3:</h3>
<span class=&quot;hljs-built_in&quot;>echo</span>">
<pre><code class="lang-bash">echo "Server 3" | sudo tee /var/www/html/index.nginx-debian.html
</code></pre>
<hr />
<h2 id="heading-9-creating-application-load-balancer-alb">⚙️ <strong>9. Creating Application Load Balancer (ALB)</strong></h2>
<p>Now we create an <strong>Application Load Balancer</strong> that will distribute incoming traffic across the 3 servers.</p>
<h3 id="heading-alb-setup">🔹 ALB Setup:</h3>
<ol>
<li><p>Go to <strong>EC2 → Load Balancers → Create Load Balancer</strong></p>
</li>
<li><p>Choose <strong>Application Load Balancer</strong></p>
</li>
<li><p>Enter:</p>
<ul>
<li><p><strong>Name:</strong> <code>server-alb</code></p>
</li>
<li><p><strong>Scheme:</strong> Internet-facing</p>
</li>
<li><p><strong>Listeners:</strong> HTTP (Port 80)</p>
</li>
</ul>
</li>
<li><p><strong>Availability Zones</strong>: Select your VPC and the 3 subnets:</p>
<ul>
<li><p>Subnet-1 (ap-south-1a)</p>
</li>
<li><p>Subnet-2 (ap-south-1b)</p>
</li>
<li><p>Subnet-3 (ap-south-1c)</p>
</li>
</ul>
</li>
<li><p>Attach <strong>ALB-SG</strong> as the security group.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754302553431/813ae09a-f884-46db-b9f8-3c017108ac35.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-target-group-setup">🔸 Target Group Setup:</h3>
<ol>
<li><p>Create a new <strong>Target Group</strong></p>
<ul>
<li><p><strong>Target type:</strong> Instances</p>
</li>
<li><p><strong>Protocol:</strong> HTTP</p>
</li>
<li><p><strong>Port:</strong> 80</p>
</li>
</ul>
</li>
<li><p>Register all 3 EC2 instances</p>
</li>
<li><p>Keep health checks as default (or use <code>/</code>)</p>
</li>
</ol>
<p>Once done, the ALB will start running and routing traffic.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754302611516/ebd282ef-6ace-4e7a-b5ea-98e5177faa59.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-10-testing-the-setup">🔍 <strong>10. Testing the Setup</strong></h2>
<p>After your ALB status is <strong>active</strong>:</p>
<ol>
<li><p>Go to <strong>Load Balancers → Select your ALB</strong></p>
</li>
<li><p>Copy the <strong>DNS name</strong> (e.g., <a target="_blank" href="http://my-alb-123456789.ap-south-1.elb.amazonaws.com"><code>my-alb-123456789.ap-south-1.elb.amazonaws.com</code></a>)</p>
</li>
<li><p>Open it in your browser:</p>
</li>
</ol>
http">
<pre><code class="lang-plaintext">http://&lt;your-alb-dns&gt;
</code></pre>
<p>🌀 Refresh multiple times — you should see:</p>
<ul>
<li><p>Server 1</p>
</li>
<li><p>Server 2</p>
</li>
<li><p>Server 3</p>
</li>
</ul>
<p>This confirms <strong>round-robin load balancing</strong> is working across your EC2 instances.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754302670183/6275b256-bf82-4e32-88c9-2ebf40159048.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754302684340/4a2a48ab-6b46-4d68-a3c7-b7441df70aaf.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754302691303/fa2355e3-7dca-4727-b2c3-4600b528e1aa.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">✅ <strong>Conclusion</strong></h2>
<p>In this hands-on project, we successfully created a load-balanced web server architecture on AWS using core services like <strong>VPC, EC2, Subnets, Security Groups</strong>, and an <strong>Application Load Balancer (ALB)</strong>.</p>
<p>Each EC2 instance ran <strong>NGINX</strong>, hosted a unique web page, and was deployed in a separate <strong>Availability Zone</strong>, ensuring high availability and better traffic distribution. The ALB handled incoming traffic and distributed it across the servers in a <strong>round-robin</strong> fashion — a basic yet powerful demonstration of load balancing on AWS.</p>
<p>This project gave me a practical understanding of:</p>
<ul>
<li><p>How networking works in AWS (VPC, Subnets, IGW, Routing)</p>
</li>
<li><p>How to configure EC2 and secure them with Security Groups</p>
</li>
<li><p>How to use ALB to manage and balance web traffic</p>
</li>
<li><p>How to test and validate a real cloud setup end-to-end</p>
</li>
</ul>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt /></p>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Seamless Data Transfer via Rsync Between  2 Different EC2 Instances Mounted with EBS Volumes]]></title><description><![CDATA[🎯 Aim:
To securely transfer files from one EC2 instance to another using the rsync utility, where data resides on an attached and mounted EBS volume, while preserving permissions and ensuring efficiency.
📌 Overview:
In this task, we used two EC2 in...]]></description><link>https://apurv-gujjar.me/ebs-rsync-task</link><guid isPermaLink="true">https://apurv-gujjar.me/ebs-rsync-task</guid><category><![CDATA[AWS]]></category><category><![CDATA[rsync]]></category><category><![CDATA[ec2]]></category><category><![CDATA[ebs snapshots]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Mon, 21 Jul 2025 16:52:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753116643206/c8fbf6f3-d0ff-4a5b-afa4-39a3913dbd26.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-aim">🎯 <strong>Aim:</strong></h2>
<p>To <strong>securely transfer files</strong> from one EC2 instance to another using the <code>rsync</code> utility, where data resides on an <strong>attached and mounted EBS volume</strong>, while preserving permissions and ensuring efficiency.</p>
<h2 id="heading-overview">📌 <strong>Overview:</strong></h2>
<p>In this task, we used <strong>two EC2 instances</strong>, each with its own <strong>EBS volume</strong> mounted at <code>/mydata</code>. Our goal was to transfer the contents from <strong>EC2 Instance 1</strong> to <strong>EC2 Instance 2</strong> using <code>rsync</code>, excluding system files like <code>lost+found</code>.</p>
<p>We ensured:</p>
<ul>
<li><p>🔐 Secure SSH key-based connection</p>
</li>
<li><p>🚫 Proper exclusion of unnecessary system folders</p>
</li>
<li><p>✅ Verified and successful file transfer</p>
</li>
<li><p>🛠️ Fixed permission issues</p>
</li>
</ul>
<h2 id="heading-infrastructure-setup">🧱 <strong>Infrastructure Setup:</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>Details</td></tr>
</thead>
<tbody>
<tr>
<td>EC2 Instance 1</td><td>Source, with EBS mounted at <code>/mydata1</code></td></tr>
<tr>
<td>EC2 Instance 2</td><td>Destination, with EBS mounted at <code>/mydata2</code></td></tr>
<tr>
<td>SSH Key</td><td><code>your-key.pem</code></td></tr>
<tr>
<td>File Transferred</td><td><code>ebs-test.txt</code></td></tr>
<tr>
<td>Tool Used</td><td><code>rsync</code></td></tr>
</tbody>
</table>
</div><h2 id="heading-step-by-step-execution">🔧 Step-by-Step Execution</h2>
<h3 id="heading-step-1-create-ec2-instances">Step 1️⃣: Create EC2 Instances</h3>
<ul>
<li><p>Launch two EC2 instances (Source &amp; Target) in the same VPC/Subnet for easier SSH connection.</p>
</li>
<li><p>Use the <strong>Ubuntu AMI</strong> and make sure both have access to the same key pair (e.g., <code>xyz.pem</code>).</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753112665027/92e91cef-d567-4622-be51-283081c95fae.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-create-and-attach-ebs-volume-to-source-ec2">Step 2️⃣: Create and Attach EBS Volume to Source EC2</h3>
<ul>
<li><p>Go to <strong>Elastic Block Store &gt; Volumes</strong></p>
</li>
<li><p>Create two new EBS volumes (e.g., 1 GiB each, in the same AZ as the EC2 instance each will attach to)</p>
</li>
<li><p>Select the volume → <strong>Actions &gt; Attach volume</strong> → Choose your Source EC2</p>
</li>
<li><p>Device name will be something like <code>/dev/xvdh</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753112731181/680879fc-ab01-4ffd-a33b-024cfbc2cd20.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3-mount-ebs-volume-in-source-amp-destination-both-ec2">Step 3️⃣: Mount EBS Volume in Source &amp; Destination Both EC2</h3>
<blockquote>
<h3 id="heading-its-not-mandatory-that-both-source-and-destination-have-ebs-volumes-attached"><strong><mark>✅ Note: it's not mandatory that <em>both source and destination</em> have EBS volumes attached.</mark></strong></h3>
</blockquote>
<p>SSH into Source &amp; Destination EC2 and run:</p>
<pre><code class="lang-powershell">lsblk      <span class="hljs-comment"># Check that /dev/xvdh is visible</span>
sudo mkfs <span class="hljs-literal">-t</span> ext4 /dev/xvdh    <span class="hljs-comment"># Format the volume (erases any existing data)</span>
sudo mkdir /mydata           <span class="hljs-comment"># Create a mount point</span>
sudo <span class="hljs-built_in">mount</span> /dev/xvdh /mydata <span class="hljs-comment"># Mount the volume</span>
</code></pre>
<p>✅ Now the volume is ready to store data.</p>
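<p>💡 Optional but worth knowing: the <code>mount</code> command above does not survive a reboot. A sketch of making the mount persistent via <code>/etc/fstab</code> (the device name and mount point match the ones used above; confirm yours with <code>lsblk</code> before editing):</p>

```shell
# Append an fstab entry so /dev/xvdh is remounted at /mydata on every boot
echo '/dev/xvdh /mydata ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab

# Apply fstab now to confirm the entry parses before rebooting
sudo mount -a
```

<p>The <code>nofail</code> option keeps the instance bootable even if the volume is later detached.</p>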
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753112973418/bb5792ec-0b27-4c61-b622-cd7d54f7f776.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753113000510/aa36ea2f-491d-4c01-b374-7b2fe452b1f8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4-create-sample-data">Step 4️⃣: Create Sample Data</h3>
<p>Write a test file to check transfer:</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">cd</span> /mydata
<span class="hljs-built_in">echo</span> <span class="hljs-string">"This is task for EBS-Task performed by Apurv Gujjar"</span> &gt; ebs<span class="hljs-literal">-test</span>.txt
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753113361159/f67115e5-9279-4489-af0d-586813d7f9fe.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5-change-ownership-of-mounted-ebs-volume">🔄 Step 5️⃣: Change Ownership of Mounted EBS Volume 👑</h3>
<p>After mounting your EBS volume (e.g., at <code>/mydata</code>), by default it's owned by the <code>root</code> user. This can block the <code>ubuntu</code> user from accessing or writing files — especially important for tools like <code>rsync</code>.</p>
<h4 id="heading-aim-1">🎯 <strong>Aim:</strong></h4>
<p>Grant full ownership of the mounted volume to the <code>ubuntu</code> user so it can use the directory without permission issues.</p>
<h4 id="heading-command">🛠️ <strong>Command:</strong></h4>
<pre><code class="lang-powershell">sudo chown <span class="hljs-literal">-R</span> ubuntu:ubuntu /mydata
</code></pre>
<blockquote>
<p>🔹 <code>-R</code>: Applies changes recursively<br />🔹 <code>ubuntu:ubuntu</code>: Sets user and group ownership<br />🔹 <code>/mydata</code>: Path to your mounted volume (adjust if different)</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753113728139/e9e4247b-7d69-4d4c-ac49-c78f812ffd44.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753113763697/0be7db35-824c-463d-b129-60844a28d847.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-why-its-important">✅ <strong>Why it’s Important:</strong></h4>
<p>Without changing ownership, you may get <code>Permission Denied</code> errors during file transfers. This step ensures smooth read/write access for the EC2 instance’s main user.</p>
<h3 id="heading-step-6-add-private-key-to-source-ec2-for-remote-access-via-rsync">🔐 Step 6️⃣: Add Private Key to Source EC2 for Remote Access via Rsync</h3>
<p>To allow the <strong>source EC2 instance</strong> to securely connect to the <strong>destination EC2 instance</strong> using <code>rsync</code>, you need to place the <strong>destination EC2's</strong> <code>.pem</code> file inside the source EC2.</p>
<h4 id="heading-why-this-is-needed">🎯 Why This Is Needed?</h4>
<p>The <strong>source</strong> initiates the rsync connection over SSH to the <strong>destination</strong>. Therefore, the <strong>source EC2 needs the destination's private key</strong> to authenticate and establish the secure connection.</p>
<hr />
<h4 id="heading-step-by-step">🛠️ Step-by-Step:</h4>
<ol>
<li><p><strong>Create a new file to store your destination’s private key</strong>:</p>
<pre><code class="lang-plaintext"> vim [your-key-name.pem]
</code></pre>
</li>
<li><p><strong>Paste your destination EC2’s private key</strong> (from the <code>.pem</code> file you downloaded while creating destination EC2).</p>
</li>
<li><p><strong>Save and exit</strong> (<code>Esc + :wq</code> in vim).</p>
</li>
<li><p><strong>Secure the key by updating its permissions</strong>:</p>
<pre><code class="lang-plaintext"> chmod 400 [your-key-name.pem]
</code></pre>
</li>
</ol>
<p>✅ Now, your <strong>source EC2</strong> can securely connect to the <strong>destination EC2</strong> using this key, enabling <code>rsync</code> to transfer files smoothly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753113694074/1246153b-4edc-4da3-a8c0-d757e6aeea75.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-7-rsync-command-execution-from-ec2-1-source">⚙️ <strong>Step 7️⃣: Rsync Command Execution from EC2-1 (Source)</strong></h2>
<p>📍 On <strong>EC2-1</strong>, run the following command to transfer the file to <strong>EC2-2</strong> using <code>rsync</code> over SSH:</p>
<pre><code class="lang-plaintext"> rsync -avz --exclude 'lost+found' -e "ssh -i /home/ubuntu/corextech.pem" /mydata/ ubuntu@172.31.0.110:/mydata/
</code></pre>
<h3 id="heading-command-breakdown">🧠 <strong>Command Breakdown :</strong></h3>
<ul>
<li><p><code>rsync</code><br />    → The main utility used to synchronize files and directories between two locations.</p>
</li>
<li><p><code>-a</code> (archive mode)<br />  → Preserves permissions, symbolic links, file ownership, and timestamps. Ideal for full backups.</p>
</li>
<li><p><code>-v</code> (verbose)<br />  → Displays detailed output of the transfer process.</p>
</li>
<li><p><code>-z</code> (compress)<br />  → Compresses data during transfer to reduce network load and increase speed.</p>
</li>
<li><p><code>--exclude 'lost+found'</code><br />  → Skips the <code>lost+found</code> directory (often present on ext file systems) to avoid unnecessary syncing.</p>
</li>
<li><p><code>-e "ssh -i /home/ubuntu/corextech.pem"</code><br />  → Uses SSH for secure data transfer, with a specific private key (<code>corextech.pem</code>) for authentication.</p>
</li>
<li><p><code>/mydata/</code><br />  → Source directory on EC2-1 (local instance) to be synced.</p>
</li>
<li><p><code>ubuntu@172.31.0.110:/mydata/</code><br />  → Destination path on EC2-2 (remote instance) where the data will be copied.</p>
</li>
</ul>
<p>🔐 This command securely:</p>
<ul>
<li><p>Connects to EC2-2 using SSH (<code>-e "ssh -i corextech.pem"</code>)</p>
</li>
<li><p>Transfers <code>/mydata/ebs-test.txt</code> from EC2-1 to the <code>/mydata</code> directory on EC2-2</p>
</li>
<li><p>Maintains <strong>file permissions</strong>, <strong>timestamps</strong>, and <strong>compression</strong> during transfer</p>
</li>
</ul>
<p>📤 After running, you’ll see output like:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753114025453/446179bd-9089-4b51-9880-6dee50b4afcd.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-8-rsync-transfer-completed-successfully"><strong>Step 8️⃣: Rsync Transfer Completed Successfully</strong></h3>
<p>After running the <code>rsync</code> command from <strong>EC2-1</strong>, we accessed <strong>EC2-2</strong> to confirm the file transfer was successful.</p>
<p>We verified it using the following commands in EC2-2:</p>
<pre><code class="lang-plaintext">ls -l /mydata
</code></pre>
<p>This listed the contents of <code>/mydata</code>, showing that the file <code>ebs-test.txt</code> was present.</p>
<pre><code class="lang-plaintext">cat /mydata/ebs-test.txt
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753114322384/82af46fa-d36c-498d-8d9c-dcc3d625dbe2.png" alt class="image--center mx-auto" /></p>
<p>✅ <strong>Success!</strong> We have now securely and efficiently transferred a file between two EC2 instances using <strong>rsync over SSH</strong>.</p>
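<p>💡 For extra confidence beyond <code>cat</code>, checksums prove the copy is bit-for-bit identical: on the real instances you would run <code>sha256sum /mydata/ebs-test.txt</code> on both sides and compare the hashes. Here is the idea as a self-contained local sketch (the <code>/tmp</code> files stand in for the two instances):</p>

```shell
# Stand-ins for the source and destination copies of the transferred file
printf 'sample data\n' | tee /tmp/src.txt
cp /tmp/src.txt /tmp/dst.txt

# A transfer is verified when both checksums are identical
src_sum=$(sha256sum /tmp/src.txt | awk '{print $1}')
dst_sum=$(sha256sum /tmp/dst.txt | awk '{print $1}')
if [ "$src_sum" = "$dst_sum" ]; then echo "checksums match"; fi
```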
<h3 id="heading-conclusion">✅ <strong>Conclusion</strong></h3>
<p>By following the above steps, we successfully transferred a file from one EC2 instance to another using <code>rsync</code> over SSH. This method is secure, efficient, and ideal for syncing files between servers with minimal overhead. It's a must-have skill for any DevOps or Cloud Engineer working with AWS infrastructure.</p>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt /></p>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🚀 Automating Nginx & Custom Scripts with Cronjobs on EC2 Ubuntu]]></title><description><![CDATA[Modern server management in the cloud demands repeatability and automation. As part of my DevOps skill development, I successfully set up an AWS EC2 instance, deployed an Nginx web server, automated server maintenance using cronjobs, and built custom...]]></description><link>https://apurv-gujjar.me/cron-job-task</link><guid isPermaLink="true">https://apurv-gujjar.me/cron-job-task</guid><category><![CDATA[ec2]]></category><category><![CDATA[AWS]]></category><category><![CDATA[cronjob]]></category><category><![CDATA[nginx]]></category><category><![CDATA[shell script]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Sun, 20 Jul 2025 11:44:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753009629136/fe0e6ee9-3ad3-42f6-ae61-0fef58c42c90.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Modern server management in the cloud demands repeatability and automation. As part of my DevOps skill development, I successfully set up an AWS EC2 instance, deployed an Nginx web server, automated server maintenance using cronjobs, and built custom shell scripts for scheduled tasks. This blog details my hands-on journey, professional approach, and practical learnings.</p>
<h4 id="heading-cron-time-format-explanation">🟢 <strong>Cron Time Format Explanation</strong></h4>
<pre><code class="lang-powershell"><span class="hljs-comment"># ┌───────────── minute (0 - 59)</span>
<span class="hljs-comment"># │ ┌───────────── hour (0 - 23)</span>
<span class="hljs-comment"># │ │ ┌───────────── day of month (1 - 31)</span>
<span class="hljs-comment"># │ │ │ ┌───────────── month (1 - 12)</span>
<span class="hljs-comment"># │ │ │ │ ┌───────────── day of week (0 - 6) (Sunday=0 or 7)</span>
<span class="hljs-comment"># │ │ │ │ │</span>
<span class="hljs-comment"># │ │ │ │ │</span>
<span class="hljs-comment"># * * * * * &lt;command-to-execute&gt;</span>
</code></pre>
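<p>Reading the five fields left to right, a concrete entry looks like this (the script path here is just an illustration, not part of this task):</p>

```shell
# Runs /home/ubuntu/backup.sh at 02:30 every Monday
30 2 * * 1 /home/ubuntu/backup.sh
```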
<h3 id="heading-useful-crontab-commands-for-linux-automation">🔧 Useful Crontab Commands for Linux Automation</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Command</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>crontab -e</code></td><td>Edit the current user’s crontab file to schedule tasks</td></tr>
<tr>
<td><code>crontab -l</code></td><td>List/show all scheduled cron jobs for the current user</td></tr>
<tr>
<td><code>crontab -r</code></td><td>Remove the current user’s crontab (all jobs)</td></tr>
<tr>
<td><code>crontab -u &lt;user&gt; -l</code></td><td>View cron jobs of a specific user (run as root)</td></tr>
<tr>
<td><code>sudo service cron status</code></td><td>Check the status of the cron service</td></tr>
<tr>
<td><code>sudo service cron start</code></td><td>Start the cron service (if stopped)</td></tr>
<tr>
<td><code>sudo service cron stop</code></td><td>Stop the cron service</td></tr>
<tr>
<td><code>sudo service cron restart</code></td><td>Restart the cron service</td></tr>
</tbody>
</table>
</div><h2 id="heading-task-objective">📝 <strong>Task Objective</strong></h2>
<p><strong>Goal:</strong></p>
<ul>
<li><p>Deploy and manage an AWS EC2 (Ubuntu) instance</p>
</li>
<li><p>Automate Nginx server installation and daily management</p>
</li>
<li><p>Schedule an Nginx <strong>restart at 7 AM</strong> daily</p>
</li>
<li><p>Run a custom shell script <strong>every day at 2 AM</strong> using cronjobs</p>
</li>
<li><p>Achieve all objectives with pure automation—no manual intervention</p>
</li>
</ul>
<h2 id="heading-1-launching-an-ec2-ubuntu-instance">1️⃣ <strong>Launching an EC2 Ubuntu Instance</strong></h2>
<ul>
<li><p><strong>OS:</strong> Ubuntu 22.04 LTS</p>
</li>
<li><p><strong>Instance Type:</strong> t2.micro (Free Tier eligible)</p>
</li>
<li><p><strong>Key Pair:</strong> Secure EC2 access using <code>.pem</code> SSH key</p>
</li>
<li><p><strong>Security Group:</strong></p>
<ul>
<li><p>Allow port <strong>22</strong> (SSH) for terminal access</p>
</li>
<li><p>Allow port <strong>80</strong> (HTTP) for web access (Nginx)</p>
</li>
</ul>
</li>
</ul>
<p><strong>Connect to your instance:</strong></p>
<pre><code class="lang-plaintext">ssh -i "your-key.pem" ubuntu@your-ec2-public-ip
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753009917339/7562bc0c-d54c-4916-80e6-0d58902c4898.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753009928937/5639f18c-f8f0-451d-8666-1d65b7566fb8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-manually-installing-and-verifying-nginx-web-server-on-ec2">🧩 Step 2: Manually Installing and Verifying Nginx Web Server on EC2</h3>
<p>After launching the EC2 instance, the next step was to <strong>manually install the Nginx web server</strong>. This helps in quickly testing server availability and basic connectivity.</p>
<h4 id="heading-commands-used-for-nginx-installation">⚙️ Commands Used for Nginx Installation</h4>
<p>Below are the individual commands I executed one by one:</p>
<pre><code class="lang-powershell"><span class="hljs-comment"># Step 1: Update package list</span>
sudo apt update

<span class="hljs-comment"># Step 2: Install Nginx</span>
sudo apt install nginx <span class="hljs-literal">-y</span>

<span class="hljs-comment"># Step 3: Enable Nginx to start on boot</span>
sudo systemctl enable nginx

<span class="hljs-comment"># Step 4: Start Nginx service</span>
sudo systemctl <span class="hljs-built_in">start</span> nginx

<span class="hljs-comment"># Step 5: Check Nginx service status</span>
sudo systemctl status nginx
</code></pre>
<h4 id="heading-how-i-verified-the-installation">✅ How I Verified the Installation</h4>
<p>Once the installation was successful:</p>
<ul>
<li><p>I <strong>copied the public IPv4 address</strong> of the EC2 instance.</p>
</li>
<li><p>Opened it in a web browser like this:</p>
</li>
</ul>
<pre><code class="lang-plaintext">http://&lt;your-ec2-public-ip&gt;
</code></pre>
<p>If everything is set up correctly, the <strong>default Nginx welcome page</strong> appears with the message:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753010604429/214910ce-b321-4fbb-a11b-b2bfc85ed0fc.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753010121776/8555016d-37ef-4004-b62b-98b6e7ef08a7.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-3-creating-the-shell-script-myscriptsh">🚀 Step 3: Creating the Shell Script (<code>myscript.sh</code>)</h2>
<p>After setting up Nginx, the next part of the task was to create a shell script that performs a simple operation: appending a timestamped entry to a log file every time it runs.</p>
<h3 id="heading-1-open-a-new-script-file-using-vim">🔧 1. Open a new script file using Vim</h3>
<p>First, I created a new shell script using the Vim editor by running the following command:</p>
<pre><code class="lang-powershell">vim myscript.sh
</code></pre>
<p>This opened the Vim editor where I could write my script.</p>
<hr />
<h3 id="heading-2-script-content">📝 2. Script Content</h3>
<p>Inside the <code>myscript.sh</code> file, I added the following lines of code:</p>
<pre><code class="lang-powershell"><span class="hljs-comment">#!/bin/bash</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Script ran at <span class="hljs-variable">$</span>(date)"</span> &gt;&gt; /home/ubuntu/myscript.log
</code></pre>
<p>📌 This script appends the current date and time to a log file named <code>myscript.log</code> located in the <code>/home/ubuntu/</code> directory. Every time the script runs, a new entry is added to the log, which is useful for tracking execution.</p>
<hr />
<h3 id="heading-3-save-and-exit-the-file">💾 3. Save and Exit the File</h3>
<p>To save and exit in Vim:</p>
<ul>
<li><p>Press <code>Esc</code></p>
</li>
<li><p>Then type <code>:wq</code> and hit <code>Enter</code></p>
</li>
</ul>
<p>This saves the script and brings you back to the terminal.</p>
<hr />
<h3 id="heading-4-make-the-script-executable">✅ 4. Make the Script Executable</h3>
<p>Before we can run the script manually or via a cron job, we need to give it executable permissions. I used the following command:</p>
<pre><code class="lang-powershell">chmod +x myscript.sh
</code></pre>
<p>Now the script is ready to be run manually or scheduled using cron in the next step.</p>
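<p>🧪 To sanity-check the script before handing it to cron, you can run it a couple of times by hand and count the log lines. Here is a self-contained version using <code>/tmp</code> paths so it runs anywhere (on the instance the paths would be under <code>/home/ubuntu</code>):</p>

```shell
# Recreate the script under /tmp (same logic as myscript.sh, different path)
rm -f /tmp/myscript.log
printf '#!/bin/bash\necho "Script ran at $(date)" | tee -a /tmp/myscript.log\n' | tee /tmp/myscript.sh
chmod +x /tmp/myscript.sh

# Two manual runs should append exactly two log lines
/tmp/myscript.sh
/tmp/myscript.sh
grep -c "Script ran at" /tmp/myscript.log   # prints 2
```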
<h3 id="heading-fixing-permission-error-optional-step">Fixing Permission Error (optional step)</h3>
<p>When I tried to run <code>myscript.sh</code>, I got a <strong>"Permission denied"</strong> error because the log file <code>/home/ubuntu/myscript.log</code> was owned by the <strong>root</strong> user. This prevented the script from writing to the file.</p>
<p>To fix this issue, I followed these steps:</p>
<ol>
<li><p>Noticed the permission error related to <code>myscript.log</code>.</p>
</li>
<li><p>Checked the file ownership; it was owned by <strong>root</strong>.</p>
</li>
<li><p>Changed the ownership from <code>root</code> to the <code>ubuntu</code> user using <code>chown</code>.</p>
</li>
<li><p>After that, the script executed successfully and started logging entries as expected.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753013400523/14c211e5-f3ca-409d-a28a-ec45556da693.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4-automating-tasks-using-cron-jobs">🕓 Step 4: Automating Tasks Using Cron Jobs</h3>
<p>One of the most powerful aspects of Linux-based systems is automation — and <strong>Cron Jobs</strong> are the go-to solution for scheduling repetitive system-level tasks. In this step, I utilized cron to automate essential backend processes on my EC2 instance.</p>
<hr />
<h3 id="heading-automation-goals-for-this-project">✅ Automation Goals for This Project</h3>
<p>As per the task requirements, I needed to automate the following:</p>
<p>🔁 <strong>Restart the Nginx service</strong> daily at <strong>7:00 AM</strong><br />📜 <strong>Run a custom shell script</strong> daily at <strong>2:00 AM</strong><br />🧪 <strong>(Optional)</strong> Test cron functionality by logging timestamps <strong>every minute</strong></p>
<hr />
<h3 id="heading-editing-the-crontab">🛠️ Editing the Crontab</h3>
<p>To configure these cron jobs, I used the <code>crontab</code> utility:</p>
<pre><code class="lang-powershell">crontab <span class="hljs-literal">-e</span>
</code></pre>
<p>📝 On first run, it prompts to choose an editor — I went with <strong>vim</strong>, since I’m already comfortable using it.</p>
<hr />
<h3 id="heading-cron-job-entries-i-added">📄 Cron Job Entries I Added</h3>
<pre><code class="lang-powershell"><span class="hljs-comment"># Restart Nginx at 7 AM daily</span>
<span class="hljs-number">0</span> <span class="hljs-number">7</span> * * * /bin/systemctl restart nginx

<span class="hljs-comment"># Run your script at 2 AM daily</span>
<span class="hljs-number">0</span> <span class="hljs-number">2</span> * * * /home/ubuntu/myscript.sh

<span class="hljs-comment"># Test Cron (runs every minute) - for debugging only</span>
* * * * * <span class="hljs-built_in">echo</span> <span class="hljs-string">"Cron Working ✅ <span class="hljs-variable">$</span>(date)"</span> &gt;&gt; /home/ubuntu/testcron.log
</code></pre>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753010863512/3e3a87f6-6072-41db-855e-2cf56c66924a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-what-each-cron-job-does">📌 What Each Cron Job Does</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Time</td><td>Task Description</td><td>Command Executed</td></tr>
</thead>
<tbody>
<tr>
<td><code>0 7 * * *</code></td><td>Daily at 7:00 AM</td><td>Restarts the <strong>Nginx</strong> service via <code>systemctl</code></td></tr>
<tr>
<td><code>0 2 * * *</code></td><td>Daily at 2:00 AM</td><td>Executes the <strong>custom script</strong> <code>myscript.sh</code></td></tr>
<tr>
<td><code>* * * * *</code></td><td>Every minute (for testing/debug only)</td><td>Appends a timestamp to <code>/home/ubuntu/testcron.log</code></td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-verifying-if-cron-jobs-are-working">🔍 Verifying If Cron Jobs Are Working</h3>
<p>To check the <strong>test cron job</strong>, I opened the log file:</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">cat</span> /home/ubuntu/testcron.log
</code></pre>
<p>If successful, you’ll see output like this — a new line for every minute passed:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753010896304/4436dc34-6fe0-4e02-82f2-fe85ba8e7815.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-viewing-scheduled-cron-jobs-using-crontab-l">🔍 Viewing Scheduled Cron Jobs Using <code>crontab -l</code></h2>
<p>When working with <strong>automated scripts</strong> or scheduled tasks in Linux, cron jobs become your best friend. But what if you want to <strong>check which cron jobs are already scheduled</strong>? That’s where this simple yet powerful command comes in:</p>
<h3 id="heading-command">✅ Command:</h3>
<pre><code class="lang-powershell">crontab <span class="hljs-literal">-l</span>
</code></pre>
<h3 id="heading-what-it-does">📌 What it does:</h3>
<p>This command <strong>lists all the cron jobs</strong> currently scheduled for the <strong>logged-in user</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753011074044/fc3199e8-d059-4981-b33e-d2184d9a0dcf.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-confirming-cron-daemon-status">📊 Confirming Cron Daemon Status</h3>
<p>To ensure that the cron service is running correctly, I checked its status using:</p>
<pre><code class="lang-powershell">sudo systemctl status cron
</code></pre>
<p>✅ The output showed:</p>
<ul>
<li><p><strong>Status: Active (running)</strong></p>
</li>
<li><p>Logs confirming each cron job run</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753011251249/00d17e54-62af-4b2a-a8ea-0e3dd9359688.png" alt class="image--center mx-auto" /></p>
<p><strong>✅ Conclusion:-</strong></p>
<p>In this task, I demonstrated how to automate tasks using <code>cron jobs</code> on an Ubuntu server. Starting from manually installing Nginx, writing a shell script using <code>vim</code>, adding it to the <code>crontab</code>, and verifying the scheduled jobs, this step-by-step guide helps streamline repetitive processes efficiently. Crontab is a powerful Linux utility for scheduling scripts, system maintenance, backups, and more, ultimately enhancing productivity and automation.</p>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt /></p>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA["Building a Scalable AWS Network Architecture with Transit Gateway"]]></title><description><![CDATA[🚀 In this blog, we’ll walk through how to connect multiple Amazon VPCs using AWS Transit Gateway, allowing EC2 instances in different VPCs to communicate with each other. This is useful in scenarios where you need scalable, centralized connectivity ...]]></description><link>https://apurv-gujjar.me/vpc-tgw-task-1</link><guid isPermaLink="true">https://apurv-gujjar.me/vpc-tgw-task-1</guid><category><![CDATA[AWS]]></category><category><![CDATA[vpc]]></category><category><![CDATA[ec2]]></category><category><![CDATA[transit gateway]]></category><category><![CDATA[vpc peering]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Thu, 17 Jul 2025 14:30:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752745663732/169ac1bb-d0d8-4c4b-83dc-97cc4e44eec5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>🚀 In this blog, we’ll walk through how to connect multiple Amazon VPCs using <strong>AWS Transit Gateway</strong>, allowing EC2 instances in different VPCs to communicate with each other. This is useful in scenarios where you need scalable, centralized connectivity across multiple VPCs.</p>
</blockquote>
<h2 id="heading-task-breakdown">📌 Task Breakdown</h2>
<blockquote>
<p>Here's what we are going to do:</p>
</blockquote>
<ol>
<li><p>Create 3 different VPCs in the same region</p>
</li>
<li><p>Launch 1 EC2 instance in each VPC</p>
</li>
<li><p>Create a Transit Gateway</p>
</li>
<li><p>Attach all 3 VPCs to the Transit Gateway</p>
</li>
<li><p>Update route tables and security groups</p>
</li>
<li><p>Test connectivity between EC2 instances</p>
</li>
</ol>
<h2 id="heading-step-1-create-3-vpcs-in-the-same-region">🛠️ Step 1: Create 3 VPCs in the Same Region</h2>
<p>We’ll create 3 VPCs with non-overlapping CIDR ranges.</p>
<h3 id="heading-actions">Actions:</h3>
<ul>
<li><p>Go to <strong>VPC Dashboard &gt; Create VPC</strong></p>
</li>
<li><p>Select <strong>VPC only</strong> option</p>
</li>
<li><p>Create VPCs with following CIDRs:</p>
<ul>
<li><p>VPC-1: <code>10.1.0.0/16</code></p>
</li>
<li><p>VPC-2: <code>10.2.0.0/16</code></p>
</li>
<li><p>VPC-3: <code>10.3.0.0/16</code></p>
</li>
</ul>
</li>
<li><p>Enable DNS Hostnames</p>
</li>
</ul>
<p>Repeat the above step for all 3 VPCs.</p>
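<p>💡 For readers who prefer the CLI, the same VPC creation can be sketched with the AWS CLI (the VPC ID below is a placeholder; the console flow above is what this task actually used):</p>

```shell
# Create one VPC (repeat with 10.2.0.0/16 and 10.3.0.0/16 for VPC-2 and VPC-3)
aws ec2 create-vpc --cidr-block 10.1.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=VPC-1}]'

# Enable DNS hostnames on the returned VPC ID (vpc-xxxxxxxx is a placeholder)
aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxx --enable-dns-hostnames '{"Value":true}'
```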
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752745813950/c3f03ac4-2192-4ee3-b01b-75db2780e48b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-2-create-subnets-for-each-vpc">🧱 Step 2 : - Create Subnets for Each VPC</h2>
<h3 id="heading-subnet-creation-for-vpc-1">🔹 Subnet Creation for VPC-1</h3>
<ol>
<li><p>Go to: <strong>VPC Dashboard &gt; Subnets &gt; Create Subnet</strong></p>
</li>
<li><p>Fill the details:</p>
<ul>
<li><p><strong>Name tag</strong>: <code>Sub1</code></p>
</li>
<li><p><strong>VPC</strong>: Select <code>VPC-1</code></p>
</li>
<li><p><strong>Availability Zone</strong>: e.g., <code>ap-south-1a</code></p>
</li>
<li><p><strong>IPv4 CIDR block</strong>: <code>10.1.0.0/24</code></p>
</li>
</ul>
</li>
<li><p>Click <strong>Create Subnet</strong></p>
</li>
</ol>
<p>Repeat same for:</p>
<h3 id="heading-vpc-2">🔹 VPC-2:</h3>
<ul>
<li><p>Name tag: <code>Sub2</code></p>
</li>
<li><p>VPC: <code>VPC-2</code></p>
</li>
<li><p>AZ: <code>ap-south-1a</code></p>
</li>
<li><p>CIDR: <code>10.2.0.0/24</code></p>
</li>
</ul>
<h3 id="heading-vpc-3">🔹 VPC-3:</h3>
<ul>
<li><p>Name tag: <code>Sub3</code></p>
</li>
<li><p>VPC: <code>VPC-3</code></p>
</li>
<li><p>AZ: <code>ap-south-1a</code></p>
</li>
<li><p>CIDR: <code>10.3.0.0/24</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752746154859/e95aa3b5-9129-4399-a224-8fbb165ab2fc.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-3-launch-ec2-instances-and-configure-subnets">🚀 Step 3: Launch EC2 Instances and Configure Subnets</h2>
<p>In this step, I launched <strong>three EC2 instances</strong> and mapped them according to the subnets we created earlier.</p>
<h3 id="heading-instance-configuration-summary">🔹 Instance Configuration Summary:</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Instance</td><td>Subnet Type</td><td>Public IP Auto-Assign</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td>EC2-1</td><td>Public Subnet</td><td>Enabled</td><td>Acts as Public Server</td></tr>
<tr>
<td>EC2-2</td><td>Private Subnet 1</td><td>Disabled</td><td>Backend/Private</td></tr>
<tr>
<td>EC2-3</td><td>Private Subnet 2</td><td>Disabled</td><td>Backend/Private</td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752746516626/1e72f510-d869-4b07-8cf6-3439b938ca4f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-4-create-and-attach-transit-gateway">🚏 Step 4: Create and Attach Transit Gateway</h2>
<p>To enable communication between multiple VPCs, I created a <strong>Transit Gateway (TGW)</strong> and attached it to all three VPCs.</p>
<h3 id="heading-steps-performed">🔧 Steps Performed:</h3>
<ol>
<li><p><strong>Created a Transit Gateway</strong> from the VPC Dashboard.</p>
</li>
<li><p>Attached the <strong>Transit Gateway</strong> to the following VPCs:</p>
<ul>
<li><p><strong>VPC-1</strong> (contains the public EC2 instance)</p>
</li>
<li><p><strong>VPC-2</strong> (contains private EC2 instance)</p>
</li>
<li><p><strong>VPC-3</strong> (contains another private EC2 instance)</p>
</li>
</ul>
</li>
<li><p>This setup allows <strong>centralized routing</strong> and enables communication between all EC2 instances across VPCs via the Transit Gateway.</p>
</li>
</ol>
<blockquote>
<p>I used this method to avoid the complexity of VPC Peering and simplify inter-VPC networking.</p>
</blockquote>
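<p>For reference, the same gateway and attachments can be scripted with the AWS CLI; the resource IDs below (<code>tgw-…</code>, <code>vpc-…</code>, <code>subnet-…</code>) are placeholders for your own values:</p>
<pre><code class="lang-bash"># Create the Transit Gateway
aws ec2 create-transit-gateway --description "3-VPC hub"

# Attach a VPC (repeat once per VPC, using a subnet from that VPC)
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-xxxxxxxxxxxx \
    --vpc-id vpc-xxxxxxxxxxxx \
    --subnet-ids subnet-xxxxxxxxxxxx
</code></pre>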
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752746692135/463b7306-06c4-4f62-8468-cf7005749279.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752746742899/2d656905-cb85-46d9-8729-3b49cbeb89db.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-5-making-vpc-1-public-using-internet-gateway">🌐 Step 5: Making VPC-1 Public using Internet Gateway</h2>
<p>To allow internet access for the EC2 instance in <strong>VPC-1</strong>, I made one of its subnets public by attaching an <strong>Internet Gateway</strong> (IGW).</p>
<h3 id="heading-steps-performed-1">🔧 Steps Performed:</h3>
<ol>
<li><p>Created an <strong>Internet Gateway (IGW)</strong>.</p>
</li>
<li><p>Attached the <strong>IGW</strong> to <strong>VPC-1</strong>.</p>
</li>
<li><p>In the route table of VPC-1, associated <strong>only Subnet-1</strong> (which contains the public EC2 instance).</p>
</li>
<li><p>Added a route in the route table with:</p>
<ul>
<li><p><strong>Destination:</strong> <code>0.0.0.0/0</code></p>
</li>
<li><p><strong>Target:</strong> Internet Gateway</p>
</li>
</ul>
</li>
</ol>
<blockquote>
<p>With this setup, only Subnet-1 has internet access, keeping the other subnets private.</p>
</blockquote>
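<p>The console clicks above map to three AWS CLI calls (the IGW, VPC, and route-table IDs are placeholders):</p>
<pre><code class="lang-bash"># Create the IGW and attach it to VPC-1
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxxxxxx --vpc-id vpc-xxxxxxxxxxxx

# Default route for Subnet-1's route table: send all outbound traffic to the IGW
aws ec2 create-route --route-table-id rtb-xxxxxxxxxxxx \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxxxxxx
</code></pre>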
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752746847514/0602bc89-6bda-489e-bd03-a22965a551d9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752746878423/c5ba6ab4-36d6-4f27-b23c-373478960df7.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752746936834/c31485a1-d646-4373-88a7-66363c9dc8c2.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-6-configuring-route-tables-for-inter-vpc-communication-via-transit-gateway">🔁 Step 6: Configuring Route Tables for Inter-VPC Communication via Transit Gateway</h2>
<p>To enable communication between the three VPCs using the Transit Gateway (TGW), I updated each VPC’s route table as follows:</p>
<h3 id="heading-vpc-1-route-table">🛣️ VPC-1 Route Table:</h3>
<ul>
<li><p><strong>Destination:</strong> CIDR of VPC-2 → <strong>Target:</strong> Transit Gateway</p>
</li>
<li><p><strong>Destination:</strong> CIDR of VPC-3 → <strong>Target:</strong> Transit Gateway</p>
</li>
</ul>
<h3 id="heading-vpc-2-route-table">🛣️ VPC-2 Route Table:</h3>
<ul>
<li><p><strong>Destination:</strong> CIDR of VPC-1 → <strong>Target:</strong> Transit Gateway</p>
</li>
<li><p><strong>Destination:</strong> CIDR of VPC-3 → <strong>Target:</strong> Transit Gateway</p>
</li>
</ul>
<h3 id="heading-vpc-3-route-table">🛣️ VPC-3 Route Table:</h3>
<ul>
<li><p><strong>Destination:</strong> CIDR of VPC-1 → <strong>Target:</strong> Transit Gateway</p>
</li>
<li><p><strong>Destination:</strong> CIDR of VPC-2 → <strong>Target:</strong> Transit Gateway</p>
</li>
</ul>
<blockquote>
<p>📌 <em>Note:</em> I used each VPC’s CIDR block (e.g., <code>10.1.0.0/16</code>, <code>10.2.0.0/16</code>, <code>10.3.0.0/16</code>) as the destination in the respective route tables.</p>
</blockquote>
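<p>As a sketch, VPC-1's two routes translate to the following AWS CLI calls (the route-table and TGW IDs are placeholders); VPC-2 and VPC-3 are analogous, using their peers' CIDRs:</p>
<pre><code class="lang-bash"># In VPC-1's route table, send the other VPCs' CIDRs to the Transit Gateway
aws ec2 create-route --route-table-id rtb-xxxxxxxxxxxx \
    --destination-cidr-block 10.2.0.0/16 --transit-gateway-id tgw-xxxxxxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxxxxxx \
    --destination-cidr-block 10.3.0.0/16 --transit-gateway-id tgw-xxxxxxxxxxxx
</code></pre>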
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752747119593/f82f1f5e-4a0b-4eb9-8c5d-903bd5d974db.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752747128847/1316f1af-c588-409b-9cf4-8b48a041e3d2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752747138476/bbfb603e-8d4c-4502-bb5b-6f1d45cbd9c1.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-7-updating-security-groups-to-allow-all-traffic-for-testing-only">🔒 Step 7: Updating Security Groups to Allow All Traffic (For Testing Only)</h2>
<p>To ensure temporary connectivity between EC2 instances across VPC-1, VPC-2, and VPC-3 during Transit Gateway testing, I updated the <strong>Security Groups (SGs)</strong> as follows:</p>
<ul>
<li><p><strong>Inbound Rules:</strong></p>
<ul>
<li><p>Type: <strong>All Traffic</strong></p>
</li>
<li><p>Protocol: All</p>
</li>
<li><p>Port Range: All</p>
</li>
<li><p>Source: <code>0.0.0.0/0</code> or respective VPC CIDR blocks</p>
</li>
</ul>
</li>
<li><p><strong>Outbound Rules:</strong></p>
<ul>
<li><p>Type: <strong>All Traffic</strong></p>
</li>
<li><p>Protocol: All</p>
</li>
<li><p>Port Range: All</p>
</li>
<li><p>Destination: <code>0.0.0.0/0</code> or respective VPC CIDR blocks</p>
</li>
</ul>
</li>
</ul>
<blockquote>
<p>⚠️ <strong>Note:</strong><br />Allowing <strong>All Traffic</strong> (0.0.0.0/0) is <strong>not recommended for production environments</strong>.<br />This is done here <strong>only for understanding and testing purposes</strong>.<br />In real-world setups, always apply <strong>principle of least privilege</strong> for better security.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752747258897/99ef6b29-8830-44bb-a322-8b90f9e0268f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-final-step-ping-test-for-verification">✅ Final Step: Ping Test for Verification</h3>
<ul>
<li><p>Finally, I logged into <strong>EC2-1 (Public Server in VPC-1)</strong> and pinged the <strong>private IPs</strong> of <strong>EC2-2 (Private Server in VPC-2)</strong> and <strong>EC2-3 (Private Server in VPC-3)</strong>.</p>
</li>
<li><p>The <strong>ping was successful</strong>, which confirms that communication between all VPCs is properly set up through the <strong>Transit Gateway</strong>.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752747440374/4a8802e0-06f7-406e-a2b4-9dc85bb118f8.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>🔒 <strong>Note:</strong><br />Allowing <em>All Traffic</em> in the Security Group was done <strong>only for testing and learning purposes</strong>.<br />This is <strong>not a recommended practice</strong> in production. Always apply <strong>least privilege</strong> security rules.</p>
</blockquote>
<h2 id="heading-final-wrap-up-transit-gateway-task">✅ Final Wrap-Up: Transit Gateway Task</h2>
<p>In this quick networking task, I configured <strong>3 separate VPCs</strong> and successfully connected them using a <strong>single Transit Gateway</strong>.<br />Key highlights:</p>
<ul>
<li><p><strong>VPC-1</strong> was made public via an <strong>Internet Gateway</strong> and a public subnet.</p>
</li>
<li><p>Proper <strong>route tables</strong> were updated in all VPCs to route traffic through the <strong>Transit Gateway</strong>.</p>
</li>
<li><p><strong>Security Groups</strong> were temporarily set to allow <strong>all traffic</strong> for testing (⚠️ Not recommended for production).</p>
</li>
<li><p>Finally, from the <strong>public EC2 in VPC-1</strong>, I was able to <strong>ping the private EC2 instances</strong> in VPC-2 and VPC-3.</p>
</li>
</ul>
<p>✔️ <strong>Result:</strong> Smooth inter-VPC communication confirmed via Transit Gateway setup.</p>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp&amp;auto=compress,format&amp;format=webp" alt /></p>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="http://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="http://linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🌐 Deploying a 3-Tier Web Application Architecture on AWS using VPC]]></title><description><![CDATA[In this project, I have designed and deployed a 3-Tier Web Application architecture within a custom Virtual Private Cloud (VPC) using AWS services. This architecture follows industry best practices for security, scalability, and separation of concern...]]></description><link>https://apurv-gujjar.me/deploying-a-3-tier-web-application-architecture-on-aws-using-vpc</link><guid isPermaLink="true">https://apurv-gujjar.me/deploying-a-3-tier-web-application-architecture-on-aws-using-vpc</guid><category><![CDATA[AWS]]></category><category><![CDATA[nginx]]></category><category><![CDATA[Tomcat]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[Databases]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Thu, 17 Jul 2025 07:23:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752736373362/6cb9af59-3466-4132-b496-5fb9e6c18fa5.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this project, I have designed and deployed a <strong>3-Tier Web Application</strong> architecture within a custom Virtual Private Cloud (VPC) using AWS services. This architecture follows industry best practices for security, scalability, and separation of concerns</p>
<blockquote>
<p>🔒 <strong>Secure</strong> | ⚙️ <strong>Modular</strong> | ☁️ <strong>AWS-Powered</strong></p>
</blockquote>
<p>In this blog, I’ll demonstrate how I deployed a <strong>3-tier application</strong> on AWS using <strong>custom VPC</strong>. This architecture includes:</p>
<ul>
<li><p><strong>Nginx</strong> as a Reverse Proxy (Web Layer)</p>
</li>
<li><p><strong>Apache Tomcat</strong> as the Application Server</p>
</li>
<li><p><strong>MySQL</strong> as the Database Server</p>
</li>
</ul>
<h2 id="heading-what-is-3-tier-architecture">🧱 What is 3-Tier Architecture?</h2>
<p>A 3-tier architecture separates the app into:</p>
<ol>
<li><p><strong>Web Tier</strong> (Nginx) – Handles incoming HTTP requests</p>
</li>
<li><p><strong>App Tier</strong> (Tomcat) – Runs backend application logic</p>
</li>
<li><p><strong>DB Tier</strong> (MySQL) – Stores application data</p>
</li>
</ol>
<h2 id="heading-tech-stack-amp-aws-services-used">🔧 Tech Stack &amp; AWS Services Used</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Layer</td><td>Component</td><td>AWS Service</td></tr>
</thead>
<tbody>
<tr>
<td>Web</td><td>Nginx</td><td>EC2 in Public Subnet</td></tr>
<tr>
<td>App</td><td>Tomcat</td><td>EC2 in Private Subnet</td></tr>
<tr>
<td>DB</td><td>MySQL</td><td>EC2 in Private Subnet</td></tr>
<tr>
<td>Network</td><td>VPC, Subnets, Route Tables</td><td>AWS VPC</td></tr>
<tr>
<td>Others</td><td>NAT Gateway, IGW, SGs</td><td>AWS Infra</td></tr>
</tbody>
</table>
</div><h2 id="heading-high-level-architecture-diagram">🗺️ High-Level Architecture Diagram</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733002921/712a857f-3459-4bee-af92-d2e03762ed16.png" alt class="image--center mx-auto" /></p>
<p>This setup includes:</p>
<ul>
<li><p><strong>1 Public Subnet</strong> for Nginx</p>
</li>
<li><p><strong>2 Private Subnets</strong>: App Tier (Tomcat) and DB Tier (MySQL)</p>
</li>
<li><p><strong>Security Groups</strong> with limited, directional access</p>
</li>
<li><p><strong>NAT Gateway</strong> for outbound internet access from private subnets</p>
</li>
</ul>
<h2 id="heading-step-by-step-implementation">🪜 Step-by-Step Implementation</h2>
<h3 id="heading-step-1-create-vpc">✅ Step 1: Create VPC</h3>
<ul>
<li><strong>CIDR Block:</strong> <code>10.1.0.0/16</code></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733109900/c34f32f9-f207-47e8-ac84-3d8e6294eed8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-create-subnets">✅ Step 2: Create Subnets</h3>
<ul>
<li><p><code>10.1.1.0/24</code> – Public (Web: Nginx)</p>
</li>
<li><p><code>10.1.2.0/24</code> – Private (App: Tomcat)</p>
</li>
<li><p><code>10.1.3.0/24</code> – Private (DB: MySQL)</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733258149/a96e39bc-bdba-4792-b84d-f806488bd2ca.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3-setup-internet-gateway-nat">✅ Step 3: Setup Internet Gateway + NAT</h3>
<ul>
<li><p>IGW for public subnet (Nginx)</p>
</li>
<li><p>NAT Gateway in the public subnet, providing outbound internet access for the private subnets (App and DB)</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733354085/163f7353-1a72-4e0a-9140-f97c4fb1b2c0.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733337592/792f8e51-1b1c-4f60-a1e9-2fa0d0a7429b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4-configure-route-tables">✅ Step 4: Configure Route Tables</h3>
<ul>
<li><p>Public Route Table: <code>0.0.0.0/0 → IGW</code></p>
</li>
<li><p>Private Route Table: <code>0.0.0.0/0 → NAT</code></p>
</li>
</ul>
<h3 id="heading-a-public-route-table-configuration">✅ A. Public Route Table Configuration</h3>
<ul>
<li><p>I created a <strong>Route Table</strong> named <code>web-rt</code>.</p>
</li>
<li><p>I <strong>associated</strong> the <strong>public subnet</strong> (used for Nginx) with this <code>web-rt</code>.</p>
</li>
<li><p>Then, I edited the route in <code>web-rt</code>:</p>
<ul>
<li><p><strong>Destination</strong>: <code>0.0.0.0/0</code> (this allows internet traffic)</p>
</li>
<li><p><strong>Target</strong>: <strong>Internet Gateway</strong> (attached to the VPC)</p>
</li>
</ul>
</li>
<li><p>This allows public instances such as the Nginx server to access the internet directly.</p>
</li>
</ul>
<h3 id="heading-b-private-route-table-configuration">🔒 B. Private Route Table Configuration</h3>
<ul>
<li><p>I created another <strong>Route Table</strong> named <code>private-rt</code>.</p>
</li>
<li><p>I <strong>associated both private subnets</strong> (one for Tomcat app and one for MySQL DB) with this <code>private-rt</code>.</p>
</li>
<li><p>Then, I edited the route in <code>private-rt</code>:</p>
<ul>
<li><p><strong>Destination</strong>: <code>0.0.0.0/0</code></p>
</li>
<li><p><strong>Target</strong>: <strong>NAT Gateway</strong> (deployed in the public subnet)</p>
</li>
</ul>
</li>
<li><p>This setup allows private instances to <strong>access the internet only for updates</strong> (e.g., apt install), <strong>without being exposed</strong> to incoming public traffic.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733665222/d1317054-0d22-49e3-b23b-572ad27446f7.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733721561/cdda6581-3208-4eea-9a93-92382065a8a2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733648562/e2b00add-4fe6-41b8-b613-0dbf50831383.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733690138/9ce0dcd5-87b9-481e-bf28-5eb7eb092ebb.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5-launch-ec2-instances">✅ Step 5: Launch EC2 Instances</h3>
<h4 id="heading-web-tier-nginx">🌍 Web Tier (Nginx)</h4>
<ul>
<li><p>EC2 in public subnet</p>
</li>
<li><p>Installed <strong>Nginx</strong></p>
</li>
<li><p>Acts as <strong>Reverse Proxy</strong> forwarding to Tomcat</p>
</li>
<li><p>SG allows ports: <strong>80 (HTTP)</strong> and <strong>8080 (proxy)</strong></p>
</li>
</ul>
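<p>A minimal reverse-proxy configuration for the Web tier might look like the sketch below (the upstream address <code>10.1.2.97:8080</code> is the App server's private IP from this lab; adjust it to yours):</p>
<pre><code class="lang-nginx"># /etc/nginx/sites-available/default (sketch)
server {
    listen 80;

    location / {
        # Forward all HTTP requests to Tomcat in the private subnet
        proxy_pass http://10.1.2.97:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
</code></pre>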
<h4 id="heading-bastion-host">🛡 Bastion Host</h4>
<ul>
<li><p>EC2 in public subnet for SSH access to private EC2s</p>
</li>
<li><p>SG allows port: <strong>22</strong></p>
</li>
</ul>
<h4 id="heading-app-tier-tomcat">⚙ App Tier (Tomcat)</h4>
<ul>
<li><p>EC2 in private subnet</p>
</li>
<li><p>Installed <strong>Apache Tomcat</strong></p>
</li>
<li><p>SG allows traffic only from Nginx EC2 (Web SG)</p>
</li>
</ul>
<h4 id="heading-db-tier-mysql">💾 DB Tier (MySQL)</h4>
<ul>
<li><p>EC2 in private subnet</p>
</li>
<li><p>Installed <strong>MySQL</strong>, secured</p>
</li>
<li><p>SG allows port <strong>3306</strong> only from App Server</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752733817188/f9bffe6b-9f9e-4506-862f-e80eeebb120f.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-3-private-server-access-via-public-ec2-jump-server-method">🔐 Step 6: <strong>Private Server Access via Public EC2 (Jump Server Method)</strong></h4>
<p>Since <strong>App</strong> and <strong>DB</strong> servers are in <strong>private subnets</strong>, I used the <strong>Web Server (Public EC2)</strong> as a <strong>jump host</strong> to access them.</p>
<p><strong>Steps Followed:</strong></p>
<ol>
<li><p>SSH into Public Web Server using <code>.pem</code> file:</p>
<pre><code class="lang-powershell"> ssh <span class="hljs-literal">-i</span> <span class="hljs-string">"my-key.pem"</span> ubuntu<span class="hljs-selector-tag">@</span>&lt;public_ip&gt;
</code></pre>
</li>
<li><p>Created a new file using:</p>
<pre><code class="lang-powershell"> vim jump.pem
</code></pre>
</li>
<li><p>Pasted the <strong>private key</strong> of the internal servers (App/DB) in <code>jump.pem</code>.</p>
</li>
<li><p>Changed permission:</p>
<pre><code class="lang-powershell"> chmod <span class="hljs-number">400</span> jump.pem
</code></pre>
</li>
<li><p>Then, from the public server, logged in to private server:</p>
<pre><code class="lang-powershell"> ssh <span class="hljs-literal">-i</span> jump.pem ubuntu@<span class="hljs-number">10.1</span>.<span class="hljs-number">2.97</span>  <span class="hljs-comment"># App Server</span>
 ssh <span class="hljs-literal">-i</span> jump.pem ubuntu@<span class="hljs-number">10.1</span>.<span class="hljs-number">3.105</span> <span class="hljs-comment"># DB Server</span>
</code></pre>
</li>
</ol>
<p>✅ <em>This way, I securely accessed private servers using the public EC2 as a jump point.</em></p>
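<p>As an alternative to copying the private key onto the public server (generally discouraged, since it exposes the key), OpenSSH's <code>-J</code> (ProxyJump) flag can hop through the bastion in a single command from your laptop, assuming the same key pair is valid on both hosts:</p>
<pre><code class="lang-bash"># Jump through the public web server to reach the private App server
ssh -i my-key.pem -J ubuntu@&lt;public_ip&gt; ubuntu@10.1.2.97
</code></pre>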
<h2 id="heading-nginx-installation-on-web-ec2-instance-public-subnet">🌐 Nginx Installation on Web EC2 Instance (Public Subnet)</h2>
<p>After launching the <strong>Web EC2 instance</strong> (in the <strong>public subnet</strong>), I connected to it using <strong>SSH</strong> with the default Ubuntu user. Then I installed and configured <strong>Nginx</strong> as follows:</p>
<h3 id="heading-steps-performed">🔧 Steps Performed:</h3>
<ol>
<li><p><strong>SSH into EC2 Instance:</strong></p>
<pre><code class="lang-powershell"> ssh <span class="hljs-literal">-i</span> <span class="hljs-string">"keypair.pem"</span> ubuntu<span class="hljs-selector-tag">@</span>&lt;Public<span class="hljs-literal">-IP</span>&gt;
</code></pre>
</li>
<li><p><strong>Update the System Packages:</strong></p>
<pre><code class="lang-powershell"> sudo apt update <span class="hljs-literal">-y</span>
</code></pre>
</li>
<li><p><strong>Install Nginx Web Server:</strong></p>
<pre><code class="lang-powershell"> sudo apt install nginx <span class="hljs-literal">-y</span>
</code></pre>
</li>
<li><p><strong>Start the Nginx Service:</strong></p>
<pre><code class="lang-powershell"> sudo systemctl <span class="hljs-built_in">start</span> nginx
</code></pre>
</li>
<li><p><strong>Enable Nginx to Start on Boot:</strong></p>
<pre><code class="lang-powershell"> sudo systemctl enable nginx
</code></pre>
</li>
</ol>
<h2 id="heading-tomcat-installation-on-app-ec2-instance-private-subnet">🚀 Tomcat Installation on App EC2 Instance (Private Subnet)</h2>
<p>On the <strong>App Server EC2 instance</strong> (launched in the <strong>private subnet</strong>), I installed and started <strong>Apache Tomcat</strong> to host Java-based web applications.</p>
<p>Since Tomcat requires Java, I installed JDK first, then downloaded and configured Tomcat.</p>
<h3 id="heading-steps-performed-1">🔧 Steps Performed:</h3>
<ol>
<li><p><strong>Update System Packages:</strong></p>
<pre><code class="lang-powershell"> sudo apt update <span class="hljs-literal">-y</span>
</code></pre>
</li>
<li><p><strong>Install Java Development Kit (Required for Tomcat):</strong></p>
<pre><code class="lang-powershell"> sudo apt install default<span class="hljs-literal">-jdk</span> <span class="hljs-literal">-y</span>
</code></pre>
</li>
<li><p><strong>Download the Latest Tomcat (Version 11.0.9):</strong></p>
<pre><code class="lang-powershell"> <span class="hljs-built_in">wget</span> https://downloads.apache.org/tomcat/tomcat<span class="hljs-literal">-11</span>/v11.<span class="hljs-number">0.9</span>/bin/apache<span class="hljs-literal">-tomcat</span><span class="hljs-literal">-11</span>.<span class="hljs-number">0.9</span>.tar.gz.asc
</code></pre>
</li>
<li><p><strong>Extract the Downloaded Archive:</strong></p>
<pre><code class="lang-powershell"> tar <span class="hljs-literal">-xvzf</span> apache<span class="hljs-literal">-tomcat</span><span class="hljs-literal">-11</span>.<span class="hljs-number">0.9</span>.tar.gz
</code></pre>
</li>
<li><p><strong>Start the Tomcat Server:</strong></p>
<pre><code class="lang-powershell"> <span class="hljs-built_in">ls</span>
 <span class="hljs-built_in">cd</span> apache<span class="hljs-literal">-tomcat</span><span class="hljs-literal">-11</span>.<span class="hljs-number">0.9</span>
 <span class="hljs-built_in">ls</span>
 <span class="hljs-built_in">cd</span> bin
 ./startup.sh
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752734185866/7fa6ce94-83da-4f32-a8f1-e5d8579dc7d6.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-mysql-installation-on-db-ec2-instance-private-subnet">🛢️ MySQL Installation on DB EC2 Instance (Private Subnet)</h2>
<p>On the <strong>Database Server EC2 instance</strong> (placed in the <strong>private subnet</strong>), I installed <strong>MySQL Server</strong> to manage the backend database of the application securely.</p>
<h3 id="heading-steps-performed-2">🔧 Steps Performed:</h3>
<ol>
<li><p><strong>Update System Packages:</strong></p>
<pre><code class="lang-powershell"> sudo apt update <span class="hljs-literal">-y</span>
</code></pre>
</li>
<li><p><strong>Install MySQL Server:</strong></p>
<pre><code class="lang-powershell"> sudo apt install mysql<span class="hljs-literal">-server</span> <span class="hljs-literal">-y</span>
</code></pre>
</li>
<li><p><strong>Start and Enable MySQL Service:</strong></p>
<pre><code class="lang-powershell"> sudo systemctl <span class="hljs-built_in">start</span> mysql
 sudo systemctl enable mysql
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752736736398/c7545c1f-fd7a-4068-8c6e-87c2d777fd48.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-mysql-configuration-on-db-ec2-private-subnet">⚙️ MySQL Configuration on DB EC2 (Private Subnet)</h2>
<p>After installing MySQL on the <strong>DB Server (private subnet)</strong>, I performed additional configuration to allow <strong>internal app server access</strong> by setting the <strong>bind-address</strong> to the server’s <strong>private IP</strong>.</p>
<h3 id="heading-step-1-login-to-mysql-as-root-user">🔐 Step 1: Login to MySQL as Root User</h3>
<p>To securely access MySQL, I logged in using the root user:</p>
<pre><code class="lang-powershell">sudo mysql <span class="hljs-literal">-u</span> root <span class="hljs-literal">-p</span>
</code></pre>
<p><em>(You’ll be prompted to enter the root password set during secure installation.)</em></p>
<hr />
<h3 id="heading-step-2-edit-mysql-configuration-file">🛠️ Step 2: Edit MySQL Configuration File</h3>
<p>I modified the <strong>MySQL bind-address</strong> to allow access from the app server (within the VPC):</p>
<pre><code class="lang-powershell">sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf
</code></pre>
<p>Inside this file, I located the following line:</p>
<pre><code class="lang-plaintext">bind-address = 127.0.0.1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752734455242/e0c1296c-376a-4a5f-9ee4-1459500bcc30.png" alt class="image--center mx-auto" /></p>
<p>And changed it to my <strong>DB EC2’s private IP</strong>, for example:</p>
<pre><code class="lang-plaintext">bind-address = 10.0.2.15
</code></pre>
<blockquote>
<p>✅ This step ensures MySQL accepts connections only from internal sources (e.g., the app server), not from the public internet — keeping the database secure.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752734479134/cc97d642-9061-49f7-9e1a-2a7e1ba9ae7e.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-step-3-restart-mysql-to-apply-changes">🔄 Step 3: Restart MySQL to Apply Changes</h3>
<pre><code class="lang-powershell">sudo systemctl restart mysql
</code></pre>
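<p>Changing the bind address alone does not let the App server log in; MySQL also needs a user permitted to connect from the App subnet. A hedged sketch (the database name, user, and password below are placeholders):</p>
<pre><code class="lang-bash"># Allow an app user to connect from any host in the App subnet (10.1.2.0/24)
sudo mysql -e "CREATE USER 'appuser'@'10.1.2.%' IDENTIFIED BY 'ChangeMe!';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'10.1.2.%';
FLUSH PRIVILEGES;"
</code></pre>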
<h2 id="heading-network-connectivity-testing-ping-amp-telnet">🔗 Network Connectivity Testing (Ping &amp; Telnet)</h2>
<p>To ensure all the instances in my 3-tier architecture (Web, App, and DB) are properly connected and communicating within the VPC, I performed two essential network checks:</p>
<hr />
<h3 id="heading-1-ping-test-initial-connectivity-verification">✅ 1. <strong>Ping Test (Initial Connectivity Verification)</strong></h3>
<p>Before running any application-level commands, I verified the basic connectivity between all EC2 instances using <code>ping</code>.</p>
<ul>
<li><p>From <strong>Web (10.1.1.44)</strong>:</p>
<ul>
<li><p>Ping to App server (<code>10.1.2.97</code>)</p>
</li>
<li><p>Ping to DB server (<code>10.1.3.105</code>)</p>
</li>
</ul>
</li>
<li><p>From <strong>App (10.1.2.97)</strong>:</p>
<ul>
<li><p>Ping to Web server (<code>10.1.1.44</code>)</p>
</li>
<li><p>Ping to DB server (<code>10.1.3.105</code>)</p>
</li>
</ul>
</li>
<li><p>From <strong>DB (10.1.3.105)</strong>:</p>
<ul>
<li><p>Ping to Web server (<code>10.1.1.44</code>)</p>
</li>
<li><p>Ping to App server (<code>10.1.2.97</code>)</p>
</li>
</ul>
</li>
</ul>
<blockquote>
<p>📝 All ping tests were successful, confirming that the subnet routing and security group rules were correctly configured for basic communication.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752735954614/dbcd1f78-5b0a-48b6-848b-e1192acc8a98.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-verifying-internal-connectivity-using-telnet-in-3-tier-architecture">✅ Verifying Internal Connectivity using Telnet in 3-Tier Architecture</h3>
<p>To ensure all components in the 3-Tier Architecture (Web, App, and DB) can communicate with each other, we use the <code>telnet</code> command to test port-level connectivity.</p>
<p>Below is how each instance should verify connection with others using <strong>Telnet</strong>:</p>
<hr />
<h4 id="heading-from-web-server-public-subnet-ip-101144">🔹 From Web Server (Public Subnet - IP: <code>10.1.1.44</code>)</h4>
<ul>
<li><p>Check connectivity to <strong>App Server (Tomcat)</strong>:</p>
<pre><code class="lang-powershell">  telnet <span class="hljs-number">10.1</span>.<span class="hljs-number">2.97</span> <span class="hljs-number">8080</span>
</code></pre>
</li>
<li><p>Check connectivity to <strong>Database Server (MySQL)</strong>:</p>
<pre><code class="lang-powershell">  telnet <span class="hljs-number">10.1</span>.<span class="hljs-number">3.105</span> <span class="hljs-number">3306</span>
</code></pre>
</li>
</ul>
<hr />
<h4 id="heading-from-app-server-private-subnet-ip-101297">🔹 From App Server (Private Subnet - IP: <code>10.1.2.97</code>)</h4>
<ul>
<li><p>Check connectivity to <strong>Web Server</strong>:</p>
<pre><code class="lang-powershell">  telnet <span class="hljs-number">10.1</span>.<span class="hljs-number">1.44</span> <span class="hljs-number">22</span>
</code></pre>
</li>
<li><p>Check connectivity to <strong>Database Server</strong>:</p>
<pre><code class="lang-powershell">  telnet <span class="hljs-number">10.1</span>.<span class="hljs-number">3.105</span> <span class="hljs-number">3306</span>
</code></pre>
</li>
</ul>
<hr />
<h4 id="heading-from-db-server-private-subnet-ip-1013105">🔹 From DB Server (Private Subnet - IP: <code>10.1.3.105</code>)</h4>
<ul>
<li><p>Check connectivity to <strong>Web Server</strong>:</p>
<pre><code class="lang-powershell">  telnet <span class="hljs-number">10.1</span>.<span class="hljs-number">1.44</span> <span class="hljs-number">22</span>
</code></pre>
</li>
<li><p>Check connectivity to <strong>App Server</strong>:</p>
<pre><code class="lang-powershell">  telnet <span class="hljs-number">10.1</span>.<span class="hljs-number">2.97</span> <span class="hljs-number">8080</span>
</code></pre>
</li>
</ul>
<hr />
<blockquote>
<p>✅ If Telnet successfully connects (i.e., blank screen or "Connected"), it confirms that the port is open and reachable from that instance.</p>
<p>❌ If Telnet fails (i.e., "Connection refused" or "Unable to connect"), check <strong>Security Groups</strong>, <strong>NACLs</strong>, and <strong>Routing Tables</strong>.</p>
</blockquote>
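<p>If <code>telnet</code> is not preinstalled on the Ubuntu AMI, <code>nc</code> (netcat) performs the same port-level check:</p>
<pre><code class="lang-bash"># -z: probe the port without sending data, -v: print the result
nc -zv 10.1.2.97 8080
nc -zv 10.1.3.105 3306
</code></pre>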
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752735815959/ea072fff-0ca7-4e23-b415-0ebf26884c24.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752735820273/75ec1098-e843-40e1-9699-5dec5bbd1138.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752735830041/9b9d7673-7a90-4e65-8c0e-9763a4954a54.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">📝 Conclusion</h2>
<p>This project demonstrates a successful deployment of a secure and scalable 3-tier architecture on AWS using Nginx (Web), Tomcat (App), and MySQL (DB).<br />It follows best practices for network isolation, access control, and modular application deployment in a cloud environment.</p>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt /></p>
<p>This project is a deep dive into the AWS ecosystem designed to strengthen my foundation in cloud-native architecture, automation, and service integration <strong>using only AWS services</strong>.</p>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="https://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="https://www.linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
<hr />
<blockquote>
<p><strong><em>💡 If you found this project useful, or have any suggestions or feedback, feel free to reach out or drop a comment; I’d love to connect and improve.<br />This is just the beginning: many more builds, deployments, and learnings ahead.</em></strong></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA["Mastering AWS EC2 and EBS: Attach, Mount, Snapshot & Access Data Across Availability Zones"]]></title><description><![CDATA[🔰 Introduction

This blog is a practical guide to using Amazon EC2 and EBS together. You'll learn how to attach an extra EBS volume to an EC2 instance, mount and store data, and create a snapshot to access that data from another Availability Zone. I...]]></description><link>https://apurv-gujjar.me/mastering-aws-ec2-and-ebs-attach-mount-snapshot-and-access-data-across-availability-zones</link><guid isPermaLink="true">https://apurv-gujjar.me/mastering-aws-ec2-and-ebs-attach-mount-snapshot-and-access-data-across-availability-zones</guid><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[ebs]]></category><category><![CDATA[snapshot]]></category><category><![CDATA[availability zone]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Sat, 12 Jul 2025 02:30:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752206065742/6583d3ed-c6e6-42a0-82c5-d2c3d954d00a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🔰 <strong>Introduction</strong></p>
<blockquote>
<p>This blog is a practical guide to using Amazon EC2 and EBS together. You'll learn how to attach an extra EBS volume to an EC2 instance, mount and store data, and create a snapshot to access that data from another Availability Zone. It's a complete hands-on tutorial for real-world AWS scenarios like data backup, high availability, and cross-AZ storage access, perfect for beginners and DevOps learners.</p>
</blockquote>
<h2 id="heading-what-youll-do">🧠 <strong>What You'll Do</strong></h2>
<ul>
<li><p>Launch an EC2 instance</p>
</li>
<li><p>Attach an EBS volume</p>
</li>
<li><p>Format and mount it</p>
</li>
<li><p>Add sample data</p>
</li>
<li><p>Create a snapshot</p>
</li>
<li><p>Create a volume in another AZ using that snapshot</p>
</li>
<li><p>Attach to a new EC2 instance</p>
</li>
<li><p>Access the same data cross-AZ</p>
</li>
</ul>
<h2 id="heading-step-1-launch-ec2-instance">🟢 <strong>Step 1: Launch EC2 Instance</strong></h2>
<ul>
<li><p>Go to <strong>AWS Console &gt; EC2 &gt; Launch Instance</strong></p>
</li>
<li><p>Set:</p>
<ul>
<li><p>Name: <code>ebs-task-instance</code></p>
</li>
<li><p>AMI: Amazon Linux 2</p>
</li>
<li><p>Instance Type: <code>t2.micro</code></p>
</li>
<li><p>Key Pair: Create or choose existing</p>
</li>
<li><p>Subnet: e.g., <code>us-east-1c</code></p>
</li>
<li><p>Auto-assign Public IP: Enabled</p>
</li>
<li><p>Security Group: Allow SSH (22), HTTP (80) optional</p>
</li>
</ul>
</li>
<li><p>Keep default 8 GiB EBS</p>
</li>
<li><p>Click <strong>Launch</strong></p>
</li>
<li><p>After the instance is running, connect via:</p>
</li>
</ul>
<pre><code class="lang-powershell">ssh <span class="hljs-literal">-i</span> <span class="hljs-string">"aws-key.pem"</span> ec2<span class="hljs-literal">-user</span><span class="hljs-selector-tag">@</span>&lt;public<span class="hljs-literal">-ip</span>&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752206422638/2b1fe6c7-3c91-43ba-ac1f-eedecedc7f76.png" alt class="image--center mx-auto" /></p>
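<p>The console steps above can also be scripted with the AWS CLI. This is only a sketch: the AMI, key pair, and subnet IDs below are placeholders you must replace, and the <code>run</code> helper just prints each command until you set <code>DRY_RUN=0</code>.</p>

```bash
#!/usr/bin/env bash
# Sketch of Step 1 with the AWS CLI. ami-xxxxxxxx, subnet-xxxxxxxx, and
# "aws-key" are placeholders, not real IDs. With DRY_RUN=1 (the default)
# each command is only printed, so you can review it before running.
: "${DRY_RUN:=1}"
run() { echo "+ $*"; if [ "$DRY_RUN" = "0" ]; then "$@"; fi; }

run aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name aws-key \
  --subnet-id subnet-xxxxxxxx \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ebs-task-instance}]'
```

<p>Set <code>DRY_RUN=0</code> only once the printed command looks right for your account.</p>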
<h2 id="heading-step-2-create-and-attach-ebs-volume">🟢 <strong>Step 2: Create and Attach EBS Volume</strong></h2>
<ul>
<li><p>Go to <strong>EC2 &gt; Volumes &gt; Create Volume</strong></p>
</li>
<li><p>Set:</p>
<ul>
<li><p>Size: 10 GiB</p>
</li>
<li><p>Type: <code>gp3</code></p>
</li>
<li><p>AZ: <code>us-east-1c</code> (same as EC2)</p>
</li>
</ul>
</li>
<li><p>Create Volume</p>
</li>
<li><p>Select volume → <strong>Actions &gt; Attach Volume</strong></p>
</li>
<li><p>Attach to instance as: <code>/dev/sdf</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752206535570/12129e10-8a8a-42b8-a004-6d4add83ce21.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-3-format-and-mount-ebs-volume">🟢 <strong>Step 3: Format and Mount EBS Volume</strong></h2>
<p>Inside EC2:</p>
<pre><code class="lang-powershell">lsblk                          <span class="hljs-comment"># Check volume as /dev/xvdf</span>
sudo mkfs <span class="hljs-literal">-t</span> ext4 /dev/xvdf   <span class="hljs-comment"># Format volume</span>
sudo mkdir /mnt/myvolume      <span class="hljs-comment"># Create mount point</span>
sudo <span class="hljs-built_in">mount</span> /dev/xvdf /mnt/myvolume  <span class="hljs-comment"># Mount volume</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752206602317/c3e29112-6cf4-4fc1-9a5d-3b6214d36bb4.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><code>sudo mkfs -t ext4 /dev/xvdf</code><br />  🔹 Formats the volume <code>/dev/xvdf</code> with the ext4 filesystem.</p>
</li>
<li><p><code>sudo mkdir /mnt/myvolume</code><br />  🔹 Creates a directory to serve as a mount point for the volume.</p>
</li>
<li><p><code>sudo mount /dev/xvdf /mnt/myvolume</code><br />  🔹 Mounts the formatted volume to the mount point directory <code>/mnt/myvolume</code>.</p>
</li>
</ul>
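<p>One caveat: a volume mounted with <code>mount</code> alone disappears after a reboot. To persist it, add an <code>/etc/fstab</code> entry; referencing the UUID is safer than <code>/dev/xvdf</code>, since device names can change (Nitro instances expose NVMe names like <code>/dev/nvme1n1</code>). A sketch with a placeholder UUID:</p>

```bash
#!/usr/bin/env bash
# Build a persistent /etc/fstab entry for the mounted volume.
# The UUID below is a placeholder; in practice read it with:
#   UUID=$(sudo blkid -s UUID -o value /dev/xvdf)
UUID="1234abcd-0000-0000-0000-placeholder"
MOUNT_POINT=/mnt/myvolume

# "nofail" lets the instance boot even if the volume is missing.
FSTAB_LINE="UUID=${UUID} ${MOUNT_POINT} ext4 defaults,nofail 0 2"
echo "$FSTAB_LINE"

# To apply (requires root):
#   echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
#   sudo mount -a   # verify the entry mounts cleanly
```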
<h2 id="heading-step-4-add-sample-data">🟢 <strong>Step 4: Add Sample Data</strong></h2>
<pre><code class="lang-powershell"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Hello from Apurv's AWS Volume!"</span> | sudo <span class="hljs-built_in">tee</span> /mnt/myvolume/data.txt
<span class="hljs-built_in">cat</span> /mnt/myvolume/data.txt     <span class="hljs-comment"># Verify file</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752206699311/4327c67b-476a-42a0-a19c-ae49c91db071.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-5-create-snapshot-of-volume">🟢 <strong>Step 5: Create Snapshot of Volume</strong></h2>
<ul>
<li><p>Go to <strong>EC2 &gt; Volumes</strong></p>
</li>
<li><p>Select volume → <strong>Actions &gt; Create Snapshot</strong></p>
</li>
<li><p>Set:</p>
<ul>
<li><p>Name: <code>data-snapshot</code></p>
</li>
<li><p>Description: <code>Snapshot for cross-AZ access</code></p>
</li>
</ul>
</li>
<li><p>Click <strong>Create Snapshot</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752206805819/0c7e8ca5-f0c2-4baf-a057-cf602fe47b75.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-6-create-volume-in-another-az-from-snapshot">🟢 <strong>Step 6: Create Volume in Another AZ from Snapshot</strong></h2>
<ul>
<li><p>Go to <strong>EC2 &gt; Snapshots</strong></p>
</li>
<li><p>Select snapshot → <strong>Actions &gt; Create Volume</strong></p>
</li>
<li><p>Set:</p>
<ul>
<li><p>AZ: <code>us-east-1a</code> (different from original)</p>
</li>
<li><p>Size: 10 GiB</p>
</li>
</ul>
</li>
<li><p>Create volume</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752206898163/5b7a2006-9c7f-4576-b4db-05261c26490c.png" alt class="image--center mx-auto" /></p>
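<p>Steps 5 and 6 can likewise be scripted. A hedged sketch: the volume and snapshot IDs are placeholders, and the <code>run</code> helper only prints commands until <code>DRY_RUN=0</code>. Note that snapshot creation is asynchronous, so wait for it to complete before creating the new volume.</p>

```bash
#!/usr/bin/env bash
# Sketch of Steps 5-6 with the AWS CLI. vol-xxxxxxxx / snap-xxxxxxxx are
# placeholders. DRY_RUN=1 (the default) only prints the commands.
: "${DRY_RUN:=1}"
run() { echo "+ $*"; if [ "$DRY_RUN" = "0" ]; then "$@"; fi; }

# Step 5: snapshot the source volume
run aws ec2 create-snapshot \
  --volume-id vol-xxxxxxxx \
  --description "Snapshot for cross-AZ access"

# Block until the snapshot reaches the "completed" state
run aws ec2 wait snapshot-completed --snapshot-ids snap-xxxxxxxx

# Step 6: create a 10 GiB gp3 volume from the snapshot in another AZ
run aws ec2 create-volume \
  --snapshot-id snap-xxxxxxxx \
  --availability-zone us-east-1a \
  --volume-type gp3 \
  --size 10
```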
<h2 id="heading-step-7-launch-second-ec2-in-target-az">🟢 <strong>Step 7: Launch Second EC2 in Target AZ</strong></h2>
<ul>
<li><p>Launch another EC2 instance</p>
</li>
<li><p>Subnet: <code>us-east-1a</code></p>
</li>
<li><h3 id="heading-use-same-settings-as-step-1"><strong><mark>Use same settings as Step 1</mark></strong></h3>
</li>
</ul>
<h2 id="heading-step-8-attach-snapshot-based-volume">🟢 <strong>Step 8: Attach Snapshot-Based Volume</strong></h2>
<ul>
<li><p>Go to <strong>EC2 &gt; Volumes</strong></p>
</li>
<li><p>Select new volume → <strong>Actions &gt; Attach Volume</strong></p>
</li>
<li><p>Attach to second EC2 as: <code>/dev/sdf</code></p>
</li>
</ul>
<h2 id="heading-step-9-mount-and-access-data-in-2nd-ec2">🟢 <strong>Step 9: Mount and Access Data in 2nd EC2</strong></h2>
<p>Inside second EC2:</p>
<pre><code class="lang-powershell">lsblk
sudo mkdir /mnt/myvolume
sudo <span class="hljs-built_in">mount</span> /dev/xvdf /mnt/myvolume
<span class="hljs-built_in">cat</span> /mnt/myvolume/data.txt    <span class="hljs-comment"># You'll see the same data!</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752207086660/6e5b8af4-8c64-4ee2-bb20-5d2d79ca0ba6.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">✅ <strong>Conclusion</strong></h2>
<p>In this blog, we covered a complete hands-on process of managing storage in AWS using EC2 and EBS. From launching an EC2 instance and attaching a new EBS volume, to formatting and mounting it and writing custom data, each step mirrors real-world AWS workflows.</p>
<p>We then created a snapshot of that volume, and used it to create a new volume in a <strong>different Availability Zone</strong>, proving that:</p>
<blockquote>
<p><strong>You can easily access your data from any Availability Zone in AWS</strong> by creating volumes from snapshots and attaching them to instances in the target AZ.</p>
</blockquote>
<p>This process is not only useful for <strong>cross-AZ data access</strong>, but also essential for:</p>
<ul>
<li><p><strong>Disaster recovery</strong></p>
</li>
<li><p><strong>High availability setups</strong></p>
</li>
<li><p><strong>Data backups and migration</strong></p>
</li>
</ul>
<p>Whether you're preparing for AWS certification or building scalable cloud systems, mastering these EBS and EC2 operations is a foundational skill every cloud or DevOps engineer should have.</p>
<h2 id="heading-about-the-author"><strong>👨‍💻 About the Author</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png?auto=compress,format&amp;format=webp" alt /></p>
<p>This project is a deep dive into the AWS ecosystem, designed to strengthen my foundation in cloud-native architecture, automation, and service integration <strong>using only AWS services</strong>.</p>
<blockquote>
<p><strong><em>From launching EC2 instances, managing storage with S3 and EBS, configuring IAM for secure access, setting up VPCs and subnets, to automating infrastructure with CloudFormation, each service I used brought real-world relevance and clarity to cloud concepts.</em></strong></p>
</blockquote>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: <a target="_blank" href="mailto:gujjarapurv181@gmail.com"><strong>gujjarapurv181@gmail.com</strong></a></p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="https://github.com/ApurvGujjar07"><strong>github.com/ApurvGujjar07</strong></a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="https://www.linkedin.com/in/apurv-gujjar"><strong>linkedin.com/in/apurv-gujjar</strong></a></p>
</li>
</ul>
<hr />
<blockquote>
<p><strong><em>💡 If you found this project useful, or have any suggestions or feedback, feel free to reach out or drop a comment I’d love to connect and improve.<br />This is just the beginning many more builds, deployments, and learnings ahead.</em></strong></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[The Ultimate AWS EBS Workflow: EC2 Integration]]></title><description><![CDATA[📝 Abstract
In this guide, I demonstrate the complete lifecycle of working with AWS EC2 and EBS volumes starting from launching an EC2 instance and attaching a new EBS volume, to preserving data by disabling delete-on-termination.
You’ll also learn h...]]></description><link>https://apurv-gujjar.me/ebs-part-1</link><guid isPermaLink="true">https://apurv-gujjar.me/ebs-part-1</guid><category><![CDATA[AWS]]></category><category><![CDATA[ebs]]></category><category><![CDATA[snapshot]]></category><category><![CDATA[Backup]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Thu, 10 Jul 2025 04:27:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752028453343/33e4d09a-ae78-4eec-9adb-69720a44d953.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-abstract">📝 Abstract</h2>
<p>In this guide, I demonstrate the <strong>complete lifecycle of working with AWS EC2 and EBS volumes</strong>, starting from launching an EC2 instance and attaching a new EBS volume, to <strong>preserving data</strong> by disabling <em>delete-on-termination</em>.</p>
<p>You’ll also learn how to:</p>
<ul>
<li><p><strong>Recover EBS data</strong> after EC2 termination</p>
</li>
<li><p><strong>Migrate EBS volumes across Availability Zones</strong> using snapshots</p>
</li>
<li><p><strong>Copy snapshots across AWS Regions</strong> to restore data remotely</p>
</li>
</ul>
<p>This end-to-end workflow ensures <strong>data durability</strong>, <strong>flexibility</strong>, and <strong>disaster recovery readiness</strong> for modern cloud infrastructure setups.</p>
<h3 id="heading-step-1-launch-an-ec2-instance">✅ Step 1: Launch an EC2 Instance</h3>
<p>To begin, launch a new EC2 instance:</p>
<ul>
<li><p>Go to the <strong>EC2 Dashboard</strong> in AWS Management Console.</p>
</li>
<li><p>Click on <strong>“Launch Instance”</strong>.</p>
</li>
<li><p>Select an appropriate <strong>AMI</strong> (Amazon Machine Image), such as Ubuntu or Amazon Linux.</p>
</li>
<li><p>Choose an <strong>instance type</strong> (e.g., t2.micro for testing or t3.medium for production).</p>
</li>
<li><p>In the <strong>network section</strong>, you can <strong>keep the default VPC and Subnet</strong> settings — AWS will automatically manage networking unless you have a custom setup.</p>
</li>
<li><p>Continue with key pair, storage, and security group settings.</p>
</li>
</ul>
<p>Once launched, the EC2 instance will automatically have a <strong>root EBS volume</strong> attached.</p>
<h3 id="heading-understanding-the-delete-on-termination-setting-in-ec2">🔒 Understanding the “Delete on Termination” Setting in EC2</h3>
<p>When launching an EC2 instance, AWS automatically attaches a <strong>root EBS volume</strong> to store the operating system and configuration files. By default, this root volume is <strong>deleted</strong> when the instance is terminated — but you can control this behavior.</p>
<h4 id="heading-why-its-important-to-uncheck-delete-on-termination">📌 Why it's important to uncheck “Delete on Termination”:</h4>
<ul>
<li><p>🔐 <strong>Preserve critical data:</strong> Prevents the root EBS volume from being deleted, even if the EC2 instance is terminated.</p>
</li>
<li><p>🔄 <strong>Easily recover configuration:</strong> Retain OS setup, logs, and installed packages for later use or migration.</p>
</li>
<li><p>💼 <strong>Safe for production use:</strong> Especially helpful in staging or production environments where instance termination is temporary or planned.</p>
</li>
<li><p>☁️ <strong>Avoid accidental data loss:</strong> Ensures your volume (and its data) remains available in the <strong>EBS → Volumes</strong> section after termination.</p>
</li>
<li><p>🔧 <strong>Attach to a new EC2:</strong> The preserved volume can be re-attached to any EC2 instance in the <strong>same Availability Zone</strong> for instant recovery.</p>
</li>
</ul>
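<p>If the instance is already running, you don't need to recreate it to change this flag: it can be flipped with the AWS CLI. A sketch with placeholder IDs (check your actual root device name first, often <code>/dev/xvda</code> on Amazon Linux or <code>/dev/sda1</code> on Ubuntu); the <code>run</code> helper only prints the command until <code>DRY_RUN=0</code>.</p>

```bash
#!/usr/bin/env bash
# Sketch: disable DeleteOnTermination on a running instance's root volume.
# i-xxxxxxxx and /dev/xvda are placeholders. DRY_RUN=1 only prints.
: "${DRY_RUN:=1}"
run() { echo "+ $*"; if [ "$DRY_RUN" = "0" ]; then "$@"; fi; }

run aws ec2 modify-instance-attribute \
  --instance-id i-xxxxxxxx \
  --block-device-mappings \
  '[{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":false}}]'
```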
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752029929308/7b19e150-5e37-42ad-a96c-d492c7b6f700.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752029942629/0f547661-6105-431d-8fcb-0bb979e12354.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752029686849/5419eb8d-7490-436d-a4b9-ce2acfc302ad.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>As shown below, AWS automatically creates and attaches a root EBS volume when you launch an EC2 instance.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752029752332/06ac26d6-72ef-4b9e-b9d9-3eaa45b99828.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-create-and-attach-a-new-ebs-volume-to-your-ec2-instance">✅ Step 2: Create and Attach a New EBS Volume to Your EC2 Instance</h3>
<p>After launching your EC2 instance, you may need extra storage. For that, you can create a new EBS volume and attach it.</p>
<h4 id="heading-steps-to-create">🔧 Steps to Create:</h4>
<ul>
<li><p>Go to <strong>EC2 → Elastic Block Store → Volumes → Create Volume</strong></p>
</li>
<li><p>Select <strong>Volume Type</strong>, <strong>Size</strong>, and most importantly, choose the <strong>same Availability Zone</strong> as your EC2 instance</p>
<blockquote>
<p>📌 EBS volumes can only be attached to EC2 instances in the <strong>same Availability Zone</strong></p>
</blockquote>
</li>
</ul>
<h4 id="heading-steps-to-attach">🔗 Steps to Attach:</h4>
<ul>
<li><p>After creation, select the volume → Click <strong>Actions → Attach Volume</strong></p>
</li>
<li><p>Choose your EC2 instance and confirm the <strong>device name</strong> (e.g., <code>/dev/xvdf</code>)</p>
</li>
<li><p>Click <strong>Attach</strong> — your volume is now connected to the instance</p>
</li>
</ul>
<blockquote>
<p>✅ You’ll now need to format and mount the volume inside the EC2, which we’ll do in the next step.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752030513241/516964dd-2863-403f-8784-6e023afa65c4.png" alt class="image--center mx-auto" /></p>
</blockquote>
<p>✅ Our new EBS volume has been successfully created and is now ready to be attached to the EC2 instance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752030572775/81e0f79d-7aba-4b05-9748-1dcab0f448fb.png" alt class="image--center mx-auto" /></p>
<p>📌 Go to <strong>Actions → Attach Volume</strong>, then select your EC2 instance to attach the volume.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752030621249/fe020805-2a1c-4809-9e86-38d63df4f513.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752030672570/375c1eeb-d807-43ba-a2a5-d3e091917b4c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3-connect-to-ec2-and-mount-the-attached-ebs-volume">✅ Step 3: Connect to EC2 and Mount the Attached EBS Volume</h3>
<p>Once your EBS volume is attached, you need to <strong>connect to your EC2 instance</strong> and prepare the volume for use.</p>
<h4 id="heading-steps-to-check-and-mount">💻 Steps to Check and Mount:</h4>
<ol>
<li><p><strong>SSH into your EC2 instance</strong> using terminal or any SSH client<br /> <em>(e.g., using</em> <code>.pem</code> key file)</p>
</li>
<li><p>Run the following to <strong>verify attached volumes</strong>:</p>
<pre><code class="lang-powershell"> lsblk
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752030849427/80bd43cf-84a9-472c-8a07-de418783455d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-4-format-the-ebs-volume-with-ext4-file-system">🧩 Step 4: Format the EBS Volume with ext4 File System</h2>
<pre><code class="lang-powershell">sudo mkfs <span class="hljs-literal">-t</span> ext4 /dev/xvdd
</code></pre>
<ul>
<li><p><code>mkfs</code>: Stands for “make file system” — used to format the disk.</p>
</li>
<li><p><code>-t ext4</code>: Specifies the ext4 file system type (widely used in Linux).</p>
</li>
<li><p><code>/dev/xvdd</code>: The name of the newly attached EBS volume.</p>
</li>
</ul>
<blockquote>
<p>📌 This command prepares the volume for storing files by formatting it.</p>
</blockquote>
<hr />
<h2 id="heading-step-5-create-a-mount-point">📁 Step 5: Create a Mount Point</h2>
<pre><code class="lang-powershell">sudo mkdir /mydata
</code></pre>
<ul>
<li><p><code>mkdir</code>: Command to make a new directory.</p>
</li>
<li><p><code>/mydata</code>: This directory will act as the mount point for the EBS volume.</p>
</li>
</ul>
<blockquote>
<p>📌 This is where your EBS volume will be accessible from.</p>
</blockquote>
<hr />
<h2 id="heading-step-6-mount-the-volume">🔗 Step 6: Mount the Volume</h2>
<pre><code class="lang-powershell">sudo <span class="hljs-built_in">mount</span> /dev/xvdd /mydata
</code></pre>
<ul>
<li><p><code>mount</code>: Command to attach the volume to a directory.</p>
</li>
<li><p><code>/dev/xvdd</code>: The formatted EBS volume.</p>
</li>
<li><p><code>/mydata</code>: The directory where the volume will be mounted.</p>
</li>
</ul>
<blockquote>
<p>📌 This makes the volume usable like a local disk through <code>/mydata</code>.</p>
</blockquote>
<hr />
<h2 id="heading-step-7-verify-the-mount-status">📊 Step 7: Verify the Mount Status</h2>
<pre><code class="lang-powershell">df <span class="hljs-literal">-h</span>
</code></pre>
<ul>
<li><p><code>df</code>: Disk free — shows storage usage.</p>
</li>
<li><p><code>-h</code>: Human-readable format (GB/MB).</p>
</li>
</ul>
<blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752033383255/929e96a4-ab49-43a3-9154-7c17415b2ec9.png" alt class="image--center mx-auto" /></p>
<p>📌 This helps confirm whether the EBS volume is mounted and shows its usage.</p>
</blockquote>
<hr />
<h2 id="heading-step-8-write-data-into-the-mounted-volume">📝 Step 8: Write Data into the Mounted Volume</h2>
<pre><code class="lang-powershell"><span class="hljs-built_in">echo</span> <span class="hljs-string">"This is task for EBS performed by Apurv Gujjar"</span> | sudo <span class="hljs-built_in">tee</span> /mydata/ebs<span class="hljs-literal">-test</span>.txt
</code></pre>
<ul>
<li><p><code>echo</code>: Prints the given message.</p>
</li>
<li><p><code>| sudo tee</code>: Pipes the message into a file with root permissions.</p>
</li>
<li><p><code>/mydata/ebs-test.txt</code>: The file to create inside the mounted volume.</p>
</li>
</ul>
<blockquote>
<p>📌 This command creates a test file in the EBS volume to validate it's writable.</p>
</blockquote>
<hr />
<h2 id="heading-step-9-check-stored-data">🔍 Step 9: Check Stored Data</h2>
<pre><code class="lang-powershell"><span class="hljs-built_in">ls</span> /mydata
</code></pre>
<ul>
<li><p><code>ls</code>: Lists all files and folders in the directory.</p>
</li>
<li><p><code>/mydata</code>: The mount point where the volume is attached.</p>
</li>
</ul>
<pre><code class="lang-powershell"><span class="hljs-built_in">cat</span> /mydata/ebs<span class="hljs-literal">-test</span>.txt
</code></pre>
<ul>
<li><p><code>cat</code>: Displays file content in the terminal.</p>
</li>
<li><p><code>/mydata/ebs-test.txt</code>: The test file we just created.</p>
</li>
</ul>
<blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752033421246/cc4aa9c3-cb78-43c1-9669-da856a7101de.png" alt class="image--center mx-auto" /></p>
<p>📌 Confirms that the file (<code>ebs-test.txt</code>) exists and contains the data you wrote earlier.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752033572954/a23a57c6-5e8a-4fd8-afd4-cc3f932ec8d1.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-about-the-author">👨‍💻 About the Author</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png" alt class="image--center mx-auto" /></p>
<p>This project is a deep dive into the AWS ecosystem, designed to strengthen my foundation in cloud-native architecture, automation, and service integration <strong>using only AWS services</strong>.</p>
<blockquote>
<p>From launching <strong>EC2 instances</strong>, managing storage with <strong>S3 and EBS</strong>, configuring <strong>IAM for secure access</strong>, setting up <strong>VPCs and subnets</strong>, to automating infrastructure with <strong>CloudFormation</strong>, each service I used brought real-world relevance and clarity to cloud concepts.</p>
</blockquote>
<p>This series isn't just about using AWS; it's about <strong>mastering the core services that power modern cloud infrastructure</strong>.</p>
<hr />
<h3 id="heading-lets-stay-connected">📬 Let's Stay Connected</h3>
<ul>
<li><p>📧 <strong>Email</strong>: gujjarapurv181@gmail.com</p>
</li>
<li><p>🐙 <strong>GitHub</strong>: <a target="_blank" href="https://github.com/ApurvGujjar07">github.com/ApurvGujjar07</a></p>
</li>
<li><p>💼 <strong>LinkedIn</strong>: <a target="_blank" href="https://www.linkedin.com/in/apurv-gujjar">linkedin.com/in/apurv-gujjar</a></p>
</li>
</ul>
<hr />
<blockquote>
<p>💡 <em>If you found this project useful, or have any suggestions or feedback, feel free to reach out or drop a comment; I’d love to connect and improve.</em><br />This is just the beginning: <strong>many more builds, deployments, and learnings ahead.</strong></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA["Two-Tier Web App with Docker: Flask Frontend + MySQL Backend"]]></title><description><![CDATA[📘 Introduction
In this blog, I’ll walk you through how I deployed a two-tier Flask web application with a MySQL database using Docker containers on an AWS EC2 instance. This setup ensures container isolation, environment consistency, and real-time c...]]></description><link>https://apurv-gujjar.me/flask-docker-app</link><guid isPermaLink="true">https://apurv-gujjar.me/flask-docker-app</guid><category><![CDATA[Docker]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[deployment]]></category><category><![CDATA[Devops]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[Databases]]></category><dc:creator><![CDATA[Gujjar Apurv]]></dc:creator><pubDate>Tue, 08 Jul 2025 04:09:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946963253/82ca568b-cfbc-42d1-8169-218b23a9b1f7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">📘 <strong>Introduction</strong></h2>
<p>In this blog, I’ll walk you through how I deployed a <strong>two-tier Flask web application</strong> with a <strong>MySQL database</strong> using <strong>Docker</strong> containers on an <strong>AWS EC2 instance</strong>. This setup ensures container isolation, environment consistency, and real-time communication between the app and database via Docker networking.</p>
<p>By the end of this guide, your application will be accessible over the internet with just a public IP and a browser!</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>🚀🚀 Check out the full project on my </strong><a target="_self" href="https://github.com/ApurvGujjar07/Two-Tier-Flask-App-.git">GitHub Repository</a></div>
</div>

<h2 id="heading-1-launching-aws-ec2-instance-ubuntuhttpsgithubcomgujjarapurvtwo-tier-flask-docker">☁️ <strong>1. Launching AWS EC2 Instance (Ubuntu)</strong></h2>
<ul>
<li><p>Go to the AWS EC2 Console.</p>
</li>
<li><p>Launch a new instance with the following:</p>
<ul>
<li><p><strong>AMI</strong>: Ubuntu 20.04 or 22.04 LTS</p>
</li>
<li><p><strong>Instance Type</strong>: t2.micro (Free Tier)</p>
</li>
<li><p><strong>Storage</strong>: 8 GB or more</p>
</li>
<li><p><strong>Security Group Rules</strong>:</p>
<ul>
<li><p>SSH (Port 22) – for terminal access</p>
</li>
<li><p>HTTP (Port 80) – optional</p>
</li>
<li><p><strong>Custom TCP (Port 5000)</strong> – to access Flask App</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p>Key pair: Create or use an existing one (e.g., <code>my-key.pem</code>)</p>
</li>
</ul>
<blockquote>
<p>✅ Connect to the instance:</p>
</blockquote>
<pre><code class="lang-powershell">ssh <span class="hljs-literal">-i</span> my<span class="hljs-literal">-key</span>.pem ubuntu@your<span class="hljs-literal">-public</span><span class="hljs-literal">-ip</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751945661664/9d24457c-6a8f-488b-be2b-64a7748a903e.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-2-installing-docker-and-git-on-ubuntu">🐳 <strong>2. Installing Docker and Git on Ubuntu</strong></h2>
<pre><code class="lang-powershell">sudo apt update
sudo apt install <span class="hljs-literal">-y</span> docker.io 
sudo usermod <span class="hljs-literal">-aG</span> docker <span class="hljs-variable">$USER</span> &amp;&amp; newgrp docker
</code></pre>
<p><strong>1.</strong> <code>sudo apt update</code><br />Updates the package list to fetch the latest versions available from repositories.</p>
<p><strong>2.</strong> <code>sudo apt install -y docker.io</code><br />Installs Docker Engine from Ubuntu’s default repository without asking for confirmation.</p>
<p><strong>3.</strong> <code>sudo usermod -aG docker $USER &amp;&amp; newgrp docker</code><br />Adds your user to the Docker group so you can run Docker without <code>sudo</code>.</p>
<h2 id="heading-3-cloning-the-flask-project">🔧 <strong>3. Cloning the Flask Project</strong></h2>
<pre><code class="lang-powershell">git clone https://github.com/ApurvGujjar07/Two-Tier-Flask-App-.git
cd Two-Tier-Flask-App-
ls
</code></pre>
<p>You’ll see files like <code>app.py</code>, <code>Dockerfile</code>, etc.</p>
<h2 id="heading-4-build-flask-app-docker-image">📦 <strong>4. Build Flask App Docker Image</strong></h2>
<pre><code class="lang-powershell">docker build <span class="hljs-literal">-t</span> two<span class="hljs-literal">-tier</span><span class="hljs-literal">-backend</span> .
</code></pre>
<p>This command packages the app into a Docker image named <code>two-tier-backend</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946195016/27df688c-5725-445d-9546-486e0f839eb0.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-5-create-docker-network">🌐 <strong>5. Create Docker Network</strong></h2>
<pre><code class="lang-powershell">docker network create mynetwork
</code></pre>
<p>This allows containers to communicate internally by name (DNS-style linking).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946242612/56109110-d953-4a20-959c-a778b1dfbf3a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-6-run-mysql-container">🐬 <strong>6. Run MySQL Container</strong></h2>
<pre><code class="lang-bash">docker run -d --name mysql --network mynetwork \
  -e MYSQL_ROOT_PASSWORD=root \
  -e MYSQL_DATABASE=devops \
  mysql
</code></pre>
<blockquote>
<p>✅ MySQL now runs and is ready for connection on the Docker network.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946280314/b7bfb388-c01f-4001-944f-6343bef4b283.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-7-run-flask-app-container">🚀 <strong>7. Run Flask App Container</strong></h2>
<pre><code class="lang-bash">docker run -d -p 5000:5000 --network mynetwork \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=root \
  -e MYSQL_DB=devops \
  two-tier-backend:latest
</code></pre>
<p>Your Flask app is now running and mapped to <strong>port 5000</strong> of your EC2 instance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946355676/4456af58-0a09-481d-bea8-56fa6844e85b.png" alt class="image--center mx-auto" /></p>
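<p>Inside the container, the Flask app can pick up these settings from the environment. Here is a minimal Python sketch of that pattern (the actual <code>app.py</code> in the repo may read them differently; <code>load_db_config</code> is a hypothetical helper whose defaults mirror the <code>-e</code> flags above):</p>
<pre><code class="lang-python">import os

def load_db_config():
    """Read DB settings from the environment (the -e flags above)."""
    return {
        "host": os.environ.get("MYSQL_HOST", "mysql"),
        "user": os.environ.get("MYSQL_USER", "root"),
        "password": os.environ.get("MYSQL_PASSWORD", "root"),
        "database": os.environ.get("MYSQL_DB", "devops"),
    }

print(load_db_config())
</code></pre>
<p>Because the MySQL container was named <code>mysql</code> on the shared network, the hostname <code>mysql</code> resolves to it automatically.</p>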
<h2 id="heading-8-verify-everything-is-working">🔍 <strong>8. Verify Everything is Working</strong></h2>
<h3 id="heading-check-running-containers">Check running containers</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946373970/2fff60e2-7058-4b54-aced-7040f9f0d9d7.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-9-access-app-in-browser">🌐 <strong>9. Access App in Browser</strong></h2>
<ul>
<li><p>Go to AWS Console → <strong>Security Groups</strong></p>
</li>
<li><p><strong>Add inbound rule</strong>:</p>
<ul>
<li><p>Type: Custom TCP</p>
</li>
<li><p>Port: <strong>5000</strong></p>
</li>
<li><p>Source: Anywhere (0.0.0.0/0)</p>
</li>
</ul>
</li>
</ul>
<p>Then visit:<br />🌍 <code>http://&lt;your-public-ip&gt;:5000</code></p>
<blockquote>
<p>✅ You should see the live Flask web app now!</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946529474/09a837d1-98f8-4b9a-b7c9-d0340cb1d2fd.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946556646/2372b1de-4bfc-4df3-9a86-3540f58672a8.png" alt class="image--center mx-auto" /></p>
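<p>You can also probe the endpoint from a script instead of the browser. A small sketch (<code>app_url</code> and <code>is_up</code> are hypothetical helpers, and <code>203.0.113.10</code> is a placeholder IP; substitute your instance’s public IP):</p>
<pre><code class="lang-python">import urllib.request

def app_url(public_ip, port=5000):
    """Build the public URL for the app."""
    return f"http://{public_ip}:{port}"

def is_up(url, timeout=5):
    """Return True if the app answers with an HTTP 2xx/3xx response."""
    try:
        urllib.request.urlopen(url, timeout=timeout).close()
        return True
    except OSError:
        return False

print(app_url("203.0.113.10"))
</code></pre>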
<h3 id="heading-connect-to-mysql-and-verify-db">Connect to MySQL and verify DB:</h3>
<pre><code class="lang-bash">docker exec -it mysql mysql -u root -p
# Enter the password: root
</code></pre>
<p>Then run:</p>
<pre><code class="lang-sql">SHOW DATABASES;
USE devops;
SHOW TABLES;
SELECT * FROM messages;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946633049/4a7abbca-4112-4baa-aeba-979fc2286f77.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751946612074/d7bf6f52-e727-4101-bd87-3e7da1ba88be.png" alt class="image--center mx-auto" /></p>
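<p>The verification queries assume the app has created a <code>messages</code> table. To see what they do conceptually without a running MySQL container, here is a self-contained <code>sqlite3</code> sketch (the schema here is hypothetical; the real table is created by the Flask app and may differ):</p>
<pre><code class="lang-python">import sqlite3

# In-memory stand-in for the MySQL 'devops' database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, message TEXT)")
conn.execute("INSERT INTO messages (message) VALUES ('hello from flask')")
conn.commit()

# Equivalent of: SELECT * FROM messages;
rows = conn.execute("SELECT * FROM messages").fetchall()
print(rows)  # each message submitted via the app would appear as a row
</code></pre>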
<h2 id="heading-conclusion">🧠 <strong>Conclusion</strong></h2>
<p>In this blog, I demonstrated how to:</p>
<ul>
<li><p>Launch an EC2 instance</p>
</li>
<li><p>Install and configure Docker</p>
</li>
<li><p>Deploy a two-tier Flask + MySQL app</p>
</li>
<li><p>Connect containers via Docker network</p>
</li>
<li><p>Expose the app publicly via port 5000</p>
</li>
</ul>
<p>This is a solid DevOps practice project showing real-world containerization, networking, and deployment skills.</p>
<h2 id="heading-about-the-author">👨‍💻 About the Author</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751797710818/123a7231-3dca-4273-ad68-7bd026f69b95.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Hi, I'm <strong>Apurv Gujjar</strong>, a passionate and aspiring DevOps Engineer.<br />"This project marks the beginning of my journey into the world of DevOps, containerization, and cloud-native application deployment."</p>
</blockquote>
]]></content:encoded></item></channel></rss>