<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Mahdi Bagheri's Blog]]></title><description><![CDATA[Mahdi Bagheri's Blog]]></description><link>https://blog.bagheri.me</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 07:47:04 GMT</lastBuildDate><atom:link href="https://blog.bagheri.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Why You Should Consider Using Email Aliases]]></title><description><![CDATA[Most of us created our first email address when we were younger, and many of us still use them today.Over the years we registered on multiple services and subscribed to many newsletters. As a result we received many mails from these services filling ...]]></description><link>https://blog.bagheri.me/why-you-should-consider-using-email-aliases</link><guid isPermaLink="true">https://blog.bagheri.me/why-you-should-consider-using-email-aliases</guid><category><![CDATA[addy.io]]></category><category><![CDATA[privacy]]></category><category><![CDATA[email]]></category><category><![CDATA[alias]]></category><category><![CDATA[simplelogin]]></category><dc:creator><![CDATA[Mahdi Bagheri]]></dc:creator><pubDate>Sun, 02 Mar 2025 22:25:18 GMT</pubDate><content:encoded><![CDATA[<p>Most of us created our first email address when we were younger, and many of us still use it today.<br />Over the years we registered on multiple services and subscribed to many newsletters. As a result we received many emails from these services, filling up our mailboxes. But soon we realized that some of these emails are spam or phishing attempts, recognizable by their odd content, poor grammar or unfamiliar senders.</p>
<p>Sometimes, services we registered for experience data breaches, in which user emails can be exposed. If our email was leaked in such a breach, it is only a matter of time until an attacker gets hold of it and targets us with phishing attempts. The more breaches our email appears in, the more phishing emails we will receive.</p>
<p>By visiting the website <a target="_blank" href="https://haveibeenpwned.com/">haveibeenpwned.com</a> we can check whether our email was exposed in a data breach and at which service the breach occurred.</p>
<h1 id="heading-email-alias-advantages">Email Alias Advantages</h1>
<p>An email alias is an alternative email address that forwards incoming messages to our main inbox. By using an alias to register for services, we avoid the exposure of our primary email address. If a service experiences a data breach, only the alias is affected and not our primary email.</p>
<p>When using unique aliases for different services, we can easily identify the compromised service if we receive a phishing email. In case our alias has been compromised, we can simply delete it and create a new one to minimize spam and prevent further phishing attempts. By doing so we can add an extra layer of privacy and security to our email and accounts.</p>
<h1 id="heading-creating-email-aliases">Creating Email Aliases</h1>
<p>Most email service providers offer email aliases as part of their premium subscriptions. In some cases a custom domain may be required. Depending on the provider there may be no restrictions at all on creating aliases, while other providers limit the number of aliases.</p>
<p>In case the email provider doesn’t support aliases or imposes strict restrictions, third-party email forwarding services like <a target="_blank" href="https://addy.io/">addy.io</a> (AnonAddy) or <a target="_blank" href="https://simplelogin.io/">SimpleLogin</a> could be considered. To use them fully without restrictions a subscription is needed. addy.io can also be self-hosted.</p>
<p>Before applying this approach, it would be best to create an entirely new email address or use one that hasn’t appeared in a breach.</p>
<p>My service of choice is SimpleLogin, mainly because of its user-friendly and intuitive mobile app, which has been incredibly useful in situations where I needed to create new aliases on the go.</p>
<p>Disclaimer: This is not a sponsored post. I am simply sharing my personal experience after using both services. Also it has been a while since I last used addy.io. I recommend trying both to determine which one best suits your needs.</p>
]]></content:encoded></item><item><title><![CDATA[Begin Security Monitoring with Wazuh]]></title><description><![CDATA[Introduction
Wazuh is a monitoring solution with focus on security, combining features for XDR (Extended Detection and Response) and SIEM (Security Information and Event Management) into one platform. And the best part is, it’s free and open source.
...]]></description><link>https://blog.bagheri.me/begin-security-monitoring-with-wazuh</link><guid isPermaLink="true">https://blog.bagheri.me/begin-security-monitoring-with-wazuh</guid><category><![CDATA[wazuh]]></category><category><![CDATA[SIEM]]></category><category><![CDATA[xdr]]></category><category><![CDATA[Security]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Mahdi Bagheri]]></dc:creator><pubDate>Tue, 28 Jan 2025 23:11:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738105514266/53cafd58-85d4-4004-81f6-20c894b8c3eb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Wazuh is a monitoring solution with a focus on security, combining features for XDR (Extended Detection and Response) and SIEM (Security Information and Event Management) into one platform. And the best part is, it’s free and open source.</p>
<p>Some of the capabilities of Wazuh are <em>Log Data Analysis</em>, <em>Intrusion Detection</em>, <em>File Integrity Monitoring</em>, <em>Vulnerability Detection</em> and <em>Compliance Reporting</em>.</p>
<p>By using a security monitoring solution like Wazuh, we can gain more insight into the security posture of our machines and act faster when threats and vulnerabilities arise, mitigating possible attacks.</p>
<p>In this article we are going to set up Wazuh and install our first Wazuh agent on a machine of our choice to start collecting data.</p>
<h1 id="heading-installation">Installation</h1>
<p>The following steps show how to set up Wazuh with Docker. Wazuh can be deployed as a <em>single-node</em> or <em>multi-node</em> stack; here we will deploy the single-node stack.<br />As recommended by the documentation, we start by adding the following line to our <em>/etc/sysctl.conf</em> file:</p>
<pre><code class="lang-bash">vm.max_map_count=262144
</code></pre>
<p>To apply the new setting without a reboot, we can additionally run <em>sudo sysctl -p</em>.</p>
<h2 id="heading-retrieving-the-source-codes">Retrieving the source codes</h2>
<p>First we will clone the Wazuh repository to our system via a git clone command:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/wazuh/wazuh-docker.git -b v4.10.1
</code></pre>
<h2 id="heading-generating-self-signed-certificates">Generating self-signed certificates</h2>
<p>Next we move into the <em>wazuh-docker/single-node</em> folder. There we are provided with the <em>generate-indexer-certs.yml</em> file, through which we will generate the certificates for our Wazuh containers. We execute it with:</p>
<pre><code class="lang-bash">docker-compose -f generate-indexer-certs.yml run --rm generator
</code></pre>
<p>This will generate the certificates for the Wazuh indexer, the Wazuh manager and the Wazuh dashboard.</p>
<p>If you don’t want to run the application behind a reverse proxy, this is all for the first part: a single <em>docker-compose up</em> will start the application and map it to host port 443.</p>
<h2 id="heading-running-wazuh-behind-a-reverse-proxy">Running Wazuh behind a Reverse Proxy</h2>
<p>First adjust the docker-compose.yml file and map the Wazuh dashboard container port 5601 to host port 5601 or any other available port of your liking.</p>
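<p>For illustration, the adjusted mapping in the dashboard service of <em>docker-compose.yml</em> might look like this (the default file maps host port 443):</p>
<pre><code class="lang-yaml">ports:
  - "5601:5601" # host:container, any free host port works on the left side
</code></pre>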
<p>Next navigate to <em>wazuh-docker/single-node/config/wazuh_dashboard</em> and configure the <em>opensearch_dashboards.yml</em> file by commenting out the <em>server.ssl.key</em> and <em>server.ssl.certificate</em> entries and changing the <em>server.ssl.enabled</em> value to <em>false</em>.</p>
<pre><code class="lang-bash">server.ssl.enabled: <span class="hljs-literal">false</span>
<span class="hljs-comment">#server.ssl.key: "/usr/share/wazuh-dashboard/certs/wazuh-dashboard-key.pem"</span>
<span class="hljs-comment">#server.ssl.certificate: "/usr/share/wazuh-dashboard/certs/wazuh-dashboard.pem"</span>
</code></pre>
<p>With these changes the Wazuh setup should run behind the proxy after starting it with <em>docker-compose up</em>.</p>
<h2 id="heading-changing-the-default-credentials">Changing the default credentials</h2>
<p>Running Wazuh with the default credentials would make our application an easy target for attackers, so we will change them now.</p>
<p>If your application is already running, stop it first with a simple <em>docker-compose down</em> command.</p>
<p>Inside <em>config/wazuh_indexer/internal_users.yml</em> we will find sections for the <em>admin</em> user and the <em>kibanaserver</em> user. The passwords stored in this file are hashed via <em>bcrypt</em>. Once we have decided on the new passwords, we run the following command:</p>
<pre><code class="lang-bash">docker run --rm -ti wazuh/wazuh-indexer:4.10.1 bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh
</code></pre>
<p>This will start a prompt asking for the password and generate the corresponding bcrypt hash. We take the newly generated hashes and replace the old values inside the internal_users.yml file with them.</p>
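<p>For orientation, the relevant entries in <em>internal_users.yml</em> look roughly like this (hashes shortened; the exact attributes may differ between versions):</p>
<pre><code class="lang-yaml">admin:
  hash: "$2y$12$..." # replace with the newly generated bcrypt hash
  reserved: true
  backend_roles:
  - "admin"
  description: "Admin user"

kibanaserver:
  hash: "$2y$12$..." # replace with the newly generated bcrypt hash
  reserved: true
  description: "Kibanaserver user"
</code></pre>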
<p>Afterwards we will also need to replace the values inside the <em>docker-compose.yml</em> file. For <em>admin</em> we replace the value wherever we find the <em>INDEXER_PASSWORD</em> entry (wazuh.manager and wazuh.dashboard containers). For <em>kibanaserver</em> we do the same for the <em>DASHBOARD_PASSWORD</em> entry (wazuh.dashboard container).</p>
<p>Now we start the Wazuh stack again via <em>docker-compose up</em> and access a bash shell in the <em>single-node-wazuh.indexer-1</em> container. The easiest way is to first determine the <em>container ID</em> via <em>docker ps</em> and then enter the container with:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -t &lt;Container ID of single-node-wazuh.indexer-1&gt; bash
</code></pre>
<p>Inside the container we execute the following commands:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> INSTALLATION_DIR=/usr/share/wazuh-indexer
CACERT=<span class="hljs-variable">$INSTALLATION_DIR</span>/certs/root-ca.pem
KEY=<span class="hljs-variable">$INSTALLATION_DIR</span>/certs/admin-key.pem
CERT=<span class="hljs-variable">$INSTALLATION_DIR</span>/certs/admin.pem
<span class="hljs-built_in">export</span> JAVA_HOME=/usr/share/wazuh-indexer/jdk
</code></pre>
<p>We wait a moment (2-5 minutes), as suggested by the Wazuh documentation, and finally run:</p>
<pre><code class="lang-bash">bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh -<span class="hljs-built_in">cd</span> /usr/share/wazuh-indexer/opensearch-security/ -nhnv -cacert  <span class="hljs-variable">$CACERT</span> -cert <span class="hljs-variable">$CERT</span> -key <span class="hljs-variable">$KEY</span> -p 9200 -icl
</code></pre>
<p>After another short wait the changes should be applied and the Wazuh dashboard should be accessible.</p>
<p>In case we also want to change the credentials for the API, we do so by changing the password inside <em>config/wazuh_dashboard/wazuh.yml</em> and in the <em>docker-compose.yml</em>, and re-running the whole stack via <em>docker-compose down</em> and <em>up</em>.</p>
<h1 id="heading-deploying-an-wazuh-agent">Deploying a Wazuh agent</h1>
<p>To start collecting data from the machine of our choice, we first need to deploy a <em>Wazuh agent</em> on it. We log into our Wazuh dashboard and navigate to the agents section, where we find the option to <em>Deploy a new agent</em>. After selecting it, we choose the OS and the file format in which the installation instructions should be delivered. For <em>server address</em> we provide the address through which the agent can reach the machine on which the Wazuh server is running. We provide a <em>name</em> for the agent and select a <em>group</em> to put it into. Finally we are provided with <em>installation instructions</em>, e.g. a command to run:</p>
<pre><code class="lang-bash">wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.10.1-1_amd64.deb \
&amp;&amp; sudo WAZUH_MANAGER=<span class="hljs-string">'&lt;IP&gt;'</span> WAZUH_AGENT_GROUP=<span class="hljs-string">'default'</span> WAZUH_AGENT_NAME=<span class="hljs-string">'&lt;Name&gt;'</span> \
dpkg -i ./wazuh-agent_4.10.1-1_amd64.deb
</code></pre>
<p>When the agent has been installed, we enable and start the Wazuh agent service (the commands, e.g. <em>systemctl enable wazuh-agent</em> and <em>systemctl start wazuh-agent</em>, are also provided). When we navigate back to the agents section of the dashboard, an entry should be listed for the deployed agent, showing it in an active state.</p>
<h1 id="heading-next-steps">Next steps</h1>
<p>Now that we have set up Wazuh and deployed our first agent, it is up to you to explore everything Wazuh has to offer and configure it to your needs. This could include configuring the log sources that you want to observe, checking for vulnerabilities or suspicious files, configuring alerts and receiving notifications in case of events that require fast intervention.</p>
<p>For more information refer to the official <a target="_blank" href="https://documentation.wazuh.com/current/index.html">Documentation</a>.</p>
]]></content:encoded></item><item><title><![CDATA[SonarQube Setup and GitHub Integration]]></title><description><![CDATA[Why SonarQube?
In today’s competitive software development world, speed is crucial to develop new features, as well as improving existing features to stay ahead of the competition. Multiple developers of varying skill levels and work experience contr...]]></description><link>https://blog.bagheri.me/sonarqube-setup-and-github-integration</link><guid isPermaLink="true">https://blog.bagheri.me/sonarqube-setup-and-github-integration</guid><category><![CDATA[Devops]]></category><category><![CDATA[sonarqube]]></category><category><![CDATA[sonarqube installation]]></category><category><![CDATA[static code analysis]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[GitHub Actions]]></category><dc:creator><![CDATA[Mahdi Bagheri]]></dc:creator><pubDate>Fri, 24 Jan 2025 15:45:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737663958759/7823a7bd-5dc3-4bd5-8d0d-b397e2a1aadf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-why-sonarqube">Why SonarQube?</h1>
<p>In today’s competitive software development world, speed is crucial for developing new features, as well as improving existing ones, to stay ahead of the competition. Multiple developers of varying skill levels and work experience contribute to the software. With the addition of new code the code base grows larger over time. Through the differing levels of experience and coding styles of the contributors, the code base also becomes more complex and, in some areas, even less efficient or insecure, making it more difficult to maintain.</p>
<p><em>SonarQube</em> is a powerful static code analysis tool that helps development teams maintain high-quality, maintainable and secure code. It scans the code base for potential issues like bugs, vulnerabilities and code smells to ensure problems are identified early. It can also be integrated into the <em>CI/CD pipeline</em>, ensuring that every code commit is automatically analyzed for issues. This integration provides developers with immediate feedback on code quality and security vulnerabilities. By using SonarQube, teams can significantly reduce the time spent on manual code reviews and prevent defects from reaching production, thereby improving overall efficiency and the software’s reliability.</p>
<h1 id="heading-installation">Installation</h1>
<p>SonarSource, the company behind SonarQube, provides a Docker Compose file that simplifies the process of running SonarQube. The code can be found at the following <a target="_blank" href="https://github.com/SonarSource/docker-sonarqube/blob/master/example-compose-files/sq-with-postgres/docker-compose.yml">GitHub</a> page. I’ve made slight adjustments to the network configuration settings and also moved the credentials into environment variables stored in a <em>.env</em> file.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">sonarqube:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">sonarqube:community</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">hostname:</span> <span class="hljs-string">sonarqube</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">sonarqube</span>
    <span class="hljs-attr">read_only:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-attr">db:</span>
        <span class="hljs-attr">condition:</span> <span class="hljs-string">service_healthy</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">SONAR_JDBC_URL:</span> <span class="hljs-string">${SONAR_JDBC_URL}</span>
      <span class="hljs-attr">SONAR_JDBC_USERNAME:</span> <span class="hljs-string">${SONAR_JDBC_USERNAME}</span>
      <span class="hljs-attr">SONAR_JDBC_PASSWORD:</span> <span class="hljs-string">${SONAR_JDBC_PASSWORD}</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">sonarqube_data:/opt/sonarqube/data</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">sonarqube_extensions:/opt/sonarqube/extensions</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">sonarqube_logs:/opt/sonarqube/logs</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">sonarqube_temp:/opt/sonarqube/temp</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"9000:9000"</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">sast</span>
  <span class="hljs-attr">db:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">postgres:15</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">healthcheck:</span>
      <span class="hljs-attr">test:</span> [<span class="hljs-string">"CMD-SHELL"</span>, <span class="hljs-string">"pg_isready -d ${POSTGRES_DB} -U ${POSTGRES_USER}"</span>]
      <span class="hljs-attr">interval:</span> <span class="hljs-string">10s</span>
      <span class="hljs-attr">timeout:</span> <span class="hljs-string">5s</span>
      <span class="hljs-attr">retries:</span> <span class="hljs-number">5</span>
    <span class="hljs-attr">hostname:</span> <span class="hljs-string">postgresql</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">postgresql</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">POSTGRES_USER:</span> <span class="hljs-string">${POSTGRES_USER}</span>
      <span class="hljs-attr">POSTGRES_PASSWORD:</span> <span class="hljs-string">${POSTGRES_PASSWORD}</span>
      <span class="hljs-attr">POSTGRES_DB:</span> <span class="hljs-string">${POSTGRES_DB}</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">postgresql:/var/lib/postgresql</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">postgresql_data:/var/lib/postgresql/data</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">sast</span>

<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">sonarqube_data:</span>
  <span class="hljs-attr">sonarqube_temp:</span>
  <span class="hljs-attr">sonarqube_extensions:</span>
  <span class="hljs-attr">sonarqube_logs:</span>
  <span class="hljs-attr">postgresql:</span>
  <span class="hljs-attr">postgresql_data:</span>

<span class="hljs-attr">networks:</span>
  <span class="hljs-attr">sast:</span>
    <span class="hljs-attr">driver:</span> <span class="hljs-string">bridge</span>
</code></pre>
<p>The corresponding <em>.env</em> file uses <em>KEY=VALUE</em> pairs (the bracketed values are placeholders):</p>
<pre><code class="lang-bash">SONAR_JDBC_URL=jdbc:postgresql://db:5432/<span class="hljs-variable">${POSTGRES_DB}</span>
SONAR_JDBC_USERNAME=&lt;sonarqube_username&gt;
SONAR_JDBC_PASSWORD=&lt;sonarqube_password&gt;
POSTGRES_USER=&lt;postgres_username&gt;
POSTGRES_PASSWORD=&lt;postgres_password&gt;
POSTGRES_DB=&lt;postgres_database_name&gt;
</code></pre>
<p>And finally run it with a simple</p>
<pre><code class="lang-bash">docker compose up <span class="hljs-comment">#or docker-compose up</span>
</code></pre>
<h1 id="heading-github-integration">GitHub Integration</h1>
<p>Now that SonarQube is up and running, we can log in using our credentials (on a fresh installation the default login is <em>admin</em>/<em>admin</em>, and SonarQube will prompt us to change it) and begin configuring it for integration with GitHub. The process is similar to integrating SonarQube with other DevOps platforms. Detailed instructions can be found in the <a target="_blank" href="https://docs.sonarsource.com/sonarqube-server/latest/">documentation</a> under the <em>DevOps platform integration</em> section.</p>
<p>One thing I don’t like is that new projects are set to <em>Public</em> by default. To change this to <em>Private</em>, navigate to <em>Administration → Projects → Management</em>. Just to the left of the blue <em>Create Project</em> button we will find the option to modify the default visibility setting for new projects.</p>
<h2 id="heading-set-up-server-base-url">Set up server base URL</h2>
<p>First we need to configure a <em>base URL</em> for the application. To do this, we connect our application to a custom domain and secure it with a <em>TLS certificate</em>. For the sake of this example, I will use the domain <em>example.com</em>.</p>
<p>To set the base URL, navigate to <em>Administration → General → Server base URL</em> and enter the URL<br />(e.g., <em>https://example.com</em>).</p>
<h2 id="heading-setting-up-a-github-app">Setting up a GitHub App</h2>
<p>Now that we have configured the server base URL, we will create a <em>GitHub Application</em> to allow SonarQube access to the repositories. The steps presented are for a personal GitHub account, but they should also apply to GitHub organizations. For more information please refer to the <a target="_blank" href="https://docs.sonarsource.com/sonarqube-server/latest/devops-platform-integration/github-integration/setting-up-at-global-level/setting-up-github-app/">documentation</a>.</p>
<p>For this we log into our personal GitHub account and navigate to <em>GitHub Profile → Settings → Developer Settings → New GitHub App</em>. On the <em>Register new GitHub App</em> page we will provide the following information:</p>
<ul>
<li><p>GitHub App name → Provide a name for this application</p>
</li>
<li><p>Homepage URL → The base URL of SonarQube (https://example.com)</p>
</li>
<li><p>Callback URL → The base URL of SonarQube again</p>
</li>
<li><p>Webhook URL → Disable as per recommendation of the SonarQube documentation</p>
</li>
<li><p>Permissions</p>
<ul>
<li><p>Repository permissions</p>
<ul>
<li><p>Checks → Read &amp; Write</p>
</li>
<li><p>Administration → Read-only</p>
</li>
<li><p>Metadata (GitHub.com) | Repository metadata (GitHub Enterprise) → Read-only</p>
</li>
<li><p>Pull Requests → Read &amp; Write</p>
</li>
<li><p>Private repositories: Content → Read only</p>
</li>
<li><p>Code scanning alerts → Read &amp; Write</p>
</li>
</ul>
</li>
<li><p>Organizations permissions</p>
<ul>
<li><p>Administration → Read-only</p>
</li>
<li><p>GitHub Copilot Business → Read-only</p>
</li>
<li><p>Members → Read-only</p>
</li>
<li><p>Projects → Read-only</p>
</li>
</ul>
</li>
<li><p>Account permissions</p>
<ul>
<li>Email addresses → Read-only</li>
</ul>
</li>
</ul>
</li>
<li><p>Select the “Only on this account” option under “Where can this GitHub App be installed?”</p>
</li>
</ul>
<p>Once we are done creating the GitHub App, we will be taken to the new application’s overview page. Here we click the <em>Generate a new client secret</em> button, which generates a client secret that should be saved securely; it will be needed later to configure SonarQube. Also make note of the <em>App ID</em> and <em>Client ID</em> of this application. Further down the page there is a <em>Generate a private key</em> button, which we click to generate and download the <em>private key</em>. Now that we have all the necessary credentials, we can continue the setup in SonarQube.</p>
<h2 id="heading-sonarqube-devops-platform-integration">SonarQube DevOps Platform Integration</h2>
<p>Now that we have created the GitHub application and saved all the necessary information, we can proceed in SonarQube. Navigate to <em>Administration → Configuration → General Settings → DevOps Platform Integrations</em>. Select <em>GitHub</em> and then click the <em>Create Configuration</em> button. This will open a popup window where we need to provide a name for the configuration. Next, we enter the <em>GitHub API URL</em>. Since we are using a personal GitHub account, we can use the default URL: <em>https://api.github.com</em>. In the following fields enter the <em>App ID, Client ID and Client secret</em> that we saved earlier. To provide the <em>Private Key</em>, simply retrieve it from the downloaded file.</p>
<pre><code class="lang-bash">cat &lt;filename&gt;.private-key.pem

-----BEGIN RSA PRIVATE KEY-----
AFGDSAFasdfasfasdfGAsd....
.........
SAFGSAgfaGADSFASDFSFSF
-----END RSA PRIVATE KEY-----
</code></pre>
<p>Copy the contents of the private key file and paste it directly into the provided field in the configuration. Once all required information has been entered, we submit the configuration and wait for the result. If everything is correct, we should see a confirmation message stating <em>Configuration valid</em>.</p>
<p>Next we navigate to the <em>Projects</em> page where we will see the <em>Import from GitHub</em> option. We select it and then choose our GitHub username from the dropdown list under <em>Choose an organization</em>. This will display all our repositories, from which we can select the ones we wish to import.</p>
<h1 id="heading-first-repository-scan">First Repository Scan</h1>
<p>Let’s start our first repository scan. We navigate to <em>Projects → Import from GitHub</em>, select our username from the dropdown, choose a repository from the list of available repositories and click the Import button on the right.</p>
<p>On the next page, we select <em>Use the global settings</em> for now and create the project by clicking the <em>Create Project</em> button. Afterwards we will be directed to a new page where we can choose our analysis method. We select <em>GitHub Actions</em>.</p>
<p>On the following page, SonarQube will provide the necessary secrets, along with information for the <em>sonar-project.properties</em> and <em>YAML</em> files that need to be added to our repository. We follow the instructions, commit the required files to our project and shortly after, the pipeline should start running. Once the scan is complete, we will be able to view the results in our newly created project in SonarQube.</p>
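<p>As a rough, hypothetical sketch (the exact project key, secret names, action version and workflow contents are provided by the SonarQube instructions): the <em>sonar-project.properties</em> file contains little more than a line like <em>sonar.projectKey=&lt;project_key&gt;</em>, and the workflow file resembles the following:</p>
<pre><code class="lang-yaml">name: SonarQube Analysis
on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  sonarqube:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # full history improves the analysis
      - uses: sonarsource/sonarqube-scan-action@v4
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
</code></pre>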
]]></content:encoded></item><item><title><![CDATA[Start Infrastructure Monitoring with Prometheus and Grafana]]></title><description><![CDATA[Introduction
To ensure the health, performance and availability of our systems we need a way to monitor and observe our machines in use. Prometheus and Grafana are two of the most widely used solutions to achieve this goal.
Prometheus is an open-sour...]]></description><link>https://blog.bagheri.me/start-infrastructure-monitoring-with-prometheus-and-grafana</link><guid isPermaLink="true">https://blog.bagheri.me/start-infrastructure-monitoring-with-prometheus-and-grafana</guid><category><![CDATA[#prometheus]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Mahdi Bagheri]]></dc:creator><pubDate>Sun, 19 Jan 2025 12:58:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737290646892/2d336972-b816-45c2-b9b0-d4987da300b8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>To ensure the health, performance and availability of our systems we need a way to monitor and observe our machines in use. <em>Prometheus</em> and <em>Grafana</em> are two of the most widely used solutions to achieve this goal.</p>
<p>Prometheus is an open-source monitoring solution which can collect data from multiple endpoints, query data with its <em>PromQL</em> (Prometheus Query Language), evaluate the data and trigger alerts based on given rules.</p>
<p>Grafana is an open-source monitoring solution as well, but it doesn’t collect data from the endpoints directly. Instead it uses data sources like Prometheus that have already scraped the necessary data, and it allows us to create visually appealing dashboards that present the collected data, helping us make informed decisions about the state of our machines. Typical data to display includes metric values and logs, and Grafana also provides the ability to create alerts based on given rules.</p>
<p>In the following I will guide you through the steps to quickly get started with a basic Prometheus and Grafana setup. We will also collect our first data from the host machine on which Prometheus and Grafana are running, with the help of the <em>Node exporter</em>, and present the data inside a dashboard.</p>
<h1 id="heading-prerequisites">Prerequisites</h1>
<p>To follow this process for setting up Prometheus and Grafana, it is recommended to have some basic experience with Linux and Docker.</p>
<p>I will perform these steps on an Ubuntu server, and I am assuming you have already set up your own Linux-based environment with Docker installed.</p>
<h1 id="heading-folder-structure">Folder Structure</h1>
<p>The following shows the folder structure of the directories and files used to quickly start a basic Prometheus and Grafana environment.</p>
<pre><code class="lang-bash">/opt/monitoring
|---.env
|---docker-compose.yaml
|---prometheus
|   |---prometheus.yaml
|
|---grafana
    |
    |---provisioning
        |    
        |---dashboards
        |   |---dashboards.yaml
        |   |---node-exporter-full-dashboard.json
        |   
        |---datasources
            |---datasource.yaml
</code></pre>
<p>In the presented folder structure, the monitoring stack is organized within the <em>/opt/monitoring</em> directory, where everything needed for the Prometheus and Grafana setup is stored. This structure is designed to automate much of the configuration, ensuring that you can easily replicate the setup on different machines without having to manually adjust these settings through the web interfaces.</p>
<p>Inside the <em>monitoring</em> directory there are two files: <em>.env</em>, which holds the credentials for logging into Grafana, and the <em>docker-compose.yaml</em> file, which is used to define and start the services via <em>Docker Compose</em>.</p>
<p>Within the monitoring directory there are two subdirectories: <em>prometheus</em> and <em>grafana</em>. The prometheus folder contains the prometheus.yaml file, which is the primary configuration file for Prometheus. It defines how and from where Prometheus scrapes metrics data. The grafana folder contains a dashboard .json file, as well as the the necessary configuration files <em>dashboard.yaml</em>, which specifies the dashboards that should be automatically loaded when Grafana starts and the <em>datasource.yaml</em> file through which Prometheus is already provided as a data source.</p>
<h1 id="heading-docker-compose">Docker Compose</h1>
<p>Let’s start with the longest file first.</p>
<p>The docker-compose.yaml file defines three services: Prometheus, Grafana and the Node exporter.<br />They are configured so they can communicate through the same network named <em>monitoring</em>. Also two volumes are defined, one for Prometheus and one for Grafana, ensuring persistent data storage for each service.</p>
<h2 id="heading-prometheus-service">Prometheus Service</h2>
<p>We use the latest Prometheus Docker image and configure the service to restart automatically unless stopped manually. In the volumes section, we mount the <em>prometheus-data</em> Docker volume to ensure persistent storage of Prometheus data, and we also provide our custom <em>prometheus.yaml</em> configuration file. In the command section, we specify the location of the Prometheus configuration file and set the storage path to <em>/prometheus</em>, where Prometheus will store the collected time series data. The service exposes port 9090 to the host machine and is connected to the <em>monitoring</em> network.</p>
<h2 id="heading-grafana-service">Grafana Service</h2>
<p>We use the latest Grafana Docker image and configure the container to restart automatically unless manually stopped. In the volumes section, we mount the <em>grafana-data</em> Docker volume to persist Grafana’s data, and we also mount the <em>dashboards</em> and <em>datasources</em> folders, containing the pre-downloaded dashboards and the <em>.yaml</em> configuration files, into the container. In the environment section we set the admin username and password using environment variables stored in the <em>.env</em> file. Additionally we disable sign-up, ensuring only the admin can create new users. The service exposes port 3000 to the host machine and is connected to the <em>monitoring</em> network.</p>
<h2 id="heading-node-exporter-service">Node exporter Service</h2>
<p>We use the latest Node exporter Docker image and configure the container to restart automatically unless manually stopped. In the volumes section we grant the container <em>read-only</em> access to the host’s <em>/sys</em>, <em>/proc</em>, and <em>/</em> mount points, enabling it to collect system metrics. Through the command section we configure the Node exporter to gather metrics from these specific mount points. The service exposes port 9100 to the host machine and is connected to the <em>monitoring</em> network.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">prometheus:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">prom/prometheus:latest</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./prometheus/prometheus.yaml:/etc/prometheus/prometheus.yaml</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">prometheus-data:/prometheus</span>
    <span class="hljs-attr">command:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'--config.file=/etc/prometheus/prometheus.yaml'</span>
      <span class="hljs-bullet">-</span>  <span class="hljs-string">"--storage.tsdb.path=/prometheus"</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">9090</span><span class="hljs-string">:9090</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">monitoring</span>

  <span class="hljs-attr">grafana:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">grafana/grafana:latest</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">grafana-data:/var/lib/grafana</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">GF_SECURITY_ADMIN_USER=${GF_ADMIN_USER}</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">GF_SECURITY_ADMIN_PASSWORD=${GF_ADMIN_PW}</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">GF_USERS_ALLOW_SIGN_UP=false</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">3000</span><span class="hljs-string">:3000</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">monitoring</span>

  <span class="hljs-attr">node-exporter:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">prom/node-exporter:latest</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/proc:/host/proc:ro</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/sys:/host/sys:ro</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/:/rootfs:ro</span>
    <span class="hljs-attr">command:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--path.procfs=/host/proc"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--path.rootfs=/rootfs"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"--path.sysfs=/host/sys"</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">9100</span><span class="hljs-string">:9100</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">monitoring</span>



<span class="hljs-attr">networks:</span>
  <span class="hljs-attr">monitoring:</span>
    <span class="hljs-attr">driver:</span> <span class="hljs-string">bridge</span>

<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">prometheus-data:</span>
  <span class="hljs-attr">grafana-data:</span>
</code></pre>
<h1 id="heading-env">.env</h1>
<p>The .env file is very small, consisting of only two variables.</p>
<pre><code class="lang-yaml"><span class="hljs-string">GF_ADMIN_USER=&lt;username&gt;</span>
<span class="hljs-string">GF_ADMIN_PW=&lt;password&gt;</span>
</code></pre>
<h1 id="heading-prometheusyaml">prometheus.yaml</h1>
<p>The prometheus.yaml file is structured into two main sections: <em>global</em> and <em>scrape_configs</em>.</p>
<p>In the global section, we define configurations that apply across all jobs. In this case, we set both the <em>scrape_interval</em> and the <em>evaluation_interval</em> to 15 seconds. This means Prometheus will scrape its targets every 15 seconds and will evaluate its rules at the same interval.</p>
<p>The <em>scrape_configs</em> section defines the various targets from which Prometheus will collect data. Each target is associated with a unique job name for easier identification. In this example the first job is named <em>prometheus</em> with the target <em>localhost:9090</em>, which points to the Prometheus service itself. The second job is named <em>node-exporter-local</em>, with the target <em>node-exporter:9100</em>, referring to the Node exporter service. Since Prometheus and the Node exporter are both part of the same <em>monitoring</em> network, Prometheus can directly access the Node exporter’s endpoint.</p>
<p>For more detailed configuration options to fine-tune the prometheus.yaml, please refer to the Prometheus <a target="_blank" href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/">Documentation</a>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">global:</span>
  <span class="hljs-attr">scrape_interval:</span>  <span class="hljs-string">15s</span>
  <span class="hljs-attr">evaluation_interval:</span> <span class="hljs-string">15s</span>

<span class="hljs-attr">scrape_configs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job_name:</span> <span class="hljs-string">"prometheus"</span>
    <span class="hljs-attr">static_configs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">targets:</span> [<span class="hljs-string">'localhost:9090'</span>]
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job_name:</span> <span class="hljs-string">"node-exporter-local"</span>
    <span class="hljs-attr">static_configs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">targets:</span> [<span class="hljs-string">'node-exporter:9100'</span>]
</code></pre>
<h1 id="heading-datasourceyaml">datasource.yaml</h1>
<p>To avoid manually adding the Prometheus service as a data source in Grafana after startup, we can create a file called datasource.yaml and provide it to Grafana. This configuration automatically connects Grafana to Prometheus when it starts.</p>
<p>For more information on how to further adjust the datasource.yaml, please refer to the Grafana <a target="_blank" href="https://grafana.com/docs/grafana/latest/datasources/prometheus/">documentation</a>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-number">1</span>

<span class="hljs-attr">datasources:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Prometheus</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">prometheus</span>
    <span class="hljs-attr">url:</span> <span class="hljs-string">http://prometheus:9090</span>
    <span class="hljs-attr">basicAuth:</span> <span class="hljs-literal">false</span>
    <span class="hljs-attr">isDefault:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">editable:</span> <span class="hljs-literal">true</span>
</code></pre>
<h1 id="heading-dashboardsyaml">dashboards.yaml</h1>
<p>The dashboards.yaml file is used to define how dashboards are provisioned in Grafana. Through this file we specify the location and settings for the dashboards that will be automatically loaded when Grafana starts.</p>
<p>For more information on how to adjust this configuration file to your needs please refer to the Grafana <a target="_blank" href="https://grafana.com/docs/grafana/latest/administration/provisioning/">documentation</a> at the <em>Dashboards</em> section.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-number">1</span>

<span class="hljs-attr">providers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">'MyDashboards'</span>
    <span class="hljs-attr">orgId:</span> <span class="hljs-number">1</span>
    <span class="hljs-attr">folder:</span> <span class="hljs-string">''</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">file</span>
    <span class="hljs-attr">disableDeletion:</span> <span class="hljs-literal">false</span>
    <span class="hljs-attr">updateIntervalSeconds:</span> <span class="hljs-number">10</span>
    <span class="hljs-attr">allowUiUpdates:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">editable:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">options:</span>
      <span class="hljs-attr">path:</span> <span class="hljs-string">/etc/grafana/provisioning/dashboards</span>
</code></pre>
<h1 id="heading-retrieving-existing-dashboards">Retrieving existing Dashboard(s)</h1>
<p>On the following site you can download a pre-configured <a target="_blank" href="https://grafana.com/grafana/dashboards/1860-node-exporter-full/">dashboard</a> that is tailored to visualize the metrics collected by the Node exporter. We save the .json file for this dashboard into the dashboards folder. Due to our Docker Compose configuration, Grafana will automatically detect and load the dashboard at startup.</p>
<h1 id="heading-start-the-services">Start the services</h1>
<p>Now that all the necessary files are prepared, you can start the services with a simple command:</p>
<pre><code class="lang-bash">docker compose up
</code></pre>
<p>Depending on your version of Docker Compose you might need to use a hyphen in the command:</p>
<pre><code class="lang-bash">docker-compose up
</code></pre>
<p>Wait a moment for the services to start, and then navigate to the Grafana web interface at localhost:3000.</p>
<p>Log in using the credentials defined in the .env file. Once logged in you should see the Prometheus data source listed under <em>Connections → Data Sources</em>, and the Node exporter dashboard should be available under <em>Dashboards.</em></p>
<h1 id="heading-whats-next">What’s next?</h1>
<p>Your next steps could involve identifying additional areas from which you want to collect data. For example, you could integrate <em>cAdvisor</em> into your existing Docker Compose setup to gain insight into the performance and resource usage of your running containers. Additionally, you may want to explore the creation of alerts for specific events, ensuring that you’re notified when critical conditions arise, allowing you to act quickly and effectively.</p>
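<p>As a sketch of what the cAdvisor integration could look like, the following service definition could be appended to the <em>services</em> section of the existing docker-compose.yaml. The image name and mounts reflect the common cAdvisor setup; treat the details as a starting point and adjust them to your host:</p>

```yaml
# Hypothetical addition to the services: section of docker-compose.yaml
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro                         # read-only view of the host filesystem
      - /var/run:/var/run:ro                 # access to the Docker socket
      - /sys:/sys:ro                         # cgroup and hardware information
      - /var/lib/docker/:/var/lib/docker:ro  # container storage metadata
    ports:
      - 8080:8080
    networks:
      - monitoring
```

<p>Prometheus would then need a matching scrape job in prometheus.yaml, for example with the target <em>cadvisor:8080</em>.</p>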
]]></content:encoded></item><item><title><![CDATA[Setting up a Home Server Connection via VPN (Wireguard) and a Cloud VPS]]></title><description><![CDATA[Motivation
I wanted to setup a home server where I could run applications and access them when I am not at home. But I also wanted to avoid opening ports directly to my home network to access the services running on the home server. Another issue is ...]]></description><link>https://blog.bagheri.me/setting-up-a-home-server-connection-via-vpn-wireguard-and-a-cloud-vps</link><guid isPermaLink="true">https://blog.bagheri.me/setting-up-a-home-server-connection-via-vpn-wireguard-and-a-cloud-vps</guid><category><![CDATA[homeserver]]></category><category><![CDATA[vpn]]></category><category><![CDATA[vps]]></category><category><![CDATA[wireguard]]></category><category><![CDATA[Caddy]]></category><dc:creator><![CDATA[Mahdi Bagheri]]></dc:creator><pubDate>Wed, 23 Oct 2024 19:57:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729712776550/a3eddb2d-1710-4fc8-96f5-426057782c65.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-motivation">Motivation</h1>
<p>I wanted to set up a home server where I could run applications and access them when I am not at home. But I also wanted to avoid opening ports directly into my home network to access the services running on the home server. Another issue is that the global IP address assigned by my ISP changes every few days, and I didn’t want to work with something like DynDNS.</p>
<p>Therefore as a workaround I purchased a cheap Cloud VPS which has its own static IPv4 address through which I built a tunnel via a VPN connection to my home server for accessing the services.</p>
<h1 id="heading-initial-setup">Initial Setup</h1>
<p>At first I went through the initial setup for both the home server and the VPS.</p>
<ol>
<li><p>Installing the Server Software (Ubuntu Server)</p>
</li>
<li><p>Creating a dedicated user and granting it sudo access</p>
</li>
<li><p>Providing root and user with strong passwords</p>
</li>
<li><p>Upgrading all available packages</p>
</li>
<li><p>Removing SSH permission for root user</p>
</li>
<li><p>Allowing only SSH access via SSH key</p>
</li>
<li><p>Changing default SSH port to reduce Bot noises</p>
</li>
<li><p>Installing and activating unattended-upgrades</p>
</li>
<li><p>Installing and enabling fail2ban</p>
</li>
<li><p>Installing and running the web application</p>
</li>
</ol>
<p>I won’t be going into details for these steps in this article since the focus is on the VPN connection between home server and the VPS.</p>
<h1 id="heading-vpn-connection-setup">VPN Connection Setup</h1>
<p>After setting up the application on our home server, we need to set up the VPN connection between the home server and the VPS. For this I have been using Wireguard, since it was far easier to set up than other VPN software.</p>
<p>The following steps are similar for both the home server and the VPS.</p>
<h2 id="heading-wireguard-installation">Wireguard Installation</h2>
<p>The installation process is as easy as executing the following commands on Ubuntu.</p>
<pre><code class="lang-plaintext">sudo apt update
sudo apt install wireguard
</code></pre>
<h2 id="heading-private-key-and-public-key-generation">Private Key and Public Key Generation</h2>
<p>Next we need to generate our private and public keys that we will provide to our Wireguard interface configuration files.</p>
<h3 id="heading-private-key">Private Key</h3>
<pre><code class="lang-plaintext">umask 077
wg genkey &gt; privatekey
</code></pre>
<h3 id="heading-public-key">Public Key</h3>
<pre><code class="lang-plaintext">wg pubkey &lt; privatekey &gt; publickey
</code></pre>
<h2 id="heading-wireguard-configuration-file">Wireguard Configuration File</h2>
<p>After the installation of Wireguard, a directory named <em>wireguard</em> will be available inside <em>/etc</em>.</p>
<p>Here we will create a .conf file and name it whatever we want. For simplicity I will use the same name found in many other tutorials: wg0.conf.</p>
<pre><code class="lang-plaintext">sudo touch /etc/wireguard/wg0.conf
</code></pre>
<h2 id="heading-vps">VPS</h2>
<p>On the VPS we will add the following lines to the configuration file.</p>
<pre><code class="lang-plaintext"># VPS
[Interface]
PrivateKey = &lt;VPS Private Key&gt;
Address = 10.0.0.1/24
ListenPort = 51820

# Home Server
[Peer]
PublicKey = &lt;Home Server Public Key&gt;
AllowedIPs = 10.0.0.2/32
</code></pre>
<h2 id="heading-home-server">Home Server</h2>
<p>On the home server we will add the following lines to the configuration file.</p>
<pre><code class="lang-plaintext"># Home Server
[Interface]
PrivateKey = &lt;Home Server Private Key&gt;
Address = 10.0.0.2/24
ListenPort = 51820

# VPS
[Peer]
PublicKey = &lt;VPS Public Key&gt;
Endpoint = &lt;Static IPv4 Address of VPS&gt;:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25
</code></pre>
<p>You can assign whatever private IP addresses you want to the VPN interfaces, but they must be in the same subnet so the endpoints can reach each other. In this case I decided to go with a 10.0.0.X/24 subnet. The VPN subnet should also not overlap with the local subnets of either endpoint.</p>
<p>It is also possible to assign a different port, which becomes useful if you want to use multiple interfaces to establish separate connections to different endpoints. In that case you would create an additional .conf file (e.g. wg1.conf) and start this interface similarly to the following steps shown for the wg0.conf example. This additional port must also be opened in the firewall, as shown later.</p>
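<p>For illustration, such an additional interface on the VPS could look like the following hypothetical wg1.conf, with its own key pair, a different port and a non-overlapping subnet:</p>

```ini
# /etc/wireguard/wg1.conf on the VPS (hypothetical second interface)
[Interface]
PrivateKey = <VPS Private Key for wg1>
# A different subnet than wg0
Address = 10.0.1.1/24
# A different port than wg0, which must also be opened in the firewall
ListenPort = 51821

# Second endpoint
[Peer]
PublicKey = <Second Endpoint Public Key>
AllowedIPs = 10.0.1.2/32
```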
<h2 id="heading-enabling-and-starting-the-wireguard-interface">Enabling and Starting the Wireguard Interface</h2>
<pre><code class="lang-plaintext">sudo systemctl enable wg-quick@wg0
sudo systemctl start wg-quick@wg0
</code></pre>
<p>If you named your .conf file something else, use that name instead of wg0.</p>
<h1 id="heading-allow-port-51820-on-firewall">Allow Port 51820 on Firewall</h1>
<p>In case UFW (Uncomplicated Firewall) is not installed and enabled yet, do it with the following commands.</p>
<pre><code class="lang-plaintext">sudo apt install ufw
sudo ufw enable
</code></pre>
<p>For the communication to work we also need to open port 51820. This is as easy as executing the following UFW command, which has to be done on both machines.</p>
<pre><code class="lang-plaintext">sudo ufw allow 51820/udp
</code></pre>
<p>We might also go a step further and permit connections to the application only from the VPN address 10.0.0.1.</p>
<pre><code class="lang-plaintext">sudo ufw allow from 10.0.0.1 to any port &lt;APPLICATION_PORT&gt;
sudo ufw deny &lt;APPLICATION_PORT&gt;
</code></pre>
<h2 id="heading-check-vpn-connectivity">Check VPN Connectivity</h2>
<p>Now test the connectivity by trying to ping the other device.</p>
<pre><code class="lang-plaintext">ping 10.0.0.1 # From Home Server
ping 10.0.0.2 # From VPS
</code></pre>
<p>If the ping worked, we might also try to access the application with a curl command.</p>
<pre><code class="lang-plaintext">curl http://10.0.0.2:&lt;APPLICATION_PORT&gt;
</code></pre>
<h1 id="heading-setting-up-a-reverse-proxy-connection-to-the-home-server-application">Setting up a Reverse Proxy Connection to the Home Server Application</h1>
<p>Now that we know that we are able to reach the application through our VPN connection, we need to configure the access from our VPS to the application.</p>
<p>This is fairly easy with Caddy as a reverse proxy. If you own a domain, create an A record pointing the domain or subdomain of your choosing to the IP address of your VPS.</p>
<p>Next we will install Caddy on our VPS.</p>
<pre><code class="lang-plaintext">sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
</code></pre>
<h2 id="heading-firewall-adjustments">Firewall Adjustments</h2>
<p>Before we continue with configuring Caddy, we need to open ports 80 and 443 both on our VPS and in the firewall rules of the administration panel provided by our cloud provider.</p>
<pre><code class="lang-plaintext">sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
</code></pre>
<h2 id="heading-caddyfile-configuration">Caddy(file) Configuration</h2>
<p>Now that Caddy is installed we need to configure the Caddyfile which is located under /etc/caddy.</p>
<pre><code class="lang-plaintext">https://&lt;Your-Domain&gt;{
    reverse_proxy 10.0.0.2:&lt;APPLICATION_PORT&gt;{
        header_up X-Real-IP {remote_host} 
        header_up X-Forwarded-For {remote_host} 
        header_up X-Forwarded-Proto {scheme} 
        header_up X-Forwarded-Port {server_port} 
    }
    header {
        -Server
    }
    log {
        output file /var/log/caddy/access.log
        format json
    }
}
</code></pre>
<p>This basic setup should work fine for a start and can easily be adjusted depending on your needs and on how the application is installed and served on the home server (natively or in a container).</p>
<p>It might be necessary to format the file correctly for it to work. Run the following command while you are in the /etc/caddy directory where the Caddyfile resides.</p>
<pre><code class="lang-plaintext">sudo caddy fmt --overwrite
</code></pre>
<h2 id="heading-enabling-and-starting-caddy">Enabling and Starting Caddy</h2>
<pre><code class="lang-plaintext">sudo systemctl enable caddy
sudo systemctl start caddy
</code></pre>
<p>Caddy should automatically generate a TLS certificate for you. But in case it doesn’t, stop the service first.</p>
<pre><code class="lang-plaintext">sudo systemctl stop caddy
</code></pre>
<p>And run the command.</p>
<pre><code class="lang-plaintext">sudo caddy reverse-proxy --from &lt;your-domain&gt; --to 10.0.0.2:&lt;APPLICATION_PORT&gt;
</code></pre>
<p>Then start the Caddy service again.</p>
<h1 id="heading-check-vps-connectivity">Check VPS Connectivity</h1>
<p>At last we will check if everything works by opening our browser and typing in our domain and hopefully we will have established a secure HTTPS connection to our home server.</p>
<p>But it should be noted that one drawback of such a setup can be reduced speed. So choose wisely which applications you want to run and which data you want to access and send through this kind of connection.</p>
<h1 id="heading-resources">Resources</h1>
<p>For fine-tuning and more information I recommend visiting the <a target="_blank" href="https://www.wireguard.com/quickstart/">Wireguard</a> and <a target="_blank" href="https://caddyserver.com/docs/quick-starts">Caddy</a> documentation.</p>
]]></content:encoded></item><item><title><![CDATA[OSI and  TCP/IP Model]]></title><description><![CDATA[What are the conditions for network communication?
Devices need to be able to understand each other when communicating over networks. The same way humans need to understand the language spoken by their communication partner. Otherwise the received in...]]></description><link>https://blog.bagheri.me/osi-and-tcpip-network-layer-introduction</link><guid isPermaLink="true">https://blog.bagheri.me/osi-and-tcpip-network-layer-introduction</guid><category><![CDATA[network]]></category><category><![CDATA[networking]]></category><category><![CDATA[internet]]></category><dc:creator><![CDATA[Mahdi Bagheri]]></dc:creator><pubDate>Sun, 14 Jan 2024 20:03:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705629187552/51a3c1e0-bd0a-421d-8b84-45322de0f2ff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-are-the-conditions-for-network-communication">What are the conditions for network communication?</h1>
<p>Devices need to be able to understand each other when communicating over networks. The same way humans need to understand the language spoken by their communication partner. Otherwise the received information couldn't be processed to react properly.</p>
<p>For network communication, devices and applications use network protocols. These are rules that dictate how communication should take place: what information needs to be transmitted and how that information has to be evaluated.<br />Vendors that create hardware devices and applications need to follow the rules of these protocols when designing them. This allows vendor-independent communication, since all devices, no matter who produced them, follow the same rules and speak the same language.<br />Without protocols each vendor would have to come up with their own scheme for network communication, and only devices from the same vendor could communicate with each other.</p>
<p>We might compare it with rules for sports. Even if two teams of different countries face each other in a match, they still know how to behave thanks to the rules that act as guidelines. But if each country had their own rules for a specific sport, then both teams would have learned the sport with different rules which would only cause chaos on the game field and simply wouldn't work.</p>
<h1 id="heading-why-do-wee-need-network-layers">Why do we need network layers?</h1>
<p>There are two network models that are most often referenced when it comes to describing networks and their functions: the OSI and the TCP/IP model. Both follow the same approach of dividing the network into layers.</p>
<p>In the previous section we learned about protocols and rules. There are multiple network protocols, each operating on a different layer of these models. Each layer is responsible for handling different tasks, and multiple protocols are used simultaneously when a network communication starts.<br />For example, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) both work at the Transport layer: TCP establishes a connection between two devices before data is sent, while UDP sends datagrams without a prior handshake.</p>
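<p>As a small hands-on illustration (a Python sketch that is not part of the models themselves), the difference between the two transport protocols is visible on the loopback interface: a UDP socket can send a datagram right away, while a TCP socket has to establish a connection first:</p>

```python
import socket

# UDP: connectionless -- a datagram is sent without any prior handshake.
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))            # let the OS pick a free port
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"hello via UDP", udp_recv.getsockname())
udp_msg = udp_recv.recvfrom(1024)[0].decode()

# TCP: connection-oriented -- connect() performs the handshake before data flows.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())       # three-way handshake happens here
conn, _ = server.accept()
client.sendall(b"hello via TCP")
tcp_msg = conn.recv(1024).decode()

print(udp_msg)  # hello via UDP
print(tcp_msg)  # hello via TCP
```

<p>Both transfers carry the same payload; the difference is purely in whether a connection exists before the first byte of data is sent.</p>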
<p>With this layered approach not only are we able to take a closer look at the communication processes in more detail and understand them better, but in case of networking issues we are able to better pinpoint where those issues might come from and solve them.</p>
<p>While both models follow the same layering methodology, they differ in the number of layers. The TCP/IP model describes the network with four layers, but the OSI model goes a step further and breaks TCP/IP layers 1 and 4 into five additional layers, resulting in seven layers in total. As we can see from the cover image, layers 5, 6 and 7 of OSI map to layer 4 of TCP/IP, and layers 1 and 2 of OSI map to layer 1 of TCP/IP.</p>
<p>The TCP/IP model is more practically oriented and derives its name from the TCP and IP protocols, which serve as the foundational elements of the internet. The OSI model, on the other hand, provides a more theoretical perspective on networks, making it better suited for understanding the functions of each layer.</p>
<h2 id="heading-short-overview-of-layer-functions">Short overview of layer functions</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705251743389/63de300f-b6d7-408f-8ce4-00275c43bb86.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-what-happens-with-the-data-at-each-layer">What happens with the data at each layer?</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705262058855/1a7679c0-155f-4faa-b2b6-12a9f344eb8b.png" alt class="image--center mx-auto" /></p>
<p>Data is also described as a Protocol Data Unit (PDU). As the data moves through the layers, it is assigned a new name at different layers for better differentiation.<br />PDUs can have the following names: Bits, Frame, Packet, Segment (when the transport protocol TCP is used), Datagram (when the transport protocol UDP is used) and Data.<br />The following table presents a short overview.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>PDU Name</td><td>Layer</td></tr>
</thead>
<tbody>
<tr>
<td>Data</td><td>Application, Presentation and Session</td></tr>
<tr>
<td>Segment [TCP], Datagram [UDP]</td><td>Transport</td></tr>
<tr>
<td>Packet</td><td>Network</td></tr>
<tr>
<td>Frame</td><td>Data Link</td></tr>
<tr>
<td>Bits</td><td>Physical</td></tr>
</tbody>
</table>
</div><p>A PDU consists of a body with the actual data and a header that contains information about how to handle the PDU at each layer. When data is sent, PDUs are passed down from the highest layer (Application) to the lowest layer (Physical). At each layer the PDU gets processed and new information is added to the header. This is called encapsulation.<br />When data is received, the flow runs in the opposite direction, from the lowest layer to the highest, until the data reaches the user. The header information that was added at each layer is now processed in reverse and removed afterwards, almost like peeling an onion layer by layer. This process is called de-encapsulation.</p>
<p>Let's look at it with an imaginary scenario. We work at a company and want to send a letter, so we put it in an envelope. The letter itself is our data and the envelope is our header. Before the envelope reaches the postman (the network media), it has to be passed down through different departments (the different layers). At each department our envelope with the letter in it is put into another envelope with new information added by that department. By the time the letter reaches the postman, it is nested inside multiple envelopes. The postman delivers it to another company with the same structure as ours and hands it to the department at the lowest level of the hierarchy. They open the outermost envelope, read the information relevant to them, take the inner envelopes out and pass them to the next higher department, which repeats the process until the letter reaches the intended recipient.</p>
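<p>The described flow can also be sketched in a few lines of Python (a toy model, not a real protocol implementation): on the way down each layer prepends its own header, and on the way up each header is read and stripped again. Note that <code>bytes.removeprefix</code> requires Python 3.9 or newer.</p>

```python
def encapsulate(data: bytes) -> bytes:
    """Wrap application data with toy headers, from the top layer down."""
    segment = b"TCP|" + data     # Transport layer -> segment
    packet = b"IP|" + segment    # Network layer   -> packet
    frame = b"ETH|" + packet     # Data link layer -> frame
    return frame                 # the Physical layer transmits this as bits

def de_encapsulate(frame: bytes) -> bytes:
    """Strip the toy headers again in reverse order, like peeling an onion."""
    packet = frame.removeprefix(b"ETH|")
    segment = packet.removeprefix(b"IP|")
    return segment.removeprefix(b"TCP|")

message = b"Hello, receiver!"
assert encapsulate(message) == b"ETH|IP|TCP|Hello, receiver!"
assert de_encapsulate(encapsulate(message)) == message
```

<p>Real headers of course contain addresses, ports and checksums rather than fixed strings, but the nesting and the reverse unwrapping work exactly like in this sketch.</p>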
]]></content:encoded></item><item><title><![CDATA[How does data travel the internet?]]></title><description><![CDATA[What is the internet?
To answer this question we first have to understand what exactly the internet is. In short the internet is the interconnection of multiple networks into one big network that stretches around the globe.
How are those networks int...]]></description><link>https://blog.bagheri.me/how-does-data-travel-the-internet</link><guid isPermaLink="true">https://blog.bagheri.me/how-does-data-travel-the-internet</guid><category><![CDATA[networking]]></category><category><![CDATA[network]]></category><category><![CDATA[internet]]></category><dc:creator><![CDATA[Mahdi Bagheri]]></dc:creator><pubDate>Sat, 06 Jan 2024 14:34:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704924248657/9297be0b-c5dd-4b8d-be86-aab9494c6079.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-is-the-internet">What is the internet?</h3>
<p>To answer the question in the title, we first have to understand what exactly the internet is. In short, the internet is the interconnection of multiple networks into one big network that stretches around the globe.</p>
<h3 id="heading-how-are-those-networks-interconnected">How are those networks interconnected?</h3>
<p>Different types of equipment are at work here. We can categorize them into end devices, intermediary devices and network media. End devices sit at the source or destination of every data transmission; our computers, notebooks, smartphones, tablets and printers count among them. Intermediary devices connect different networks with each other and route data between those networks; routers and switches are part of this category. What is left is the network media, which connects those devices with each other so a connection can be established between them: cables and wireless links.</p>
<p>The infrastructure we can enjoy today didn't happen overnight. Over the years all those devices had to be physically connected across larger regions, between countries and even continents. We can even see a map of the cables that have been laid underwater at <a target="_blank" href="https://www.submarinecablemap.com/">https://www.submarinecablemap.com/</a>. For wireless transmissions, antennas and satellites are used.</p>
<h3 id="heading-how-can-we-access-the-internet">How can we access the internet?</h3>
<p>We receive access through our Internet Service Provider (ISP). They either provide us with a preconfigured router or we can use our own and configure it accordingly. Some routers have a built-in modem and switch. The modem allows us to connect to our ISP, and the switch lets us establish at least a cable connection if the router or our end devices lack wireless functionality. When everything is set up and all devices are connected, we have created our own network with internet access. Connected devices receive a unique IP address from the router. This allows the router to know which device is sending and receiving data later on.</p>
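<p>The addresses a home router hands out usually come from the private ranges reserved in RFC 1918 (such as 192.168.0.0/16 and 10.0.0.0/8); they are only unique within our own network, not on the internet. A small sketch using Python's standard <code>ipaddress</code> module shows the distinction; the sample addresses are just examples.</p>

```python
# Home routers typically assign addresses from the private RFC 1918
# ranges. Python's ipaddress module can tell private (LAN) addresses
# apart from public (internet-routable) ones.
import ipaddress

for addr in ["192.168.0.42", "10.0.0.5", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    kind = "private (LAN)" if ip.is_private else "public (internet)"
    print(f"{addr}: {kind}")
```

<p>Running this marks the first two addresses as private and the last one (a well-known public DNS server) as public.</p>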
<p>Networks can be as small as a home network, but can also be as big as a company network with many hundreds or thousands of users and devices. Depending on the scenario there are many types of network devices with advanced functions that can be used to fulfill the desired needs.</p>
<h3 id="heading-how-does-data-travel-the-internet">How does data travel the internet?</h3>
<p>At last we circle back to our initial question. When we think about network connections, we need to think in terms of data. But not in big chunks of data; rather in smaller pieces, also described as packets, into which larger files are divided by our end device before they are sent.</p>
<p>When we browse the web, we make requests to receive data from a destination. When we visit a website like Google, Amazon or Netflix, we ask its server to send us information so it can be shown in the browser. Our requests are sent in the form of packets, as mentioned earlier.<br />Those packets first reach our router, because it is our gateway to all the information the internet provides. Each packet contains information about the sender and destination in the form of IP addresses. Our router now has to find the next best path for the packets and sends them on their way. On their journey those packets pass through multiple routers, and at each stop the router again tries to find the next best path so the packets can reach their destination.<br />After arriving at the destination, the information inside those packets is evaluated and the requested resource is sent back to the source of the request (our end device). This is only possible because the packets contain the information about where they came from. And as you can guess, the destination's answer is sent in the form of packets as well.<br />Our router receives those response packets and forwards them to the device waiting for them. The end device puts all those packets back together and presents us with the awaited web page, stream or downloaded file.</p>
<p>We see that intermediary devices like routers play a crucial role in networks. They provide our devices with a unique IP address so they can be distinguished from one another. They receive data from inside and outside the local network, evaluate the packet information and forward the packets accordingly. There are many more devices with equally important functionalities, such as firewall appliances that are responsible for the security of networks. But in this article we were interested in the movement of data between networks, so our focus was on routers.</p>
]]></content:encoded></item></channel></rss>