<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[All things Backend n' Cloud w/Oluwafemi]]></title><description><![CDATA[Hey, welcome! I'm an experienced backend and cloud engineer. This is a space where I write about all things backend, cloud, devops, infra, automation, engineeri]]></description><link>https://oluwafemiakinde.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 20:27:04 GMT</lastBuildDate><atom:link href="https://oluwafemiakinde.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Circuit Breakers in Go: Preventing Cascading Failures]]></title><description><![CDATA[Overview
Circuit Breakers
To put it simply, circuit breaker in the context of software engineering is a pattern that prevents further damage made to a system/service when a section of it fails, allowing the other parts of the system to continue funct...]]></description><link>https://oluwafemiakinde.dev/circuit-breakers-in-go-preventing-cascading-failures</link><guid isPermaLink="true">https://oluwafemiakinde.dev/circuit-breakers-in-go-preventing-cascading-failures</guid><category><![CDATA[Go Language]]></category><category><![CDATA[circuit breaker]]></category><category><![CDATA[Resilience]]></category><dc:creator><![CDATA[Oluwafemi Akinde]]></dc:creator><pubDate>Fri, 07 Jun 2024 23:32:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717805149798/a9027e04-26c4-439c-8e4b-828c08df8bc1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-overview"><strong>Overview</strong></h3>
<p><strong>Circuit Breakers</strong></p>
<p>To put it simply, a circuit breaker in the context of software engineering is a pattern that prevents further damage to a system/service when a section of it fails, allowing the other parts of the system to continue functioning smoothly. You would usually reach for the circuit breaker pattern when implementing resiliency and reliability in your system.</p>
<p>A circuit breaker detects failures and encapsulates the logic of handling those failures in a way that prevents the failure from constantly recurring. For example, circuit breakers are useful when dealing with network calls to external services, databases, or really any part of your system that might fail temporarily. By using a circuit breaker, you can prevent cascading failures, manage temporary errors, and maintain a stable and responsive system amidst a breakdown.</p>
<h4 id="heading-cascading-failures"><strong>Cascading Failures</strong></h4>
<p>Cascading failures occur when a failure in one part of the system triggers failures in other parts, leading to widespread disruption. An example is when a microservice in a distributed system becomes unresponsive, causing dependent services to time out and eventually fail. Depending on the scale of the application, the impact of these failures can be catastrophic, degrading performance and hurting user experience.</p>
<h3 id="heading-circuit-breaker-patterns"><strong>Circuit Breaker Patterns</strong></h3>
<p>A circuit breaker itself is a technique/pattern, and it operates in three different states, which we will talk about:</p>
<ol>
<li><p><strong>Closed State:</strong> In a closed state, the circuit breaker allows all requests to pass through to the target service normally as they would. If the requests are successful, the circuit remains closed. However, if a certain threshold of failures is reached, the circuit transitions to the open state.</p>
<p> Think of it like a fully operational service where users can log in and access data without issues. Everything is running smoothly.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716630492980/6033f0c5-ba75-4e0d-8f88-dae10eb5c7fa.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Open State</strong>: In an open state, the circuit breaker immediately fails all incoming requests without attempting to contact the target service. This state is entered to prevent further overload of the failing service and give it time to recover. After a predefined timeout, the circuit breaker moves to the half-open state. A relatable example: imagine an online store experiencing a sudden issue where every purchase attempt fails. To avoid overwhelming the system, the store temporarily stops accepting any new purchase requests.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716630513360/40678a33-133d-419e-8487-065b3e037577.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Half-Open State</strong>: In the half-open state, the circuit breaker allows a (configurable) limited number of test requests to pass through to the target service. If these requests are successful, the circuit transitions back to the closed state; if they fail, the circuit returns to the open state. In the online-store example I gave above, this is where the store starts to allow a few purchase attempts to see if the issue has been fixed. If these few attempts succeed, the store fully reopens its service to accept new purchase requests.</p>
<p> This diagram shows the circuit breaker checking whether requests to <strong>Service B</strong> are successful; they fail, so the circuit breaks (opens):</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716634698055/832fe957-e1d1-48c6-85d6-2a7428592adc.png" alt class="image--center mx-auto" /></p>
<p> The follow-up diagram then shows that when the test requests to <strong>Service B</strong> succeed, the circuit is closed, and all further calls are routed to <strong>Service B</strong> again:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716635083575/4ddbca6b-83cf-4820-8025-20ed205caf50.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p><strong>Note</strong>: Key configurations for a circuit breaker include the failure threshold (number of failures needed to open the circuit), the timeout for the open state, and the number of test requests in the half-open state.</p>
<h3 id="heading-implementing-circuit-breakers-in-go"><strong>Implementing Circuit Breakers in Go</strong></h3>
<p>It's important to mention that prior knowledge of Go is required to follow along in this article.</p>
<p>Let's continue.</p>
<p>As with any software engineering pattern, circuit breakers can be implemented in various languages. However, this article will focus on implementation in Golang. While there are several libraries available for this purpose, such as <code>goresilience</code>, <code>go-resiliency</code>, and <code>gobreaker</code>, we will specifically concentrate on using the <code>gobreaker</code> library.</p>
<p><strong>Pro Tip:</strong> You can see the internal circuit breaker implementation of the <code>gobreaker</code> library <a target="_blank" href="https://github.com/sony/gobreaker/blob/master/v2/gobreaker.go">here</a>.</p>
<p>Let's consider a simple Go application where a circuit breaker is implemented to handle calls to an external API. This basic example demonstrates how to wrap an external API call with the circuit breaker technique:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="1f8b7f4db50e443f21be6cadf971dc86"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/SirPhemmiey/1f8b7f4db50e443f21be6cadf971dc86" class="embed-card">https://gist.github.com/SirPhemmiey/1f8b7f4db50e443f21be6cadf971dc86</a></div><p> </p>
<p>Let's touch on a few important things:</p>
<ol>
<li><p><code>gobreaker.NewCircuitBreaker</code> function initializes the circuit breaker with our custom settings</p>
</li>
<li><p><code>cb.Execute</code> method wraps the HTTP request, automatically managing the circuit state.</p>
</li>
<li><p><strong>MaxRequests</strong> is the maximum number of requests allowed to pass through when the state is half-open</p>
</li>
<li><p><strong>Interval</strong> is the cyclic period of the closed state for the circuit breaker to clear the internal counts</p>
</li>
<li><p><strong>Timeout</strong> is the duration before transitioning from open to half-open state.</p>
</li>
<li><p><strong>ReadyToTrip</strong> is called with a copy of <code>counts</code> whenever a request fails in the closed state. If ReadyToTrip returns <code>true</code>, the circuit breaker will be placed into the open state. In our case here, it returns true if requests have failed more than three consecutive times.</p>
</li>
<li><p><strong>OnStateChange</strong> is called whenever the state of the circuit breaker changes. You would usually want to collect the metrics of the state change here and report to any metrics collector of your choice.</p>
</li>
</ol>
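<p>Since <code>ReadyToTrip</code> is just a predicate over the failure counts, it is easy to reason about in isolation. Here is a standalone sketch of the condition described above, with a hypothetical <code>Counts</code> struct standing in for <code>gobreaker.Counts</code> (only the field the predicate inspects is included):</p>

```go
package main

import "fmt"

// Counts is a hypothetical stand-in for gobreaker.Counts, carrying only
// the field this predicate inspects.
type Counts struct {
	ConsecutiveFailures uint32
}

// readyToTrip opens the circuit once requests have failed more than
// three consecutive times, matching the condition described above.
func readyToTrip(c Counts) bool {
	return c.ConsecutiveFailures > 3
}

func main() {
	fmt.Println(readyToTrip(Counts{ConsecutiveFailures: 3})) // false: not yet
	fmt.Println(readyToTrip(Counts{ConsecutiveFailures: 4})) // true: trips on the fourth failure
}
```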
<p>Let's write some unit tests to verify our circuit breaker implementation. I will only be explaining the most critical unit tests to understand. You can check <a target="_blank" href="https://github.com/SirPhemmiey/circuit-breaker-with-go/blob/main/main_test.go">here</a> for the full code.</p>
<ol>
<li><p>We will write a test that simulates consecutive failed requests and checks if the circuit breaker trips to the open state. Essentially, after 3 failures, when the fourth failure occurs, we expect the circuit breaker to trip (open), since our condition says <code>counts.ConsecutiveFailures &gt; 3</code>. Here's what the test looks like:</p>
<pre><code class="lang-go"> t.Run(<span class="hljs-string">"FailedRequests"</span>, <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(t *testing.T)</span></span> {
         <span class="hljs-comment">// Override callExternalAPI to simulate failure</span>
         callExternalAPI = <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">int</span>, error)</span></span> {
             <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>, errors.New(<span class="hljs-string">"simulated failure"</span>)
         }

         <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">4</span>; i++ {
             _, err := cb.Execute(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">interface</span>{}, error)</span></span> {
                 <span class="hljs-keyword">return</span> callExternalAPI()
             })
             <span class="hljs-keyword">if</span> err == <span class="hljs-literal">nil</span> {
                 t.Fatalf(<span class="hljs-string">"expected error, got none"</span>)
             }
         }

         <span class="hljs-keyword">if</span> cb.State() != gobreaker.StateOpen {
             t.Fatalf(<span class="hljs-string">"expected circuit breaker to be open, got %v"</span>, cb.State())
         }
     })
</code></pre>
</li>
<li><p>We will test the <strong>open</strong> &gt; <strong>half-open</strong> &gt; <strong>closed</strong> transitions. We will first simulate an open circuit and wait for the timeout. After the timeout, we need at least one successful request for the circuit to transition to half-open, and from the half-open state we need further successful requests for the circuit to be fully closed again. If there is no successful request in that window, the circuit goes back to being open. Here's what the test looks like:</p>
<pre><code class="lang-go"> <span class="hljs-comment">//Simulates the circuit breaker being open, </span>
 <span class="hljs-comment">//wait for the defined timeout, </span>
 <span class="hljs-comment">//then check if it closes again after a successful request.</span>
     t.Run(<span class="hljs-string">"RetryAfterTimeout"</span>, <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(t *testing.T)</span></span> {
         <span class="hljs-comment">// Simulate circuit breaker opening</span>
         callExternalAPI = <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">int</span>, error)</span></span> {
             <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>, errors.New(<span class="hljs-string">"simulated failure"</span>)
         }

         <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">4</span>; i++ {
             _, err := cb.Execute(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">interface</span>{}, error)</span></span> {
                 <span class="hljs-keyword">return</span> callExternalAPI()
             })
             <span class="hljs-keyword">if</span> err == <span class="hljs-literal">nil</span> {
                 t.Fatalf(<span class="hljs-string">"expected error, got none"</span>)
             }
         }

         <span class="hljs-keyword">if</span> cb.State() != gobreaker.StateOpen {
             t.Fatalf(<span class="hljs-string">"expected circuit breaker to be open, got %v"</span>, cb.State())
         }

         <span class="hljs-comment">// Wait for timeout duration</span>
         time.Sleep(settings.Timeout + <span class="hljs-number">1</span>*time.Second)

         <span class="hljs-comment">//We expect that after the timeout period, </span>
         <span class="hljs-comment">//the circuit breaker should transition to the half-open state. </span>

         <span class="hljs-comment">// Restore original callExternalAPI to simulate success</span>
         callExternalAPI = <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">int</span>, error)</span></span> {
             resp, err := http.Get(server.URL)
             <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
                 <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>, err
             }
             <span class="hljs-keyword">defer</span> resp.Body.Close()
             <span class="hljs-keyword">return</span> resp.StatusCode, <span class="hljs-literal">nil</span>
         }

         _, err := cb.Execute(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">interface</span>{}, error)</span></span> {
             <span class="hljs-keyword">return</span> callExternalAPI()
         })
         <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
             t.Fatalf(<span class="hljs-string">"expected no error, got %v"</span>, err)
         }

         <span class="hljs-keyword">if</span> cb.State() != gobreaker.StateHalfOpen {
             t.Fatalf(<span class="hljs-string">"expected circuit breaker to be half-open, got %v"</span>, cb.State())
         }

         <span class="hljs-comment">//After verifying the half-open state, another successful request is simulated to ensure the circuit breaker transitions back to the closed state.</span>
         <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; <span class="hljs-keyword">int</span>(settings.MaxRequests); i++ {
             _, err = cb.Execute(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">interface</span>{}, error)</span></span> {
                 <span class="hljs-keyword">return</span> callExternalAPI()
             })
             <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
                 t.Fatalf(<span class="hljs-string">"expected no error, got %v"</span>, err)
             }
         }

         <span class="hljs-keyword">if</span> cb.State() != gobreaker.StateClosed {
             t.Fatalf(<span class="hljs-string">"expected circuit breaker to be closed, got %v"</span>, cb.State())
         }
     })
</code></pre>
</li>
<li><p>Let's test the <code>ReadyToTrip</code> condition, which now trips the breaker after more than 2 consecutive failed requests. We'll have a variable that tracks consecutive failures. The <code>ReadyToTrip</code> callback is updated so the circuit breaker trips once <code>counts.ConsecutiveFailures &gt; 2</code>, i.e. on the third consecutive failure. We will write a test that simulates failures and verifies both the failure count and that the circuit breaker transitions to the open state.</p>
<pre><code class="lang-go"> t.Run(<span class="hljs-string">"ReadyToTrip"</span>, <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(t *testing.T)</span></span> {
         failures := <span class="hljs-number">0</span>
         settings.ReadyToTrip = <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(counts gobreaker.Counts)</span> <span class="hljs-title">bool</span></span> {
             failures = <span class="hljs-keyword">int</span>(counts.ConsecutiveFailures)
             <span class="hljs-keyword">return</span> counts.ConsecutiveFailures &gt; <span class="hljs-number">2</span> <span class="hljs-comment">// Trip after more than 2 failures</span>
         }

         cb = gobreaker.NewCircuitBreaker(settings)

         <span class="hljs-comment">// Simulate failures</span>
         callExternalAPI = <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">int</span>, error)</span></span> {
             <span class="hljs-keyword">return</span> <span class="hljs-number">0</span>, errors.New(<span class="hljs-string">"simulated failure"</span>)
         }
         <span class="hljs-keyword">for</span> i := <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">3</span>; i++ {
             _, err := cb.Execute(<span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-params">(<span class="hljs-keyword">interface</span>{}, error)</span></span> {
                 <span class="hljs-keyword">return</span> callExternalAPI()
             })
             <span class="hljs-keyword">if</span> err == <span class="hljs-literal">nil</span> {
                 t.Fatalf(<span class="hljs-string">"expected error, got none"</span>)
             }
         }

         <span class="hljs-keyword">if</span> failures != <span class="hljs-number">3</span> {
             t.Fatalf(<span class="hljs-string">"expected 3 consecutive failures, got %d"</span>, failures)
         }
         <span class="hljs-keyword">if</span> cb.State() != gobreaker.StateOpen {
             t.Fatalf(<span class="hljs-string">"expected circuit breaker to be open, got %v"</span>, cb.State())
         }
     })
</code></pre>
</li>
</ol>
<h3 id="heading-advanced-strategies">Advanced Strategies</h3>
<p>We can take it a step further by adding an exponential backoff strategy to our circuit breaker implementation. To keep this article simple and concise, we will demonstrate only the exponential backoff strategy. However, there are other advanced strategies for circuit breakers worth mentioning, such as load shedding, bulkheading, fallback mechanisms, and context and cancellation. These strategies enhance the robustness and functionality of circuit breakers. Here's an example of using the exponential backoff strategy:</p>
<p><strong>Exponential Backoff</strong></p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="a19af4b469d5a67787ba14f8eeccb1d4"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/SirPhemmiey/a19af4b469d5a67787ba14f8eeccb1d4" class="embed-card">https://gist.github.com/SirPhemmiey/a19af4b469d5a67787ba14f8eeccb1d4</a></div><p> </p>
<p>Let's make a couple of things clear:</p>
<p><strong>Custom Backoff Function:</strong> The <code>exponentialBackoff</code> function implements an exponential backoff strategy with a jitter. It basically calculates the backoff time based on the number of attempts, ensuring that the delay increases exponentially with each retry attempt.</p>
<p><strong>Handling Retries:</strong> As you can see in the <code>/api</code> handler, the logic now includes a loop that attempts to call the external API up to a specified number of attempts (<code>attempts := 5</code>). After each failed attempt, we wait for a duration determined by the <code>exponentialBackoff</code> function before retrying.</p>
<p><strong>Circuit Breaker Execution:</strong> The circuit breaker is used within the loop. If the external API call succeeds (<code>err == nil</code>), the loop breaks, and the successful result is returned. If all attempts fail, an HTTP 503 (Service Unavailable) error is returned.</p>
<p>Integrating custom backoff strategy in a circuit breaker implementation indeed aims to handle transient errors more gracefully. The increasing delays between retries help reduce the load on failing services, allowing them time to recover. As evident in our code above, our <code>exponentialBackoff</code> function was introduced to add delays between retries when calling an external API.</p>
<p>Additionally, we can integrate metrics and logging to monitor circuit breaker state changes using tools like Prometheus for real-time monitoring and alerting. Here's a simple example:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="e9af8e9d0e0adf13e2058beb1fc3ee42"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/SirPhemmiey/e9af8e9d0e0adf13e2058beb1fc3ee42" class="embed-card">https://gist.github.com/SirPhemmiey/e9af8e9d0e0adf13e2058beb1fc3ee42</a></div><p> </p>
<p>As you'll see, we have now done the following:</p>
<ol>
<li><p>In L16-21, we define a Prometheus counter vector to keep track of the number of requests and their state (success, failure, circuit breaker state changes).</p>
</li>
<li><p>In L25-26, the metrics defined are registered with Prometheus in the <code>init</code> function.</p>
<p> <strong>Pro Tip</strong>: The <code>init</code> function in Go is used to initialize the state of a package before the <code>main</code> function or any other code in the package is executed. In this case, the <code>init</code> function registers the <code>requestCount</code> metric with Prometheus. This essentially ensures that Prometheus is aware of this metric and can start collecting data as soon as the application starts running.</p>
</li>
<li><p>We create the circuit breaker with custom settings, including the <code>ReadyToTrip</code> function that increases the failure counter and determines when to trip the circuit</p>
</li>
<li><p>We use <code>OnStateChange</code> to log state changes and increment the corresponding Prometheus metric</p>
</li>
<li><p>We expose the Prometheus metrics at <code>/metrics</code> endpoint</p>
</li>
</ol>
<h3 id="heading-wrapping-up">Wrapping Up</h3>
<p>To wrap up this article, I hope you saw how circuit breakers play a huge role in building resilient and reliable systems. By proactively preventing cascading failures, they fortify the reliability of microservices and distributed systems, ensuring a seamless user experience even in the face of adversity.</p>
<p>Keep in mind, any system designed for scalability must incorporate strategies to gracefully handle failures and swiftly recover. - <strong>Oluwafemi</strong>, <strong>2024</strong></p>
]]></content:encoded></item><item><title><![CDATA[The Beacon API: Enhancing Web Performance with Background Data Transmission]]></title><description><![CDATA[We all know that sending data from a client to a server, especially as a web page is closing, is essential. This article explains how Beacon API makes this easy - a web standard designed to send small bits of data to the server without slowing down t...]]></description><link>https://oluwafemiakinde.dev/the-beacon-api-enhancing-web-performance-with-background-data-transmission</link><guid isPermaLink="true">https://oluwafemiakinde.dev/the-beacon-api-enhancing-web-performance-with-background-data-transmission</guid><category><![CDATA[Background Data Transmission]]></category><category><![CDATA[web performance]]></category><category><![CDATA[web analytics]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[asynchronous]]></category><category><![CDATA[Beacon API]]></category><dc:creator><![CDATA[Oluwafemi Akinde]]></dc:creator><pubDate>Mon, 22 Apr 2024 22:36:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713824939475/a414c408-22b0-4e20-bd72-0b6fb2681da9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We all know that sending data from a client to a server, especially as a web page is closing, is essential. This article explains how Beacon API makes this easy - a web standard designed to send small bits of data to the server without slowing down the page or disrupting the user experience.</p>
<h2 id="heading-what-is-the-beacon-api"><strong>What is the Beacon API?</strong></h2>
<p>The Beacon API is a JavaScript-based interface that allows web pages to send data to a server in the background, asynchronously, and without waiting for a response. As you would have guessed, this is useful for sending analytics or diagnostic information that doesn't typically require a response from your server or backend or just before the user leaves a page (for example, during the <code>unload</code> or <code>beforeunload</code> events).</p>
<h3 id="heading-key-features-of-the-beacon-api"><strong>Key Features of the Beacon API</strong></h3>
<ul>
<li><p><strong>Asynchronous Data Transfer:</strong> Unlike AJAX requests, Beacon requests do not require a response from the server, allowing the user to navigate away from the page immediately without delay.</p>
</li>
<li><p><strong>Reliability:</strong> The data is transmitted to the server more reliably. Even if the page is being <code>unloaded</code>, the browser will attempt to send the Beacon data in the background.</p>
</li>
<li><p><strong>Efficiency:</strong> It uses HTTP POST requests and does not impact the performance or the loading time of the web page.</p>
</li>
</ul>
<h2 id="heading-lets-get-to-it-how-does-it-work"><strong>Let's get to it. How Does It Work?</strong></h2>
<p>The Beacon API's main function is <code>navigator.sendBeacon(url, data)</code>, where <code>url</code> is the server endpoint to which data is sent, and <code>data</code> is the payload. The data can be any of several types, including <code>ArrayBuffer</code>, <code>Blob</code>, <code>DOMString</code>, <code>FormData</code>, or <code>URLSearchParams</code> as long as that's what your server or backend is expecting.</p>
<h3 id="heading-implementation-steps"><strong>Implementation Steps</strong></h3>
<ol>
<li><p><strong>Check for Support:</strong> It's usually good practice to first check whether the user's browser supports it. If it does, go ahead and use the Beacon API.</p>
<pre><code class="lang-typescript"> <span class="hljs-keyword">if</span> (navigator.sendBeacon) {
     <span class="hljs-comment">// Beacon API is supported</span>
 }
</code></pre>
</li>
<li><p><strong>Sending Data:</strong> To send data with the Beacon API, we simply call <code>navigator.sendBeacon()</code> with the endpoint and data to be sent.</p>
<pre><code class="lang-typescript"> <span class="hljs-built_in">window</span>.addEventListener(<span class="hljs-string">'unload'</span>, <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">event</span>) </span>{
   <span class="hljs-keyword">var</span> data = { userAction: <span class="hljs-string">'pageExit'</span>, timestamp: <span class="hljs-built_in">Date</span>.now() };
   <span class="hljs-keyword">var</span> beaconUrl = <span class="hljs-string">'https://example.com/analytics'</span>;
   navigator.sendBeacon(beaconUrl, <span class="hljs-built_in">JSON</span>.stringify(data));
 });
</code></pre>
</li>
<li><p><strong>Server-Side Handling:</strong> On the server, you'll receive the Beacon request just like any other POST request. The data can be processed or stored as needed.</p>
</li>
</ol>
<h3 id="heading-lets-see-more-examples-and-uses-cases"><strong>Let's see more examples and uses cases</strong></h3>
<ol>
<li><p><strong>Sending Analytics Data on Page Unload:</strong> With the Beacon API, you can send user interaction data to an analytics endpoint when the user leaves the page. This is useful for capturing page session times, button clicks, or any actions the user performed on the page.</p>
<pre><code class="lang-typescript"> <span class="hljs-built_in">window</span>.addEventListener(<span class="hljs-string">'beforeunload'</span>, <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">event</span>) </span>{
   <span class="hljs-keyword">const</span> analyticsData = {
     sessionDuration: <span class="hljs-built_in">Date</span>.now() - <span class="hljs-built_in">window</span>.sessionStartTime, <span class="hljs-comment">// Assuming sessionStartTime was recorded at page load</span>
     actions: <span class="hljs-built_in">window</span>.userActions, <span class="hljs-comment">// Assuming userActions were recorded during the session</span>
   };

   navigator.sendBeacon(<span class="hljs-string">'https://youranalyticsendpoint.com/data'</span>, <span class="hljs-built_in">JSON</span>.stringify(analyticsData));
 });
</code></pre>
</li>
<li><p><strong>Tracking Form Data Without Submission:</strong> The Beacon API can be used for draft logic to partially save form details. It allows you to send data to your backend asynchronously. It's useful for creating an autosave or draft feature in a web app.</p>
<pre><code class="lang-typescript">
 <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">saveDraft</span>(<span class="hljs-params"></span>) </span>{
   <span class="hljs-keyword">const</span> formElement = <span class="hljs-built_in">document</span>.getElementById(<span class="hljs-string">'your-form-id'</span>);
   <span class="hljs-keyword">const</span> formData = <span class="hljs-keyword">new</span> FormData(formElement);

   <span class="hljs-comment">// send draft data to the server using Beacon API</span>
   <span class="hljs-keyword">const</span> draftUrl = <span class="hljs-string">'https://yourserver.com/saveDraft'</span>;
   <span class="hljs-keyword">const</span> success = navigator.sendBeacon(draftUrl, formData);
   <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Draft save initiated:'</span>, success ? <span class="hljs-string">'Success'</span> : <span class="hljs-string">'Failed'</span>);
 }

 <span class="hljs-comment">// trigger the saveDraft function on form input (throttled)</span>
 formElement.addEventListener(<span class="hljs-string">'input'</span>, <span class="hljs-function">() =&gt;</span> {
   <span class="hljs-comment">// it's usually a best practice to use a throttle/debounce function to </span>
   <span class="hljs-comment">//prevent too many Beacon requests</span>
   <span class="hljs-keyword">if</span> (<span class="hljs-built_in">window</span>.draftSaveTimeout) {
     <span class="hljs-built_in">clearTimeout</span>(<span class="hljs-built_in">window</span>.draftSaveTimeout);
   }

   <span class="hljs-built_in">window</span>.draftSaveTimeout = <span class="hljs-built_in">setTimeout</span>(saveDraft, <span class="hljs-number">500</span>); <span class="hljs-comment">// save draft every 500 ms of inactivity</span>
 });

 <span class="hljs-comment">// Additional save on page unload</span>
 <span class="hljs-built_in">window</span>.addEventListener(<span class="hljs-string">'unload'</span>, saveDraft);
</code></pre>
<p> When implementing this approach, it's important to keep the payload size limitations in mind and to handle the data properly on the server side: the backend should accept partial form data and process these requests "silently", since the client never waits for a response.</p>
</li>
</ol>
<h3 id="heading-advantage-over-traditional-methods"><strong>Advantage Over Traditional Methods</strong></h3>
<p>Before the Beacon API, sending data to the server during <code>unload</code> events was less reliable. Traditional AJAX requests might be cancelled if they were initiated during these events, leading to data loss. The Beacon API ensures that the data is transmitted even after the page has started unloading.</p>
<h2 id="heading-limitations-and-considerations"><strong>Limitations and Considerations</strong></h2>
<ul>
<li><p>Since the Beacon API does not expose the server's response, it's not suitable for tasks that need to read or act on that response.</p>
</li>
<li><p>Some browsers may impose their own limits on the size of the data payload.</p>
</li>
</ul>
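<p>Related to the size limitation above: <code>sendBeacon</code> returns <code>false</code> when the browser refuses to queue the data (for example, a too-large payload), so one common pattern is to fall back to <code>fetch</code> with <code>keepalive</code>. The sketch below is my own suggestion, not part of the Beacon API, and the endpoint URL is a placeholder:</p>

```typescript
// Try sendBeacon first; fall back to fetch with keepalive when the beacon
// is rejected or unsupported. keepalive lets the request outlive the page.
function sendAnalytics(url: string, data: object): void {
  const payload = JSON.stringify(data);
  const nav = (globalThis as any).navigator;
  if (nav && typeof nav.sendBeacon === "function" && nav.sendBeacon(url, payload)) {
    return; // queued by the browser; no response will ever be available
  }
  const doFetch = (globalThis as any).fetch;
  if (doFetch) {
    doFetch(url, { method: "POST", body: payload, keepalive: true }).catch(() => {});
  }
}
```

<p>The <code>fetch</code> fallback also gives you a response if you ever need one, at the cost of the beacon's fire-and-forget simplicity.</p>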
<p>In general, the Beacon API provides a reliable, efficient method to send data to the server without affecting the user experience.</p>
]]></content:encoded></item><item><title><![CDATA[Leveraging the Power of Google Cloud Preemptible VMs for Cost-Effective Computing]]></title><description><![CDATA[Overview
In the world of cloud computing, optimizing costs without sacrificing performance is a constant challenge. One way that Google Cloud offers to address this is through preemptible virtual machines.
For AWS folks, it's called AWS EC2 Spot Inst...]]></description><link>https://oluwafemiakinde.dev/leveraging-the-power-of-google-cloud-preemptible-vms-for-cost-effective-computing</link><guid isPermaLink="true">https://oluwafemiakinde.dev/leveraging-the-power-of-google-cloud-preemptible-vms-for-cost-effective-computing</guid><category><![CDATA[GCP]]></category><category><![CDATA[Devops]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[infrastructure]]></category><dc:creator><![CDATA[Oluwafemi Akinde]]></dc:creator><pubDate>Fri, 23 Jun 2023 06:25:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687498827863/5e4fac93-765a-44f1-a0e5-8a0f0da4aef7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-overview">Overview</h2>
<p>In the world of cloud computing, optimizing costs without sacrificing performance is a constant challenge. One way that Google Cloud offers to address this is through preemptible virtual machines.</p>
<p><em>For AWS folks, it's called AWS EC2 Spot Instance. The idea behind a spot instance and preemptible VM is the same.</em></p>
<p>Essentially, preemptible VMs provide a cost-effective solution for running fault-tolerant and non-critical workloads. In this article, we will explore the benefits, limitations and use cases of preemptible VMs/Spot Instances, how the preemption process works, how to create a preemptible VM from a regular instance, and more.</p>
<h2 id="heading-what-exactly-is-a-preemptible-vm">What exactly is a Preemptible VM?</h2>
<p>Google Cloud preemptible VMs are similar to regular instances but come with a significant cost advantage. Well, the tradeoff is that these VMs/Spot Instances may be terminated by Google/AWS at any time, albeit with short notice (about 30 seconds on Google Cloud, two minutes on AWS). While this means they are not suitable for long-running, critical tasks, they are ideal for batch processing, distributed computing, and fault-tolerant applications.</p>
<p>Before we move ahead to spin up some preemptible VM instances (actually, in a follow-up article), I'd like to highlight the benefits, limitations and use cases.</p>
<h2 id="heading-benefits-of-preemptible-vms">Benefits of Preemptible VMs:</h2>
<ol>
<li><p><strong>Cost Efficiency</strong>: Preemptible VMs are priced significantly lower than regular instances, providing cost savings of up to 80%. This makes them an attractive option for workloads that can tolerate occasional interruptions.</p>
</li>
<li><p><strong>Scalability</strong>: By leveraging preemptible VMs, you can easily scale your infrastructure at a fraction of the cost. This is particularly advantageous for bursty (occurring at intervals in a short timespan) workloads that require additional resources <strong>temporarily</strong>.</p>
</li>
<li><p><strong>High Availability</strong>: Preemptible VMs can be used in combination with managed instance groups and autoscaling to ensure high availability and fault tolerance. The system automatically replaces preempted VMs with new ones, maintaining the desired level of capacity.</p>
</li>
</ol>
<h2 id="heading-limitations-of-preemptible-vms">Limitations of Preemptible VMs</h2>
<ol>
<li><p><strong>Limited Availability</strong>: Preemptible VMs are available on a "best-effort" basis and their availability is not guaranteed. They are offered at a significantly reduced price compared to regular VMs because Google Cloud can terminate them at any time. So, this means that they may not be suitable for applications requiring strict uptime or critical workloads.</p>
</li>
<li><p><strong>Maximum Runtime</strong>: Preemptible VMs have a maximum runtime limit of 24 hours. After this time, they will be automatically terminated by Google Cloud. If your application or job requires longer execution times, you need to account for this limitation and design your solution accordingly :).</p>
</li>
<li><p><strong>Termination without Warning</strong>: Preemptible VMs can be terminated at any time with minimal warning. Google Cloud typically provides only a 30-second notification before termination, so your applications and processes must be designed to handle sudden interruptions and gracefully recover or resume operations when a VM is preempted.</p>
</li>
<li><p><strong>Limited Quantity</strong>: There is a finite capacity of preemptible VMs available within a specific region and zone. If the demand for preemptible VMs exceeds the available capacity, you may not be able to launch new instances until capacity becomes available.</p>
</li>
<li><p><strong>Resource Constraints</strong>: Preemptible VMs have some resource constraints compared to regular VMs. For example, they cannot be live migrated to other hosts, and they have a limited amount of CPU and memory resources. These constraints may impact certain workloads or applications that require specific configurations or resource-intensive operations.</p>
</li>
</ol>
<p>Despite these limitations, preemptible VMs can still be a cost-effective option for certain use cases which are mentioned below:</p>
<h2 id="heading-usecases-of-preemptible-vms">Usecases of Preemptible VMs</h2>
<ol>
<li><p><strong>Batch Processing</strong>: Preemptible VMs are ideal for batch processing workloads that can be divided into smaller tasks or jobs. You can leverage the significant cost savings offered by preemptible VMs to run large-scale data processing, ETL or other batch jobs. If a batch job is preempted, it will be restarted on a new preemptible VM. However, the job may lose some of its state, so it is important to design the job in a way that minimizes the impact of preemptions.</p>
<p> For example, say you have a batch job that is processing a large file, running image analysis, or training a machine learning model. You could add checkpoint logic that records its progress every few minutes or seconds (the latter is recommended) at an interval you configure, and saves the state so that if the job is preempted at any point, it can continue from the last saved checkpoint once another VM is up and running - this is called the <strong>checkpointing technique</strong>.</p>
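<p> The checkpointing loop described above can be sketched as follows. This is a hypothetical illustration (the file path, item list and <code>processItem</code> callback are made up), and a real job would persist checkpoints to durable storage such as Cloud Storage rather than the VM's local disk:</p>

```typescript
import * as fs from "fs";

// Where progress is persisted between runs (local path for illustration only;
// use durable storage like GCS in production, since the VM's state is lost).
const CHECKPOINT_FILE = "/tmp/job-checkpoint.json";

interface Checkpoint { nextIndex: number; }

function loadCheckpoint(): Checkpoint {
  try {
    return JSON.parse(fs.readFileSync(CHECKPOINT_FILE, "utf8"));
  } catch {
    return { nextIndex: 0 }; // no checkpoint yet: start from the beginning
  }
}

function saveCheckpoint(cp: Checkpoint): void {
  fs.writeFileSync(CHECKPOINT_FILE, JSON.stringify(cp));
}

function runBatchJob(items: string[], processItem: (item: string) => void): void {
  const cp = loadCheckpoint();
  for (let i = cp.nextIndex; i < items.length; i++) {
    processItem(items[i]);
    saveCheckpoint({ nextIndex: i + 1 }); // progress survives a preemption
  }
}
```

<p> If the VM is preempted mid-run, the replacement instance simply calls <code>runBatchJob</code> again and skips the items that were already processed.</p>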
</li>
<li><p><strong>Test and Development Environments</strong>: Preemptible VMs can be used for creating temporary or short-term test and development environments. For instance, if your dev team requires isolated environments for testing, experimenting, or prototyping, preemptible VMs can provide the necessary resources at a much lower cost.</p>
</li>
<li><p><strong>Non-Critical Workloads</strong>: Applications or workloads that can tolerate occasional interruptions or delays are good candidates for preemptible VMs. Examples include non-production environments, non-critical background tasks, non-time-sensitive data processing, or non-mission-critical applications.</p>
</li>
<li><p><strong>DevOps:</strong> Preemptible VMs can be used for DevOps tasks, such as running continuous integration and continuous delivery (CI/CD) pipelines. These tasks can be interrupted and restarted without any loss of data, so they are well-suited for preemptible VMs. By leveraging the cost savings, you can scale your CI/CD infrastructure without incurring high expenses during idle or low-demand periods.</p>
</li>
<li><p><strong>High-Performance Computing (HPC)</strong>: For certain HPC workloads, preemptible VMs can be used to increase compute capacity while managing costs. Tasks such as rendering, simulation, scientific calculations, or distributed computing can benefit from the availability of preemptible VMs.</p>
</li>
<li><p><strong>Web Crawlers or Scrapers</strong>: Preemptible VMs can be used for web crawling or scraping tasks where the workload can be divided into smaller chunks or parallelized. The lower costs associated with preemptible VMs make them an attractive option for scraping data from websites or conducting periodic web crawls.</p>
</li>
</ol>
<p>Overall, batch jobs can be run on Google Cloud preemptible VMs, but it is important to design the job in a way that minimizes the impact of preemptions. By following these tips, you can save money on your batch processing jobs without sacrificing reliability. It is important to assess your application's requirements, resilience, and cost considerations before incorporating preemptible VMs into your infrastructure.</p>
<p>💡<em>Spot VMs are the latest version of preemptible VMs. New and existing preemptible VMs continue to be supported, and preemptible VMs use the same pricing model as Spot VMs. However, Spot VMs provide new features that preemptible VMs do not support. For example, preemptible VMs can only run for up to 24 hours at a time, but Spot VMs do not have a maximum runtime unless you</em> <a target="_blank" href="https://cloud.google.com/compute/docs/instances/limit-vm-runtime"><em>limit the runtime</em></a><em>. You can read more on them and decide which one to use for your project and/or tasks.</em></p>
<h2 id="heading-preemption-process">Preemption Process</h2>
<p>According to Google Cloud documentation, the preemption process is as follows:</p>
<ol>
<li><p>Once Compute Engine needs the capacity back, Google sends a preemption notification as an Advanced Configuration and Power Interface (ACPI) G2 Soft Off signal -- a standard motherboard soft shutdown command, which every OS can handle -- that signals the system must shut down.</p>
</li>
<li><p>Ideally, the Soft Off signal then triggers a shutdown script that users have previously configured to save any system state and application data, terminate processes and stop the VM.</p>
</li>
<li><p>If the instance is still running after 30 seconds, GCE sends an ACPI G3 Mechanical Off signal to the OS, which is the equivalent of pulling the power on a server.</p>
</li>
<li><p>The Compute Engine instance then enters a <a target="_blank" href="https://cloud.google.com/compute/docs/instances/stopping-or-deleting-an-instance">terminated state</a>, which preserves its configuration settings, metadata and attachments to other resources -- such as storage volumes -- but destroys in-memory data and VM state. Users can choose to restart or delete an instance in a terminated state, or leave it terminated indefinitely.</p>
</li>
</ol>
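<p>As an illustration of step 2, a minimal shutdown script might look like the sketch below. The service name and bucket are hypothetical, and the exact commands depend on your workload; the only hard requirement is that everything finishes within the ~30-second window:</p>

```bash
#!/bin/bash
# Hypothetical shutdown script: Compute Engine runs this when the ACPI G2
# Soft Off signal arrives, leaving roughly 30 seconds before hard power-off.

# Stop the application gracefully so it can flush in-flight work
# (the service name is made up for this example).
systemctl stop my-batch-worker.service

# Copy the latest checkpoint/state to durable storage
# (the bucket name is made up for this example).
gsutil cp /var/lib/my-batch-worker/checkpoint.json gs://my-checkpoint-bucket/ || true
```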
<p>Preempted instances still appear in your project, but you are not charged for the instance hours while it remains in a <code>TERMINATED</code> state.</p>
<h2 id="heading-converting-a-regular-vm-into-a-preemtible-vm">Converting a regular VM into a preemtible VM</h2>
<p>There's no direct way to convert an existing regular VM into a preemptible VM, but there's a workaround, and I'll show you the steps.</p>
<h3 id="heading-step-1">Step 1:</h3>
<p>Go to the Snapshots page <a target="_blank" href="https://console.cloud.google.com/compute/snapshots">here</a> and click on Create Snapshot.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687303078121/ed65c6b4-deb6-4f29-a9b2-0b4b058c365f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2">Step 2:</h3>
<p>Input the name of your snapshot, click on "source disk" to choose which VM instance you want to create a snapshot from and then click on "Create"</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687303118572/5ffb4755-e981-468a-9ff9-3f823ec36668.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3">Step 3:</h3>
<p>Once a snapshot is created, click on it to view details and then click on "Create Instance".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687303176334/cbc4c9d0-aa71-478b-9ea4-74e2a7a5e8f6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4">Step 4:</h3>
<p>Scroll down to near the bottom, under "Availability policies". Standard is selected by default, but you have to select "Spot" because that's what we want to create. You will also notice that the price for Standard is quite different from (and higher than) the price for Spot -- that's the preemptible discount. Once that's done, just click on Create.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687303202174/98c972ce-93ea-4c47-9a26-15f9313e6e8f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687303219109/03bd9567-22c9-4fa3-9d34-44f350a66444.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5">Step 5:</h3>
<p>That's it. You've successfully created a preemptible VM instance from a regular VM!</p>
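<p>If you prefer the CLI over the console, the same workflow can be sketched with <code>gcloud</code>. The names and zone below are placeholders, and you should double-check the flags against the current <code>gcloud</code> reference:</p>

```bash
# 1. Snapshot the regular VM's boot disk (names/zone are placeholders)
gcloud compute snapshots create my-snapshot \
  --source-disk=my-vm-disk --source-disk-zone=us-central1-a

# 2. Create a new disk from the snapshot
gcloud compute disks create my-spot-disk \
  --source-snapshot=my-snapshot --zone=us-central1-a

# 3. Create a Spot (preemptible) instance booting from that disk
gcloud compute instances create my-spot-vm --zone=us-central1-a \
  --disk=name=my-spot-disk,boot=yes --provisioning-model=SPOT
```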
<h3 id="heading-icymi-what-to-keep-in-mind-when-using-preemtible-vms">ICYMI: What to keep in mind when using Preemtible VMs</h3>
<ol>
<li><p><strong>Your application must be fault-tolerant:</strong> Your application must be able to handle being interrupted and restarted. If your application cannot handle being interrupted, then you should <strong>not</strong> use preemptible VMs.</p>
</li>
<li><p><strong>Your application must be stateless:</strong> Your application must not store any state on the VM. If your application stores state on the VM, then it will be lost when the VM is preempted.</p>
</li>
<li><p><strong>Your application must be able to run quickly:</strong> Your application should be able to complete its work within 24 hours, and to wrap up and save its current state within the 30-second notice period. If your application takes longer than that to run, it may be preempted before it completes.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Google Cloud preemptible VMs offer an excellent opportunity to optimize costs while leveraging the power of cloud computing. By understanding the benefits, limitations and implementing them in your applications, you can unlock significant savings and scalability. However, it's important to carefully assess the suitability of preemptible VMs for your specific use case and ensure appropriate fault tolerance measures are in place.</p>
]]></content:encoded></item><item><title><![CDATA[Google Cloud Tasks: Next-Level Task Execution for Modern Applications]]></title><description><![CDATA[Introduction
Efficient task management is vital in modern distributed and scalable cloud environments. Google Cloud Tasks offers a managed solution that simplifies the distribution and execution of tasks across various components of your application....]]></description><link>https://oluwafemiakinde.dev/google-cloud-tasks-next-level-task-execution-for-modern-applications</link><guid isPermaLink="true">https://oluwafemiakinde.dev/google-cloud-tasks-next-level-task-execution-for-modern-applications</guid><category><![CDATA[google cloud]]></category><category><![CDATA[task management]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[ScalableSolutions]]></category><dc:creator><![CDATA[Oluwafemi Akinde]]></dc:creator><pubDate>Sat, 20 May 2023 10:12:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1684577682580/de1a88dd-c76f-40e8-a723-f5e5d9a84b7b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Efficient task management is vital in modern distributed and scalable cloud environments. Google Cloud Tasks offers a managed solution that simplifies the distribution and execution of tasks across various components of your application. In this article, we will explore the key features of Google Cloud Tasks and demonstrate how to leverage them using Node.js code snippets.</p>
<h2 id="heading-what-is-google-cloud-tasks">What is Google Cloud Tasks?</h2>
<p>Google Cloud Tasks is a fully managed task distribution service that allows you to reliably enqueue and execute tasks. It provides features such as task queuing, scheduling, retries, and prioritization, making it an ideal choice for building scalable and responsive applications.</p>
<h2 id="heading-getting-started-with-google-cloud-tasks">Getting Started with Google Cloud Tasks</h2>
<p>To start using Google Cloud Tasks, follow these steps:</p>
<p><strong>Step 1: Enable the Cloud Tasks API</strong></p>
<p>Ensure that you have enabled the Cloud Tasks API in your Google Cloud project. You can do this through the Google Cloud Console or by using the <code>gcloud</code> command-line tool.</p>
<pre><code class="lang-bash">gcloud services <span class="hljs-built_in">enable</span> cloudtasks.googleapis.com
</code></pre>
<p><strong>Step 2: Create a Task Queue</strong></p>
<p>A task queue is a container for your tasks. Create a task queue by specifying a name and other optional parameters such as maximum task attempts, rate limits, and worker constraints.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684574792662/5c508210-2fe4-4a9a-92b8-044c1334be98.png" alt class="image--center mx-auto" /></p>
<p><strong>Step 3: Enqueue Tasks:</strong></p>
<p>Enqueue tasks to the task queue by specifying the request method, URL, body and any other optional parameters that fit your needs. The payload can contain any data necessary for task execution. The body must be base64-encoded because the API expects the payload as raw bytes; base64 keeps arbitrary data safe to transmit over the network.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684567835796/a689e12d-73ed-4092-af01-3f6f57c661a4.png" alt class="image--center mx-auto" /></p>
<p><strong>Step 4: Task Handler:</strong></p>
<p>Implement a task handler that processes the tasks. This could be a separate route or function that receives the tasks, extracts the payload, and performs the necessary actions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684568149753/4721903f-02fd-4e64-9dff-15ed99d40694.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684568157247/aabf24a4-7c8f-4d45-bbaa-42c30f2831b5.png" alt class="image--center mx-auto" /></p>
<p>Note that it is very important to return a 200. Any other status code indicates that the execution failed, and Cloud Tasks will keep retrying (depending on your queue configuration).</p>
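<p>That acknowledgment contract can be sketched as a small helper (the payload shape and status choices here are illustrative, not from the original handler; your route would respond with whatever this returns):</p>

```typescript
// Returns the HTTP status the task handler should respond with.
// 2xx acknowledges the task; anything else makes Cloud Tasks retry it
// according to the queue's retry configuration.
function processTask(payload: { operationType?: string }): number {
  try {
    if (!payload.operationType) {
      throw new Error("missing operationType");
    }
    // ...do the actual work for the task here...
    return 200; // success: the task is acknowledged and removed
  } catch {
    return 500; // failure: Cloud Tasks will retry the task
  }
}
```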
<p>That's it, basically.</p>
<p>If you want to get information about a task and/or delete a task, you can use the methods below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684568799374/5641e8fe-778f-4ef0-8b21-cd004fa3ba04.png" alt class="image--center mx-auto" /></p>
<p>Let's go through a simple example of how to use the Cloud Tasks functions we created. There are different use cases for Cloud Tasks (I mention them later in this article), but for the sake of simplicity, let's imagine that we have to run different batch jobs, sort of like a sequence of jobs.</p>
<p>We can have a <code>BatchJobService</code> that has functions which call the cloud task functions that we created.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { ITaskService } <span class="hljs-keyword">from</span> <span class="hljs-string">"./TaskService"</span>;

<span class="hljs-keyword">const</span> baseUrl = <span class="hljs-string">"http://whatever-your-base-url-is"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> IObject {
    [key: <span class="hljs-built_in">string</span>]: <span class="hljs-built_in">any</span>;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> BatchService {

    <span class="hljs-keyword">constructor</span> (<span class="hljs-params"><span class="hljs-keyword">private</span> taskService: ITaskService</span>) {}

    <span class="hljs-keyword">async</span> createFirstBatchTask() {
        <span class="hljs-comment">//create queue</span>
        <span class="hljs-keyword">const</span> queueName = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.taskService.createTaskQueue(<span class="hljs-string">'first-batch-queue'</span>);

        <span class="hljs-comment">//add a task to the queue you created above</span>
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.taskService.createTask(queueName, {
            taskName: <span class="hljs-string">"first-batch-task"</span>,
            url: <span class="hljs-string">`<span class="hljs-subst">${baseUrl}</span>/create-first-batch`</span>,
            data: { <span class="hljs-comment">//whatever data you want to send or pass</span>
                operationType: <span class="hljs-string">"batch"</span>,
                value: <span class="hljs-number">20</span>
            }
        });
    }

    <span class="hljs-keyword">async</span> processFirstBatchTask(data: IObject) {
        <span class="hljs-built_in">console</span>.log(data); <span class="hljs-comment">//{operationType: "batch", value:20}</span>
    }
}
</code></pre>
<p>The example above is pretty self-explanatory, so there's not much to add. The first function <code>createFirstBatchTask</code> creates a task queue (like a container), and then enqueues a task into the queue, passing the data to be sent, the URL that will process the data, and the HTTP method to use.</p>
<p>The second function <code>processFirstBatchTask</code> is the handler which processes whatever data and does whatever it wants with it.</p>
<p>The full code can be seen here: <a target="_blank" href="https://github.com/SirPhemmiey/cloud-task-tutorial">https://github.com/SirPhemmiey/cloud-task-tutorial</a></p>
<p>If you have used Google Cloud Pub/Sub before, you're probably wondering about the difference between Cloud Pub/Sub and Cloud Tasks, just like I did before I started using Cloud Tasks. The truth is, they are both powerful services provided by GCP, but they serve different purposes and have distinct characteristics.</p>
<p><em>Amongst other differences between the two, the core difference is in their message handling and invocation; implicitly and explicitly.</em></p>
<p>What does Implicit and Explicit Invocation even mean?</p>
<p><strong>Implicit</strong>: In this case, the publisher has no control over the delivery of the message. Pub/Sub aims to decouple publishers of events and subscribers to those events. Publishers do not need to know anything about their subscribers.</p>
<p><strong>Explicit</strong>: By contrast, Cloud Tasks is aimed at <strong>explicit</strong> invocation where the publisher retains full control of execution. The publisher can tell how the message should be delivered, when the message should be delivered and what to pass in the message. Full control.</p>
<p>Another benefit of Cloud Tasks is that you can pause/resume a queue using the Cloud Console or a CLI command to stop/start the processing of tasks, very similar to Google Cloud Scheduler.</p>
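<p>For example, pausing and resuming a queue from the CLI looks like this (the queue name and location are placeholders):</p>

```bash
# Stop dispatching tasks; tasks stay in the queue while it is paused
gcloud tasks queues pause my-queue --location=us-central1

# Start dispatching again
gcloud tasks queues resume my-queue --location=us-central1
```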
<p><strong>Detailed Comparison of Cloud Tasks and Pub/Sub</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1684572841867/fcbf36b6-d4b2-44dc-8e97-df20284960a0.webp" alt class="image--center mx-auto" /></p>
<h3 id="heading-advanced-features-and-use-cases-of-cloud-tasks">Advanced Features and Use Cases of Cloud Tasks</h3>
<p>Google Cloud Tasks offers several advanced features and use cases:</p>
<ol>
<li><p><strong>Task Scheduling</strong>: You can schedule tasks to be executed at specific times or intervals. Set the scheduled time when enqueuing the task, and Google Cloud Tasks ensures the task is executed accordingly. For instance, you may want to send an email 1 month after sign-up, when the trial period ends. Without Cloud Tasks, you'd typically run a cron job to check the difference between dates and send the email. <strong>But Cloud Tasks saves you this query!</strong></p>
</li>
<li><p><strong>Task Retries and Acknowledgment</strong>: Google Cloud Tasks automatically retries failed tasks based on configurable settings. Tasks can also be acknowledged upon completion, allowing you to track their status and handle any failures or retries.</p>
</li>
<li><p><strong>Ordering and Prioritization</strong>: You can control the order in which tasks are executed by specifying task priorities. Higher-priority tasks are processed before lower-priority ones, ensuring important tasks are handled promptly.</p>
</li>
<li><p><strong>Monitoring and Insights:</strong> Google Cloud Tasks provides visibility into task execution with built-in monitoring and logging. You can access metrics, logs, and error information to track the performance and health of your task processing.</p>
</li>
<li><p><strong>Point-to-Point Communication</strong>: Asynchronous calls between two microservices.</p>
</li>
<li><p><strong>Control Traffic</strong>: You can control the dispatch rate so that worker scalability stays under control, e.g. pushing asynchronous image-processing jobs, or calling an API that enforces a maximum request rate.</p>
</li>
</ol>
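<p>The scheduling feature above comes down to attaching a <code>scheduleTime</code> when enqueuing the task. A hedged sketch (the field shape follows the Cloud Tasks API's epoch-seconds timestamp; the helper name and the 30-day example are mine):</p>

```typescript
// Builds a Cloud Tasks scheduleTime a given number of days in the future,
// expressed as seconds since the Unix epoch.
function scheduleTimeInDays(days: number, now: Date = new Date()): { seconds: number } {
  return { seconds: Math.floor(now.getTime() / 1000) + days * 24 * 60 * 60 };
}

// Attached to the task when enqueuing, e.g.:
// const task = { httpRequest: { /* ... */ }, scheduleTime: scheduleTimeInDays(30) };
```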
<h2 id="heading-limitations-of-google-cloud-tasks">Limitations of Google Cloud Tasks</h2>
<p>As much as Google Cloud Tasks helps with efficient task management in the cloud, it does have limitations, some of which I dislike and wish would be removed in the near future. There are quite a few, but I'll highlight the most important ones to know and keep in mind:</p>
<ol>
<li><p><strong>Limited task payload size</strong>: Google Cloud Tasks imposes a limit on the size of the task payload, which is currently set at 1MB. So, if your tasks require larger payloads, you may need to consider alternative solutions or split the payload across multiple tasks.</p>
</li>
<li><p><strong>Task retention period</strong>: Tasks in Google Cloud Tasks have a limited retention period, which is currently set at 31 days. This means that any task added to a queue must be executed within 31 days. If a task is not processed within this period, it will be automatically deleted. So, you need to ensure your tasks are processed in a timely manner to avoid losing any important data.</p>
</li>
<li><p><strong>Task execution time limits</strong>: Google Cloud Tasks imposes a maximum execution time limit for tasks, which is currently set at 10 minutes. If your tasks require longer execution times, you'll need to consider other mechanisms or split the work into multiple tasks.</p>
</li>
<li><p><strong>Queue Recreation</strong>: If you delete a queue, you must wait 7 days before creating a queue with the same name. This is one of the limitations I dislike, because it forces me to think carefully about naming my queues.</p>
</li>
<li><p><strong>Queue dispatch rate</strong>: This refers to the maximum rate at which tasks can be dispatched from a queue, which is 500 tasks per second per queue. If you need to dispatch more than that, it's best to spread the load across multiple queues.</p>
</li>
<li><p><strong>Task de-duplication window</strong>: Although you can create multiple tasks with different names in a queue, once a task is deleted you'll have to wait about 1 hour before reusing the same task name.</p>
</li>
<li><p><strong>Maximum schedule time for a task:</strong> This is the maximum amount of time in the future that a task can be scheduled. If you try to schedule a task to run more than 30 days from the current date, it will throw an error. This is arguably the limitation I dislike the most.</p>
<p> It's important to consider these limitations when evaluating Google Cloud Tasks for your specific use case. While it is a powerful task queuing service, understanding its constraints will help you make informed decisions and plan accordingly for your application requirements.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>    Google Cloud Tasks simplifies the management of distributed tasks in your applications. Its powerful features, such as task queuing, scheduling, retries, and prioritization, make it an excellent choice for building scalable and reliable systems. In this article, we covered the basics of using Google Cloud Tasks and demonstrated how to create task queues, enqueue tasks, and handle them using a task handler in Node.js. We talked about advanced features, and use cases of Cloud Tasks to help you make an informed decision. We also talked about the differences between Cloud Pub/Sub and Cloud Tasks. By leveraging Google Cloud Tasks, you can focus on your application's business logic while relying on a fully managed service to handle task distribution and execution efficiently.</p>
<h2 id="heading-reference">Reference</h2>
<p>    <a target="_blank" href="https://medium.com/google-cloud/cloud-tasks-or-pub-sub-8dcca67e2f7a">https://medium.com/google-cloud/cloud-tasks-or-pub-sub-8dcca67e2f7a</a></p>
]]></content:encoded></item><item><title><![CDATA[Simplify Your Redis Deployment on GCP with Ansible]]></title><description><![CDATA[In this article, we will learn how to install Redis on a GCP VM instance using Ansible.
Prerequisites:

A GCP account with a project and a VM instance.

Ansible is installed on your local machine.

An SSH key pair to access the VM instance.

You must...]]></description><link>https://oluwafemiakinde.dev/simplify-your-redis-deployment-on-gcp-with-ansible</link><guid isPermaLink="true">https://oluwafemiakinde.dev/simplify-your-redis-deployment-on-gcp-with-ansible</guid><category><![CDATA[ansible]]></category><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Oluwafemi Akinde]]></dc:creator><pubDate>Mon, 01 May 2023 12:37:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1683153623311/5499276f-a0c3-471b-9f60-1ac8572a559b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, we will learn how to install Redis on a GCP VM instance using Ansible.</p>
<h3 id="heading-prerequisites">Prerequisites:</h3>
<ul>
<li><p>A GCP account with a project and a VM instance.</p>
</li>
<li><p>Ansible is installed on your local machine.</p>
</li>
<li><p>An SSH key pair to access the VM instance.</p>
</li>
<li><p>You must have followed my previous article <a target="_blank" href="https://oluwafemiakinde.dev/streamlining-infrastructure-management-provisioning-google-cloud-vms-with-ansible">here</a> because you will be needing to modify the playbook that provisions a VM.</p>
</li>
</ul>
<h3 id="heading-step-1"><strong>Step 1</strong>:</h3>
<p>Create an inventory file named <code>inventory</code> and add the IP address or hostname of the VM instance you want to install Redis on. For example:</p>
<pre><code class="lang-bash">[redis]
&lt;ip address&gt; or &lt;hostname&gt;
</code></pre>
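<p>Optionally, you can also set the connection details directly in the inventory so you don't have to pass them on the command line for every run. This is a sketch with placeholder values (<code>ansible_user</code> and <code>ansible_ssh_private_key_file</code> are standard Ansible inventory variables):</p>
<pre><code class="lang-bash">[redis]
&lt;ip address&gt; ansible_user=&lt;ssh-username&gt; ansible_ssh_private_key_file=/path/to/ssh/key
</code></pre>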
<h3 id="heading-step-2"><strong>Step 2</strong>:</h3>
<p>Create a playbook file named <code>redis-playbook.yml</code> and add the following tasks:</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Redis</span> <span class="hljs-string">Installation</span>
  <span class="hljs-attr">hosts:</span> <span class="hljs-string">redis</span>
  <span class="hljs-attr">become:</span> <span class="hljs-literal">true</span>

  <span class="hljs-attr">tasks:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Update</span> <span class="hljs-string">package</span> <span class="hljs-string">repositories</span>
      <span class="hljs-attr">yum:</span>
       <span class="hljs-attr">update_cache:</span> <span class="hljs-literal">yes</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">Redis</span>
      <span class="hljs-attr">yum:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">redis</span>
        <span class="hljs-attr">state:</span> <span class="hljs-string">present</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Start</span> <span class="hljs-string">Redis</span> <span class="hljs-string">service</span>
      <span class="hljs-attr">systemd:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">redis</span>
        <span class="hljs-attr">state:</span> <span class="hljs-string">started</span>
        <span class="hljs-attr">enabled:</span> <span class="hljs-literal">yes</span>
</code></pre>
<p>This playbook uses the <code>yum</code> module to update the package repositories on the target system and it also installs the Redis package. The <code>state</code> parameter is set to <code>present</code> to ensure that the package is installed if it is not already present.</p>
<p>After that, the Redis service is started with the <code>systemd</code> module.</p>
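<p>Optionally, you can append one more task to the playbook (a sketch, not part of the original playbook) that confirms Redis answers locally right after installation; a healthy server replies <code>PONG</code>:</p>
<pre><code class="lang-yaml">    - name: Verify Redis responds locally
      command: redis-cli ping
      register: redis_ping
      changed_when: false
      failed_when: "'PONG' not in redis_ping.stdout"
</code></pre>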
<h3 id="heading-step-3"><strong>Step 3</strong>:</h3>
<p>Run the playbook using the following command:</p>
<pre><code class="lang-bash">ansible-playbook redis-playbook.yml -i inventory --private-key=/path/to/ssh/key
</code></pre>
<p>This command runs the <code>redis-playbook.yml</code> playbook on the hosts specified in the <code>inventory</code> file and uses the SSH key specified in <code>--private-key</code> to access the VM instance.</p>
<h3 id="heading-step-4"><strong>Step 4:</strong></h3>
<p>Verify that Redis is installed and running by connecting to the VM instance using SSH and running the following command:</p>
<pre><code class="lang-bash">redis-cli ping
</code></pre>
<p><strong>TIP</strong>: You can use this command to SSH into your instance:</p>
<pre><code class="lang-bash">ssh &lt;username&gt;@&lt;external-ip-address&gt; -i /path/to/private/key
</code></pre>
<p>If Redis is running, the command should return <code>PONG</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682859591306/68fa0c18-083f-4310-843b-7516247d65cf.png" alt class="image--center mx-auto" /></p>
<p>Now that we've confirmed that Redis is running in the VM instance, we need to connect to it from outside the instance. Right now, any such attempt will fail with an error (most likely a timeout). This means we need to enable remote access in Redis and update our firewall to accept TCP connections on port <code>6379</code>.</p>
<p>You first need to install the <code>community.general</code> and <code>community.google</code> Ansible collections, which provide the firewall and GCE modules used below:</p>
<pre><code class="lang-bash">ansible-galaxy collection install community.general community.google
</code></pre>
<h3 id="heading-step-5"><strong>Step 5:</strong></h3>
<p>You will need to add <code>apache-libcloud</code> to the list of requirements in <code>requirements.yml</code></p>
<pre><code class="lang-yaml"> <span class="hljs-attr">pip_package_requirements:</span>
       <span class="hljs-string">...</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"apache-libcloud"</span>
</code></pre>
<p>Once that is successful, copy the following tasks into your Redis playbook:</p>
<pre><code class="lang-yaml">- name: Configure Redis
  become: yes
  lineinfile:
    path: /etc/redis/redis.conf # or /etc/redis.conf if you get an error that /etc/redis/redis.conf does not exist
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  with_items:
    - { regexp: "^bind .*", line: "bind 0.0.0.0" }
    - { regexp: "^port .*", line: "port 6379" }
    - { regexp: "^# requirepass .*", line: "requirepass your_password_here" }
  notify: Restart Redis service

- name: Allow incoming connections on port 6379
  community.general.ufw:
    rule: allow
    port: 6379
    proto: tcp

- name: Reload firewall rules
  community.general.ufw:
    state: enabled
</code></pre>
<p>The tasks above update the Redis configuration file and open port <code>6379</code> in the VM's local firewall; you'll notice that we also set a password. Remember to replace <code>your_password_here</code> with your preferred password.</p>
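<p>Note that the <code>notify: Restart Redis service</code> line requires a matching handler. If you are writing the playbook from scratch rather than using the full gist, a minimal handler section (assuming the service unit is named <code>redis</code>, as in our install task) would look like this:</p>
<pre><code class="lang-yaml">  handlers:
    - name: Restart Redis service
      systemd:
        name: redis
        state: restarted
</code></pre>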
<p>This is the full Ansible playbook for Redis configuration:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="8a7c7949468a5f12de1f0e5b7b272ff7"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/SirPhemmiey/8a7c7949468a5f12de1f0e5b7b272ff7" class="embed-card">https://gist.github.com/SirPhemmiey/8a7c7949468a5f12de1f0e5b7b272ff7</a></div><p> </p>
<h3 id="heading-step-6"><strong>Step 6:</strong></h3>
<p>We will also need to create a GCP compute firewall by adding the following tasks in the playbook we created in my previous article <a target="_blank" href="https://oluwafemiakinde.dev/streamlining-infrastructure-management-provisioning-google-cloud-vms-with-ansible">here</a>.</p>
<p>Copy the following tasks into your playbook. Be careful of indentation though :)</p>
<pre><code class="lang-yaml">    - name: Create firewall policy for Redis
      gcp_compute_firewall:
        name: "{{ firewall_policy_name }}"
        priority: 1000
        direction: "INGRESS"
        project: "{{ gcp_project }}"
        service_account_file: "{{ gcp_cred_file }}"
        auth_kind: "{{ gcp_cred_kind }}"
        allowed:
          - ip_protocol: "tcp"
            ports:
              - 6379
        target_tags:
          - "redis"
        state: present
      register: firewall_policy_result

    - name: Print firewall_policy_result
      debug:
        var: firewall_policy_result

    - name: Add firewall policy to Redis instance
      community.google.gce_tag:
        instance_name: "{{ instance_name }}"
        tags: redis
        zone: "{{ zone }}"
        project_id: "{{ gcp_project }}"
        state: present
</code></pre>
<p>In the tasks above, we're creating a GCP compute firewall policy and we're assigning the policy to our Redis instance.</p>
<p>This is the full Ansible playbook for VM provisioning and firewall policy creation:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="ed299b4afd6c95a294e89be2ba9347f5"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/SirPhemmiey/ed299b4afd6c95a294e89be2ba9347f5" class="embed-card">https://gist.github.com/SirPhemmiey/ed299b4afd6c95a294e89be2ba9347f5</a></div><p> </p>
<p>The content of requirements.yml is this:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="ac4dbb1d61acacae7c6a16dc16be71c7"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/SirPhemmiey/ac4dbb1d61acacae7c6a16dc16be71c7" class="embed-card">https://gist.github.com/SirPhemmiey/ac4dbb1d61acacae7c6a16dc16be71c7</a></div><p> </p>
<p><em>It's important to note that you'll have to grant your service account permission to create/manage a firewall by assigning the</em> <strong><em>Compute Network Admin</em></strong> <em>role in GCP's IAM page</em> <a target="_blank" href="https://console.cloud.google.com/iam-admin/iam?project=ajar-dev"><em>here</em></a>.</p>
<h3 id="heading-step-7"><strong>Step 7:</strong></h3>
<p>Verify remote access by connecting to the Redis instance from your local machine:</p>
<pre><code class="lang-bash">redis-cli -h &lt;ip&gt; -p &lt;port&gt; -a &lt;password&gt; ping
</code></pre>
<p>And you should be able to connect and get a <code>PONG</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1682976288362/0be4cace-690d-4776-bd88-fcab02330ce6.png" alt class="image--center mx-auto" /></p>
<p>Congratulations! You have successfully installed Redis on a GCP VM instance using Ansible. You can now use Redis as a database, cache, or message broker in your application.</p>
<p>I know this was a lot to take in, and that's expected the first time. Trust me, this can save you and anyone else hours of time, because with just this file you can repeat your tasks and deploy the same configuration to multiple VM instances at once.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this article, we learned how to install Redis on a GCP VM instance using Ansible. We also learned how to configure Redis by updating its configuration file, allowing incoming connections, and creating firewall rules and policies. Ansible provides a simple and efficient way to automate the deployment and configuration of Redis on GCP VM instances.</p>
<p>Thank you for reading my article on <strong>Simplifying Your Redis Deployment with Ansible</strong>! Stay tuned for my upcoming articles on adding monitoring to your Redis server and the many advantages of creating a Redis cluster using Ansible. Don't miss out on the benefits of high availability, scalability, and fault tolerance that a Redis cluster can provide for your applications.</p>
]]></content:encoded></item><item><title><![CDATA[Streamlining Infrastructure Management: Provisioning Google Cloud VMs with Ansible]]></title><description><![CDATA[As more and more organizations move their workloads to the cloud, managing infrastructure becomes an increasingly important task. Infrastructure management involves the provisioning, configuration, and maintenance of computing resources like virtual ...]]></description><link>https://oluwafemiakinde.dev/streamlining-infrastructure-management-provisioning-google-cloud-vms-with-ansible</link><guid isPermaLink="true">https://oluwafemiakinde.dev/streamlining-infrastructure-management-provisioning-google-cloud-vms-with-ansible</guid><category><![CDATA[Infrastructure management]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[Ansible automation ]]></category><dc:creator><![CDATA[Oluwafemi Akinde]]></dc:creator><pubDate>Tue, 18 Apr 2023 10:46:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1681813570867/ede80b0f-825a-4485-93d9-cf346f5a6d99.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As more and more organizations move their workloads to the cloud, managing infrastructure becomes an increasingly important task. Infrastructure management involves the provisioning, configuration, and maintenance of computing resources like virtual machines (VMs) in the cloud. However, managing infrastructure can be a complex and time-consuming process, particularly when it comes to managing large-scale deployments. That’s where Ansible comes in. In this article, we’ll explore how Ansible can be used to streamline infrastructure management by provisioning Google Cloud VMs.</p>
<h3 id="heading-what-is-ansible">What is Ansible?</h3>
<p>Ansible is an open-source automation tool that helps with configuration management, application deployment, and task automation. It uses a simple, human-readable language to describe automation tasks and is easy to use even for those without a programming background. Ansible is agentless, which means that it doesn’t require software to be installed on the target host to manage it.</p>
<h3 id="heading-why-use-ansible-for-infrastructure-management">Why use Ansible for infrastructure management?</h3>
<p>Ansible can help streamline infrastructure management in several ways:</p>
<ol>
<li><p><strong>Consistency</strong>: Ansible ensures that infrastructure is provisioned and configured in a consistent manner across all hosts. This can help reduce errors and make troubleshooting easier.</p>
</li>
<li><p><strong>Scalability</strong>: Ansible can manage large-scale deployments with ease, making it an ideal choice for organizations with a significant number of hosts.</p>
</li>
<li><p><strong>Reusability</strong>: Ansible’s modules and playbooks can be reused across different projects and environments, making it a valuable asset for organizations that require flexibility and agility.</p>
</li>
<li><p><strong>Time-saving</strong>: Ansible’s automation capabilities can significantly reduce the time and effort required to manage infrastructure, freeing up IT teams to focus on more strategic initiatives.</p>
</li>
</ol>
<p>Another benefit of using Ansible for infrastructure management is the ability to use it across different cloud providers and even on-premises infrastructure. This means that you can use the same automation tool to manage infrastructure across different environments, reducing the need for specialized skills and tools. </p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>To follow this article and use Ansible to provision Google Cloud VMs, you should have some basic knowledge of the following: </p>
<ol>
<li><p><strong>Linux</strong>: Ansible is primarily a Linux automation tool, so you should have some familiarity with Linux commands, file systems, and permissions.</p>
</li>
<li><p><strong>Cloud Computing</strong>: You should have a basic understanding of cloud computing concepts, such as virtual machines, cloud providers, and cloud infrastructure.</p>
</li>
<li><p><strong>Google Cloud Platform (GCP)</strong>: You should have a GCP account and some familiarity with the GCP console, including creating and managing VMs.</p>
</li>
<li><p><strong>Ansible</strong>: You should have a basic understanding of Ansible concepts, such as playbooks, modules, variables, and tasks.</p>
</li>
</ol>
<p>If you are not familiar with any of these concepts, you may want to spend some time learning about them before attempting to follow this article. Many online resources are available for learning about Linux, cloud computing, GCP, and Ansible.</p>
<p>Fret not though, we won’t be going very deep into them, and I’ll be guiding you through the most important concepts. </p>
<blockquote>
<p>Ansible can provision and automate anything on GCP and other cloud providers. It’s not limited to provisioning VMs on GCP only. </p>
</blockquote>
<p>Long story short, let’s dive straight into what you’re here for!</p>
<h3 id="heading-provisioning-gcp-vms-with-ansible">Provisioning GCP VMs with Ansible</h3>
<p>To provision GCP VMs with Ansible, you’ll first need to install ansible on your machine by following the instructions <a target="_blank" href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html">here</a>. We will also need to install the Ansible GCP module. Check <a target="_blank" href="https://docs.ansible.com/ansible/latest/collections/google/cloud/">here</a> to see a list of GCP collections. This module allows you to interact with the GCP API and perform tasks such as creating, starting, stopping, and deleting VMs.</p>
<p>Instead of just installing a single module, it is better to install the whole Google Cloud collection so you don't run into errors about missing modules. Here's the command:</p>
<pre><code class="lang-bash"># install ansible if you don't have it
pip install ansible

# install the Google Cloud ansible collection
ansible-galaxy collection install google.cloud
</code></pre>
<p>Before you can use the Ansible GCP module though, you’ll need to set up a service account and download the service account key in JSON format (you’ll need the path to it later).</p>
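<p>For reference, the playbook reads the project and credentials from variables. A minimal <code>vars</code> section might look like the sketch below; the values shown are placeholders you must replace with your own:</p>
<pre><code class="lang-yaml">  vars:
    gcp_project: my-project-id                         # your GCP project ID
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /path/to/service-account-key.json   # the key you downloaded
    zone: us-central1-a
    region: us-central1
    instance_name: my-first-vm
</code></pre>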
<p>Now you’re ready to create a playbook. A playbook is a file that describes a set of tasks to be executed on a group of hosts. In this case, we’ll create a playbook that provisions a Google Cloud VM. Copy and paste the content of this file to your machine. The filename is <code>playbook.yml</code></p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="81f21d3e494c1ab4d84854794f1856e3"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/SirPhemmiey/81f21d3e494c1ab4d84854794f1856e3" class="embed-card">https://gist.github.com/SirPhemmiey/81f21d3e494c1ab4d84854794f1856e3</a></div><p> </p>
<p>This playbook creates a VM instance with the specified image, machine type, disk size and type, network, and tags. </p>
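<p>If you prefer not to open the gist right away, the heart of such a playbook is a <code>gcp_compute_instance</code> task along these lines. This is a trimmed sketch with assumed values (machine type, source image), not the exact gist contents:</p>
<pre><code class="lang-yaml">    - name: Create a VM instance
      gcp_compute_instance:
        name: "{{ instance_name }}"
        machine_type: e2-medium
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        disks:
          - auto_delete: true
            boot: true
            initialize_params:
              source_image: projects/debian-cloud/global/images/family/debian-11
        network_interfaces:
          - access_configs:
              - name: External NAT
                type: ONE_TO_ONE_NAT
        state: present
</code></pre>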
<p>You can execute the playbook with the <code>ansible-playbook</code> command like this:</p>
<pre><code class="lang-bash">ansible-playbook playbook.yml
</code></pre>
<p>It’s that simple! You should get a response in your terminal like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681814289294/cf7b40d7-a14c-4aae-9dd8-13853a7f87fa.png" alt class="image--center mx-auto" /></p>
<p>And when I go to the VM instance in the google cloud console <a target="_blank" href="https://console.cloud.google.com/compute/instances">here</a>, I can see it created and running! </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681814318331/9796c834-eba7-4b5a-a32a-8b7e03e4d871.png" alt class="image--center mx-auto" /></p>
<p>That’s simple, right? I bet it is! You can see that it took us less than a minute to do this. It may interest you to know that you can run this playbook many times and get the same result, which means the process is <strong>idempotent</strong>. Instead of hard-coding the values, you can also pass them as arguments to the playbook; you may want to look into the documentation on how to do that.</p>
<p>Let’s talk briefly about some lines in the file:</p>
<p><strong>Line 6</strong>: This is how you set local variables in an Ansible yml file and they can be accessed throughout the file. </p>
<p><strong>Line 7–8</strong>: You’ll need to input the right credentials. </p>
<p><strong>Line 11</strong>: Instance name can be anything. But ideally, it’s recommended to make it meaningful. </p>
<p><strong>Line 12–13</strong>: Zone and region can be any acceptable zone and region respectively. To see a list of all available zones and regions, run the command:</p>
<pre><code class="lang-bash"># list available regions
gcloud compute regions list --project=&lt;project-id&gt;

# list available zones
gcloud compute zones list --project=&lt;project-id&gt;
</code></pre>
<p><strong>Line 14–15</strong>: That’s my preferred machine type and machine image. To see the available list of images, run the command:</p>
<pre><code class="lang-bash">gcloud compute images list --uri --project=&lt;project-id&gt;
</code></pre>
<p>My Stackoverflow answer <a target="_blank" href="https://stackoverflow.com/questions/54261944/gcp-api-format-of-disk-image-is-incorrect/76025103#76025103">here</a> might help you. <em>Please upvote if it helped you</em> 😢. </p>
<p><strong>Line 25</strong>: Using the <code>register</code> key, we’re saving the result of the task into that variable. This is useful if we want to perform another task based on the result of a previous task. </p>
<p><strong>Line 48</strong>: Finally, we use the <code>debug</code> module to print the VM's IP address. You can see how we used <code>gcp_ip.address</code> to show the address. </p>
<p><strong>NOTE</strong>: If by any chance you get an error that a package or library doesn’t exist, you can just run the command:</p>
<pre><code class="lang-bash">pip3 install &lt;package name&gt;
</code></pre>
<p>I know this is just provisioning a VM and doing nothing with it. In my upcoming articles, I’ll take you through installing and configuring additional things on the VM to make it useful all with Ansible. Stay tuned.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Automating infrastructure management tasks with Ansible can greatly improve efficiency and reduce errors. Provisioning and managing GCP VMs with Ansible is a powerful way to streamline infrastructure management and ensure that your systems are always configured to your specifications. Whether you’re deploying a new application, scaling up an existing system, or just need to make updates to your infrastructure, Ansible provides a simple and powerful way to automate these tasks. By following the steps outlined in this article, you can start using Ansible to provision Google Cloud VMs in no time.</p>
<p>Of course, this is just the beginning of what you can do with Ansible and Google Cloud. Ansible has a wide range of modules for managing different aspects of cloud infrastructure, from networking to security to storage. You can use Ansible to automate the deployment of applications, configure load balancers, and much more.</p>
<p>If you liked this article, please leave a clap or even a comment and don’t forget to follow me to get updated when I publish another one. Thanks!</p>
]]></content:encoded></item><item><title><![CDATA[Deploying to Google Cloud Run with Github Actions: A Step-by-Step Guide]]></title><description><![CDATA[What is Google Cloud Run?

Google Cloud Run is a serverless container platform that enables developers to run applications in a fully managed environment. It allows you to deploy stateless containers on a pay-as-you-go basis and auto-scales your appl...]]></description><link>https://oluwafemiakinde.dev/deploying-to-google-cloud-run-with-github-actions-a-step-by-step-guide</link><guid isPermaLink="true">https://oluwafemiakinde.dev/deploying-to-google-cloud-run-with-github-actions-a-step-by-step-guide</guid><category><![CDATA[GitHub]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[#cloudrun]]></category><dc:creator><![CDATA[Oluwafemi Akinde]]></dc:creator><pubDate>Sun, 16 Apr 2023 14:53:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1681656708892/dc83bb60-c859-41a2-af71-706bf39dca09.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-is-google-cloud-run"><strong>What is Google Cloud Run?</strong></h3>
<blockquote>
<p>Google Cloud Run is a serverless container platform that enables developers to run applications in a fully managed environment. It allows you to deploy stateless containers on a pay-as-you-go basis and auto-scales your application based on incoming traffic.</p>
</blockquote>
<h3 id="heading-what-is-github-actions"><strong>What is Github Actions?</strong></h3>
<blockquote>
<p>GitHub Actions is a powerful workflow automation tool that allows developers to automate their development workflows. It integrates well with Google Cloud Run, making it easy to deploy applications from GitHub to Cloud Run.</p>
</blockquote>
<p>In this article, I will be deploying a containerized web application to Google Cloud Run using GitHub Actions.</p>
<p><strong>NOTE: If you want to use Gitlab CI/CD instead of GitHub Actions, see my other article</strong> <a target="_blank" href="https://medium.com/@oluwafemiakinde/deploying-to-google-cloud-run-with-gitlab-ci-cd-a-step-by-step-guide-2c617e4ea2d4"><strong>here</strong></a><strong>.</strong></p>
<p>Let’s continue….</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Before we dive into the tutorial, make sure you have the following:</p>
<ul>
<li><p>A Google Cloud Platform account</p>
</li>
<li><p>A GitHub account</p>
</li>
<li><p>Docker installed on your local machine</p>
</li>
</ul>
<h3 id="heading-step-1-set-up-your-project-on-google-cloud">Step 1: Set up your project on Google Cloud</h3>
<p>Before we can deploy our application to Google Cloud Run, we need to create a new project on Google Cloud Platform and enable the Cloud Run API. Here’s how to do it:</p>
<ol>
<li><p>Go to the <a target="_blank" href="https://console.cloud.google.com/">Google Cloud Console</a>.</p>
</li>
<li><p>Click on the project dropdown menu and select “New Project”.</p>
</li>
<li><p>Give your project a name and click “Create”.</p>
</li>
<li><p>Once your project is created, click on the “Activate Cloud Shell” button on the top right corner of the page.</p>
</li>
<li><p>Run the following command to enable the Cloud Run API:</p>
</li>
</ol>
<pre><code class="lang-bash">gcloud services <span class="hljs-built_in">enable</span> run.googleapis.com
</code></pre>
<p><strong>An alternative way to enable Cloud Run API</strong></p>
<ol>
<li><p>Go to the Google Cloud Console and select your project.</p>
</li>
<li><p>In the left navigation menu, click on “APIs &amp; Services” and then “Dashboard.”</p>
</li>
<li><p>Click on the “+ ENABLE APIS AND SERVICES” button.</p>
</li>
<li><p>Search for “Cloud Run API” and click on it.</p>
</li>
<li><p>Click the “Enable” button.</p>
</li>
</ol>
<h3 id="heading-step-2-create-a-dockerfile">Step 2: Create a Dockerfile</h3>
<p>Next, we need to create a Dockerfile for our application. This file will contain instructions on how to build a container image for our application.</p>
<p>Here’s an example Dockerfile for a Node.js application:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Use the official Node.js image</span>
FROM node:14-alpine

<span class="hljs-comment"># Set the working directory</span>
WORKDIR /app

<span class="hljs-comment"># Copy the package.json and package-lock.json files</span>
COPY package*.json ./

<span class="hljs-comment"># Install the dependencies</span>
RUN npm install --production

<span class="hljs-comment"># Copy the rest of the application code</span>
COPY . .

<span class="hljs-comment"># Expose port 8080</span>
EXPOSE 8080

<span class="hljs-comment"># Start the application</span>
CMD [<span class="hljs-string">"npm"</span>, <span class="hljs-string">"start"</span>]
</code></pre>
<p>Save this file in the root directory of your project.</p>
<h3 id="heading-step-3-build-and-test-the-container-locally">Step 3: Build and test the container locally</h3>
<p>Before deploying our container to Google Cloud Run, let’s build and test it locally. Run the following command to build the container image:</p>
<pre><code class="lang-bash">docker build -t &lt;your-image-name&gt; .
</code></pre>
<p>Replace <code>&lt;your-image-name&gt;</code> with a name for your container image. Once the build is complete, run the container with the following command:</p>
<pre><code class="lang-bash">docker run -p 8080:8080 &lt;your-image-name&gt;
</code></pre>
<p>This will start the container and map port 8080 on your local machine to port 8080 inside the container. Open your web browser and go to <a target="_blank" href="http://localhost:8080"><code>http://localhost:8080</code></a> to test your application.</p>
<h3 id="heading-step-4-set-up-github-actions">Step 4: Set up GitHub Actions</h3>
<p>GitHub Actions is a powerful tool that allows you to automate your software development workflows. In this step, we will be creating a GitHub Actions workflow to build and deploy our container to Google Cloud Run.</p>
<ol>
<li><p>In your GitHub repository, click on the “Actions” tab.</p>
</li>
<li><p>Click on the “Set up a workflow yourself” button.</p>
</li>
<li><p>Replace the contents of the file with the following code:</p>
</li>
</ol>
<pre><code class="lang-bash">name: <span class="hljs-string">"Deploy to Google Cloud Run"</span>

on:
  push:
    branches:
      - main

<span class="hljs-built_in">jobs</span>:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Google Cloud SDK
        uses: google-github-actions/setup-gcloud@master
        with:
          project_id: &lt;your-project-id&gt;
          service_account_key: <span class="hljs-variable">${{ secrets.GCP_SA_KEY }</span>}
          export_default_credentials: <span class="hljs-literal">true</span>

      - name: Configure docker <span class="hljs-keyword">for</span> GCP
        run: gcloud auth configure-docker

      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: <span class="hljs-literal">true</span>
          tags: gcr.io/&lt;your-project-id&gt;/&lt;your-image-name&gt;:latest
          build-args: |
            HTTP_PORT=8080

      - name: Deploy to Cloud Run
        uses: google-github-actions/deploy-cloudrun@main
        with:
          image: gcr.io/&lt;your-project-id&gt;/&lt;your-image-name&gt;:latest
          service: &lt;your-service-name&gt;
          region: &lt;your-region&gt;
          platform: managed
          allow-unauthenticated: <span class="hljs-literal">true</span>
          env_vars: |
              FOO=bar
              ZIP=zap
</code></pre>
<p>Replace <code>&lt;your-project-id&gt;</code>, <code>&lt;your-image-name&gt;</code>, <code>&lt;your-service-name&gt;</code>, and <code>&lt;your-region&gt;</code> with your own values.</p>
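<p>For clarity, here is how the fully qualified image tag used in the workflow is put together, using hypothetical placeholder values (your project ID and image name will differ):</p>
<pre><code class="lang-bash"># Hypothetical values for illustration only
PROJECT_ID="my-project"
IMAGE_NAME="my-app"

# Container Registry tags follow gcr.io/&lt;project&gt;/&lt;image&gt;:&lt;tag&gt;
TAG="gcr.io/${PROJECT_ID}/${IMAGE_NAME}:latest"
echo "$TAG"   # gcr.io/my-project/my-app:latest
</code></pre>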
<p>See the <a target="_blank" href="https://github.com/google-github-actions/deploy-cloudrun">deploy-cloudrun action documentation</a> for the full list of supported inputs.</p>
<p>4. Click on the “Start commit” button and commit the changes to the repository.</p>
<h3 id="heading-step-5-deploy-to-google-cloud-run">Step 5: Deploy to Google Cloud Run</h3>
<p>Once the GitHub Actions workflow completes successfully, your container should be deployed to Google Cloud Run. To verify that your application is running, go to the Google Cloud Console, select your project, and click on “Cloud Run” in the sidebar. You should see your service listed there.</p>
<p>Click on the service to view its details, including the URL for your application. Open this URL in your web browser to test your deployed application.</p>
<p>Congratulations! You have successfully deployed a containerized web application to Google Cloud Run using GitHub Actions.</p>
<p>If you liked this article, please leave a clap or even a comment and don’t forget to follow me to get updated when I publish another one. Thanks!</p>
]]></content:encoded></item><item><title><![CDATA[Deploying to Google Cloud Run with GitLab CI/CD: A Step-by-Step Guide]]></title><description><![CDATA[Google Cloud Run is a powerful platform that allows developers to run stateless HTTP containers without worrying about the underlying infrastructure. With GitLab CI/CD, you can automate your build, test, and deployment process to Cloud Run, making it...]]></description><link>https://oluwafemiakinde.dev/deploying-to-google-cloud-run-with-gitlab-cicd-a-step-by-step-guide</link><guid isPermaLink="true">https://oluwafemiakinde.dev/deploying-to-google-cloud-run-with-gitlab-cicd-a-step-by-step-guide</guid><category><![CDATA[Devops]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[GitLab]]></category><dc:creator><![CDATA[Oluwafemi Akinde]]></dc:creator><pubDate>Sat, 15 Apr 2023 14:23:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1681568520216/5e7a71ef-a2f2-41f8-b802-dc58eabf7a98.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Google Cloud Run is a powerful platform that allows developers to run stateless HTTP containers without worrying about the underlying infrastructure. With GitLab CI/CD, you can automate your build, test, and deployment process to Cloud Run, making it a perfect match for modern application development.</p>
<p>In this article, I will walk you through the process of setting up a GitLab CI/CD pipeline to deploy your code to Google Cloud Run.</p>
<p><strong>NOTE: If you want to use Github Actions instead of Gitlab CI/CD, see my other article</strong> <a target="_blank" href="https://medium.com/@oluwafemiakinde/deploying-containerized-web-apps-to-google-cloud-run-using-github-actions-777590c8bda5"><strong>here</strong></a><strong>.</strong></p>
<p>Let’s continue…</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Before we get started, make sure that you have the following:</p>
<ul>
<li><p>A Google Cloud account</p>
</li>
<li><p>A GitLab account with a repository containing your code</p>
</li>
<li><p>The Google Cloud SDK installed on your local machine</p>
</li>
<li><p>Docker installed on your local machine</p>
</li>
</ul>
<h3 id="heading-step-1-create-a-google-cloud-run-service">Step 1: Create a Google Cloud Run Service</h3>
<p>First, we need to create a Google Cloud Run service that will host our application. To do this, follow these steps:</p>
<ol>
<li><p>Open the Google Cloud Console and navigate to the Cloud Run page.</p>
</li>
<li><p>Click the “+ Create Service” button.</p>
</li>
<li><p>Choose your preferred region and select the “Deploy one revision from an existing container image” option.</p>
</li>
<li><p>Enter a name for your service and select the container image you want to deploy.</p>
</li>
<li><p>Click “Create” to create your Cloud Run service.</p>
</li>
</ol>
<h3 id="heading-step-2-authenticate-the-google-cloud-sdk">Step 2: Authenticate the Google Cloud SDK</h3>
<p>To deploy your code to Cloud Run, you need to authenticate the Google Cloud SDK on your local machine. To do this, follow these steps:</p>
<ol>
<li>Open your terminal and run the following command:</li>
</ol>
<p><code>gcloud auth login</code></p>
<p>2. Follow the prompts to log in to your Google Cloud account.</p>
<h3 id="heading-step-3-create-a-gitlab-cicd-pipeline">Step 3: Create a GitLab CI/CD Pipeline</h3>
<p>Now that we have our Cloud Run service set up and authenticated the Google Cloud SDK, we can create a GitLab CI/CD pipeline to automate our deployment process.</p>
<ol>
<li><p>In your GitLab repository, create a new file called <code>.gitlab-ci.yml</code>.</p>
</li>
<li><p>Add the following code to the file:</p>
<pre><code class="lang-bash"> image: docker:latest

 services:
   - docker:dind

 before_script:
   - docker login -u <span class="hljs-variable">$CI_REGISTRY_USER</span> -p <span class="hljs-variable">$CI_REGISTRY_PASSWORD</span> <span class="hljs-variable">$CI_REGISTRY</span>

 deploy:
   image: google/cloud-sdk:latest
   script:
     - echo "$GOOGLE_APPLICATION_CREDENTIALS" &gt; google-creds.json
     - gcloud auth activate-service-account --key-file=google-creds.json
     - gcloud config <span class="hljs-built_in">set</span> project <span class="hljs-variable">$PROJECT_ID</span>
     - gcloud builds submit --tag gcr.io/<span class="hljs-variable">$PROJECT_ID</span>/<span class="hljs-variable">$CI_PROJECT_NAME</span>:<span class="hljs-variable">$CI_COMMIT_SHA</span>
     - gcloud run deploy --image=gcr.io/<span class="hljs-variable">$PROJECT_ID</span>/<span class="hljs-variable">$CI_PROJECT_NAME</span>:<span class="hljs-variable">$CI_COMMIT_SHA</span> --platform=managed --region=<span class="hljs-variable">$CLOUD_RUN_REGION</span> --allow-unauthenticated --update-env-vars=VAR1=value1,VAR2=value2 --quiet
   only:
     - master
</code></pre>
</li>
</ol>
<p>3. Set <code>$PROJECT_ID</code> to your Google Cloud project ID and <code>$CLOUD_RUN_REGION</code> to your preferred region, either directly in the file or as CI/CD variables (see Step 4).</p>
<p>4. Add any environment variables you need to the <code>--update-env-vars</code> flag.</p>
<p>5. Commit and push your changes to your GitLab repository.</p>
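<p>To see how the image tag in the pipeline resolves at runtime, here is a small sketch with hypothetical values. On a real pipeline run, <code>CI_PROJECT_NAME</code> and <code>CI_COMMIT_SHA</code> are predefined by GitLab, so only <code>PROJECT_ID</code> comes from your own configuration:</p>
<pre><code class="lang-bash"># Hypothetical values for illustration only
PROJECT_ID="my-gcp-project"
CI_PROJECT_NAME="my-app"
CI_COMMIT_SHA="a1b2c3d"

# The tag each pipeline run builds and deploys
echo "gcr.io/$PROJECT_ID/$CI_PROJECT_NAME:$CI_COMMIT_SHA"   # gcr.io/my-gcp-project/my-app:a1b2c3d
</code></pre>
<p>Tagging with the commit SHA (rather than <code>latest</code>) makes each deployed revision traceable back to the exact commit that produced it.</p>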
<h3 id="heading-step-4-configure-gitlab-cicd-variables">Step 4: Configure GitLab CI/CD Variables</h3>
<p>Finally, we need to configure some variables in GitLab CI/CD to authenticate our Google Cloud account and registry. To do this, follow these steps:</p>
<ol>
<li><p>In your GitLab repository, navigate to “Settings” &gt; “CI/CD” &gt; “Variables”.</p>
</li>
<li><p>Add the following variables:</p>
</li>
</ol>
<ul>
<li><p><code>GOOGLE_APPLICATION_CREDENTIALS</code> - the contents of your Google Cloud service account key file.</p>
</li>
<li><p><code>PROJECT_ID</code> - your Google Cloud project ID.</p>
</li>
<li><p><code>CI_REGISTRY_USER</code> - your GitLab username.</p>
</li>
<li><p><code>CI_REGISTRY_PASSWORD</code> - your GitLab personal access token.</p>
</li>
</ul>
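<p>If you don’t yet have a key to paste into <code>GOOGLE_APPLICATION_CREDENTIALS</code>, you can create one with the commands below. The service account name <code>ci-deployer</code> and the project ID are hypothetical; use your own, and make sure the account has the roles it needs (for example, Cloud Run Admin and Cloud Build Editor):</p>
<pre><code class="lang-bash"># Create a key for an existing service account (names here are examples)
gcloud iam service-accounts keys create key.json \
    --iam-account=ci-deployer@my-gcp-project.iam.gserviceaccount.com

# The JSON contents of key.json go into the GOOGLE_APPLICATION_CREDENTIALS variable
cat key.json
</code></pre>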
<h3 id="heading-conclusion">Conclusion</h3>
<p>Congratulations! You now have a fully automated GitLab CI/CD pipeline that deploys your code to Google Cloud Run. With this setup, you can focus on writing code and let GitLab and Google Cloud handle the rest.</p>
]]></content:encoded></item></channel></rss>