<?xml version="1.0" encoding="UTF-8"?>
<rss 
    version="2.0"
    xmlns:dc="http://purl.org/dc/elements/1.1/" 
    xmlns:content="http://purl.org/rss/1.0/modules/content/" 
    xmlns:atom="http://www.w3.org/2005/Atom" 
    xmlns:media="http://search.yahoo.com/mrss/" 
>
    <channel>
        <title><![CDATA[KloudShift GmbH]]></title>
        <description><![CDATA[From concept to cloud]]></description>
        <link>https://kloudshift.net</link>
        <image>
            <url>https://kloudshift.net/favicon.png</url>
            <title>KloudShift GmbH</title>
            <link>https://kloudshift.net</link>
        </image>
        <generator>Ghost 6.26</generator>
        <lastBuildDate>Fri, 17 Apr 2026 17:49:36 +0200</lastBuildDate>
        <atom:link href="https://kloudshift.net" rel="self" type="application/rss+xml"/>
        <ttl>60</ttl>

                <item>
                    <title><![CDATA[ASP.NET Core Minimal APIs: Quick Guide to API Versioning]]></title>
                    <description><![CDATA[Introduction

When building RESTful APIs with ASP.NET Core, you&#39;ll sooner or later reach the point where you need to add versioning to your API, usually because you have to introduce breaking changes while keeping backwards compatibility.

This is where API versioning comes into]]></description>
                    <link>https://kloudshift.net/blog/asp-net-core-minimal-apis-quick-guide-to-api-versioning/</link>
                    <guid isPermaLink="false">691355ed21b585000115af52</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Tue, 11 Nov 2025 16:27:43 +0100</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1591262184859-dd20d214b52a?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDF8fGV2b2x1dGlvbnxlbnwwfHx8fDE3NjI4NjUwMTl8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1591262184859-dd20d214b52a?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDF8fGV2b2x1dGlvbnxlbnwwfHx8fDE3NjI4NjUwMTl8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" alt="ASP.NET Core Minimal APIs: Quick Guide to API Versioning"/> <h2 id="introduction">Introduction</h2><p>When building RESTful APIs with ASP.NET Core, you'll sooner or later reach the point where you need to add versioning to your API, usually because you have to introduce breaking changes while keeping backwards compatibility. </p><p>This is where API versioning comes into play: it allows you to evolve your application without breaking existing clients. As you add new features or change data contracts, older versions of the API can remain stable and compatible for consumers who depend on them. </p><p>This allows you to introduce improvements safely, manage deprecations in a controlled way, and maintain backwards compatibility across mobile apps, integrations, or partner systems. </p><p>Let's have a look at the different types of API versioning and compare them to each other. Later, we'll get to see some code. </p><h2 id="types-of-api-versioning">Types of API versioning</h2><h3 id="versioning-by-url-segment">Versioning by URL segment</h3><p>This is by far the most popular way to version a public API, where the version itself is part of the URL, e.g.</p><pre><code>https://api.kloudshift.net/v1/products
https://api.kloudshift.net/v2/customers</code></pre><p><strong>Advantages</strong></p><p>This approach clearly and explicitly communicates which version a client is using. Also, requiring an explicit service version helps ensure existing clients don't break.</p><p><strong>Disadvantages</strong></p><p>However, with this way of versioning, it's not possible to select a default API version when clients don't specify one explicitly. To enable such scenarios, <em>double route registration</em> would be required, providing multiple routes for the same endpoint, which can clutter your code.</p><p>Also, clients must change URLs when upgrading to a new API version, which increases maintenance friction. </p><p>Further, some REST purists might argue that this approach breaks resource identity. In pure REST, a resource's URI should be stable - adding <code>/v2</code> means a "new" resource, even if it's semantically the same entity. </p><p><strong>When to use</strong></p><p>This type of versioning is best suited if you're building a public API with multiple long-lived versions and you want maximum visibility and simplicity for clients. </p><p>You should avoid it if you are a REST purist and need strict semantics or prefer transparent evolution. If the latter is the case, consider header or media-type versioning...</p><h3 id="versioning-by-header">Versioning by header</h3><p>When versioning an API by header, clients need to specify which version of the API they are talking to by adding a custom header.</p><pre><code>curl https://api.kloudshift.net/products -H "X-Api-Version: 1" ...</code></pre><p><strong>Advantages</strong></p><p>With this approach, the URLs and resource paths remain clean and free of version strings. It aligns well with REST principles in that the URL represents the resource, not its version. This also implies that API versioning is decoupled from routing. 
</p><p>I also like that default API versions can be controlled backend-side, independently of the client. </p><p>It further allows clients to switch versions without having to change URLs - just the header value. This can be beneficial, e.g. when building an SDK. </p><p><strong>Disadvantages</strong></p><p>On the downside, however, I think this approach is less discoverable to clients. Which header should I set? Which versions are supported by this endpoint? </p><p>Some APIs just return a 400 Bad Request, without letting you know that a version header was missing in the request. Don't be like that - it's annoying! Luckily <code>Asp.Versioning.Http</code> offers a remedy for this; we'll get to that later. </p><p><strong>When to use</strong></p><p>You should use header-based API versioning when you want to keep your URLs clean and version agnostic, and when your clients (e.g., mobile apps, backends, SDKs) can easily control their requests/headers.</p><p>However, when you rely on caching (CDNs, proxies), you're better off with URL-based versioning, since caches often key by URL, not headers. Further, when your API is accessed primarily from browsers or forms, you're better off with versioning by URL path segment or query strings...</p><h3 id="versioning-by-query-string">Versioning by query string</h3><p>When versioning an API by query string, you include the version as a query parameter in the URL, for example:</p><pre><code>https://api.kloudshift.net/products?version=1</code></pre><p><strong>Advantages</strong></p><p>For clients, this approach is very simple to use and understand; it doesn't require modifying headers or complex request formats.</p><p><strong>Disadvantages</strong></p><p>Just like the URL path segment versioning approach, REST purists might argue it doesn't follow strict REST semantics. The resource should be identified by the URL - the version is more of a representational concern. Including it as a query parameter mixes concerns. 
</p><p><strong>When to use</strong></p><p>Query-based versioning works well for internal APIs, low-traffic services, or early-stage projects where simplicity and ease of testing outweigh strict REST compliance or caching concerns.</p><h3 id="versioning-by-media-type">Versioning by media type</h3><p>When using API versioning by media type, a client needs to set the requested version in the <code>Accept</code> header that is used by the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Content_negotiation?ref=kloudshift.net">HTTP content negotiation mechanism</a>. </p><pre><code>curl https://api.kloudshift.net/api/products -H "Accept: application/json; charset=utf-8; version=1.3-rc" ...</code></pre><p><strong>Advantages</strong></p><p>Just like the header-based versioning approach, this comes with the benefit of clean and stable URLs. It keeps the URL free of version information and aligns well with REST principles.</p><p>Although it is the most complex approach, it also provides the most flexibility, allowing versioning per representation, not per endpoint. </p><p><strong>Disadvantages</strong></p><p>This approach comes with the same disadvantages as header-based versioning, in that it's not obvious to a client how to select a version and which versions are available (the <code>Asp.Versioning.Http</code> NuGet package helps us in this regard, more on that later). </p><p>Also, it adds some overhead for the client, which needs to manage headers carefully, making it harder for simple integrations or frontend apps that expect straightforward URLs. </p><p><strong>When to use</strong></p><p>You should consider media-type versioning if you're building a highly RESTful, enterprise-level, or hypermedia-driven API, where representations evolve independently of endpoints. 
Otherwise, prefer URL or header-based versioning for ease of maintenance.</p><h2 id="versioning-with-aspnet-core">Versioning with ASP.NET Core</h2><h3 id="getting-started">Getting started</h3><p>To add versioning to your project, you need one of the following packages. Since this blog post deals with Minimal APIs only, it's sufficient to install <code>Asp.Versioning.Http</code>.</p><table>
<thead>
<tr>
<th>Package</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Asp.Versioning.Http</td>
<td>Adds API versioning to your ASP.NET Core Minimal API applications</td>
</tr>
<tr>
<td>Asp.Versioning.Mvc</td>
<td>Adds API versioning to your ASP.NET Core MVC (Core) applications</td>
</tr>
<tr>
<td>Asp.Versioning.OData</td>
<td>Adds API versioning to your ASP.NET Core applications using OData v4.0</td>
</tr>
</tbody>
</table>
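<p>Assuming the .NET CLI is available, the package can be added to a project like this (a minimal setup sketch; it only references the package names from the table above):</p><pre><code>dotnet add package Asp.Versioning.Http</code></pre>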
<p>Then add these calls to your <code>Program.cs</code>, where <code>AddProblemDetails</code> adds the services required to create <code>ProblemDetails</code> responses for failed requests, and <code>AddApiVersioning</code> adds the versioning capabilities.</p><pre><code class="language-csharp">builder.Services.AddProblemDetails();
builder.Services.AddApiVersioning();</code></pre><h3 id="versioning-by-url-segment-1">Versioning by URL segment</h3><p>To enable versioning by URL segment, I initialize the <code>ApiVersionReader</code> with an instance of <code>UrlSegmentApiVersionReader</code>. </p><p>Also, I create one <code>ApiVersionSet</code> per endpoint, which defines the versions it supports. Finally, a route template is defined as <code>/api/v{version:apiVersion}/products</code>, and each endpoint is mapped to a specific version.</p><pre><code class="language-csharp">public static class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);

        builder.Services.AddProblemDetails();
        builder.Services.AddApiVersioning(o =&gt;
        {
            o.ApiVersionReader = new UrlSegmentApiVersionReader();
        });

        var app = builder.Build();

        var products = app.NewApiVersionSet()
            .HasApiVersion(new(1))
            .HasApiVersion(new(2))
            .Build();

        app.MapPost("/api/v{version:apiVersion}/products", (HttpContext ctx, ProductRequest req) =&gt;
            {
                return TypedResults.Ok(new
                {
                    version = ctx.GetRequestedApiVersion()?.ToString(),
                    request = req
                });
            })
            .WithApiVersionSet(products)
            .MapToApiVersion(1);

        app.MapPost("/api/v{version:apiVersion}/products", (HttpContext ctx, ProductRequestV2 req) =&gt;
            {
                return TypedResults.Ok(new
                {
                    version = ctx.GetRequestedApiVersion()?.ToString(),
                    request = req
                });
            })
            .WithApiVersionSet(products)
            .MapToApiVersion(2);

        app.Run();
    }
}</code></pre><p>Example request</p><pre><code>https://api.kloudshift.net/api/v2/products</code></pre><h3 id="versioning-by-header-1">Versioning by header</h3><p>To enable versioning by header, we initialize the <code>ApiVersionReader</code> with an instance of <code>HeaderApiVersionReader</code> and pass in the expected header name. </p><pre><code class="language-csharp">builder.Services.AddApiVersioning(o =&gt;
{
    o.ApiVersionReader = new HeaderApiVersionReader("X-Api-Version");
});</code></pre><p>Example request</p><pre><code>curl https://api.kloudshift.net/products -H "X-Api-Version: 2"</code></pre><h3 id="versioning-by-query-string-1">Versioning by query string</h3><p>To enable versioning by query string, we use the <code>QueryStringApiVersionReader</code> type and optionally pass the parameter name to use. The default is <code>api-version</code>. </p><pre><code class="language-csharp">builder.Services.AddApiVersioning(o =&gt;
{
    // defaults to "api-version"
    o.ApiVersionReader = new QueryStringApiVersionReader("version");
});</code></pre><p>Example request</p><pre><code>https://api.kloudshift.net/products?version=2</code></pre><h3 id="versioning-by-media-type-1">Versioning by media type</h3><p>To enable versioning by media type, we use the <code>MediaTypeApiVersionReader</code> and optionally pass a name for the parameter, which defaults to <code>v</code>.</p><pre><code class="language-csharp">builder.Services.AddApiVersioning(o =&gt;
{
    // parameter name defaults to "v"
    o.ApiVersionReader = new MediaTypeApiVersionReader();
});</code></pre><p>Example request</p><pre><code>curl https://api.kloudshift.net/products -H "Accept: application/json; charset=utf-8; v=1"</code></pre><h3 id="version-discovery">Version discovery</h3><p>Most likely you want to communicate to a client which versions each endpoint supports. This can be achieved by using the <code>ReportApiVersions</code> option, which adds the <code>api-supported-versions</code> header to the response.</p><pre><code class="language-csharp">builder.Services.AddApiVersioning(o =&gt;
{
    o.ApiVersionReader = new HeaderApiVersionReader("X-Api-Version");
    o.ReportApiVersions = true;
});</code></pre><p>As you can see in the example request/response below, the client requests version 2 by setting the header <code>X-Api-Version: 2</code>. The response then contains the <code>api-supported-versions</code> header, indicating the endpoint supports versions 1 and 2.</p><pre><code>POST http://api.kloudshift.net/products HTTP/1.1
X-Api-Version: 2

{
  "Name": "Backend development",
  "Price": 42.4,
  "Description": "Some description"
}
 
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Tue, 11 Nov 2025 14:30:39 GMT
Server: Kestrel
Transfer-Encoding: chunked
api-supported-versions: 1, 2</code></pre><p>This option can also be set on the <code>ApiVersionSet</code>. In the following example, only the products endpoint will advertise its available versions. </p><pre><code class="language-csharp">var products = app.NewApiVersionSet()
    .HasApiVersion(new(1))
    .HasApiVersion(new(2))
    .ReportApiVersions()
    .Build();

var customers = app.NewApiVersionSet()
    .HasApiVersion(new(1))
    .HasApiVersion(new(2))
    .HasApiVersion(new(3))
    .Build();

app.MapPost("/api/products", ...)
   .WithApiVersionSet(products)
   .MapToApiVersion(2);

app.MapPost("/api/customers", ...)
   .WithApiVersionSet(customers)
   .MapToApiVersion(3);</code></pre><h3 id="handling-requests-without-a-specified-api-version">Handling requests without a specified API version</h3><p>The last feature I'd like to highlight is the <code>ApiVersionSelector</code> option. This setting defines how the backend selects an API version when a client has not requested one explicitly and you don't want such calls to fail with a <code>400 Bad Request</code> response. </p><p>There are four types of <code>ApiVersionSelector</code>: </p><p><code>DefaultApiVersionSelector</code></p><p>This selector always selects the configured <code>DefaultApiVersion</code>, regardless of which API versions are available. </p><pre><code class="language-csharp">builder.Services.AddApiVersioning(o =&gt;
{
    o.ApiVersionReader = new HeaderApiVersionReader("X-Api-Version");
    
    o.DefaultApiVersion = new ApiVersion(2);
    o.AssumeDefaultVersionWhenUnspecified = true;
    o.ApiVersionSelector = new DefaultApiVersionSelector(o);
});</code></pre><p><code>ConstantApiVersionSelector</code></p><p>Always selects the defined API version.</p><pre><code class="language-csharp">builder.Services.AddApiVersioning(o =&gt;
{
    o.ApiVersionReader = new HeaderApiVersionReader("X-Api-Version");
    
    o.AssumeDefaultVersionWhenUnspecified = true;
    // ConstantApiVersionSelector takes the fixed version to select
    o.ApiVersionSelector = new ConstantApiVersionSelector(new ApiVersion(2));
});</code></pre><p><code>CurrentImplementationApiVersionSelector</code></p><p>Selects the maximum API version available which doesn't have a version status. For example, if the versions <code>1</code>, <code>2</code> and <code>3-alpha</code> are available, then <code>2</code> will be selected. </p><pre><code class="language-csharp">builder.Services.AddApiVersioning(o =&gt;
{
    o.ApiVersionReader = new HeaderApiVersionReader("X-Api-Version");
    
    o.AssumeDefaultVersionWhenUnspecified = true;
    o.ApiVersionSelector = new CurrentImplementationApiVersionSelector(o);
});</code></pre><p><code>LowestImplementedApiVersionSelector</code></p><p>Selects the minimum API version available which does not have a version status. For example, if the versions <code>0.9-beta</code>, <code>1</code>, <code>2</code> and <code>3-alpha</code> are available, then <code>1</code> will be selected. </p><pre><code class="language-csharp">builder.Services.AddApiVersioning(o =&gt;
{
    o.ApiVersionReader = new HeaderApiVersionReader("X-Api-Version");
    
    o.AssumeDefaultVersionWhenUnspecified = true;
    o.ApiVersionSelector = new LowestImplementedApiVersionSelector(o);
});</code></pre><h2 id="conclusion">Conclusion</h2><ul><li>There are four common approaches to version your API, each with its own advantages and disadvantages. </li><li>Selecting the right versioning approach depends on the clients you're writing your API for.</li><li>I'd recommend implementing API versioning right from the beginning of your project. Sooner or later you'll need to introduce breaking changes while keeping backwards compatibility. </li><li>My personal favorite is header-based versioning, since I like to keep endpoints stable. YMMV.</li></ul><p>That's it for today. Happy hacking 😎</p><h2 id="further-reading">Further reading</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/dotnet/aspnet-api-versioning?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - dotnet/aspnet-api-versioning: Provides a set of libraries which add service API versioning to ASP.NET Web API, OData with ASP.NET Web API, and ASP.NET Core.</div><div class="kg-bookmark-description">Provides a set of libraries which add service API versioning to ASP.NET Web API, OData with ASP.NET Web API, and ASP.NET Core. 
- dotnet/aspnet-api-versioning</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/pinned-octocat-093da3e6fa40-14.svg" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">dotnet</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/aspnet-api-versioning" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/dotnet/aspnet-api-versioning/wiki?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Home</div><div class="kg-bookmark-description">Provides a set of libraries which add service API versioning to ASP.NET Web API, OData with ASP.NET Web API, and ASP.NET Core. - dotnet/aspnet-api-versioning</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/pinned-octocat-093da3e6fa40-15.svg" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">dotnet</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/aspnet-api-versioning-1" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.lexicalscope.com/blog/2012/03/12/how-are-rest-apis-versioned/?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">How are REST APIs versioned?</div><div class="kg-bookmark-description">I am currently working on a REST API, and the question was raised, how are, and how should, REST APIs be versioned? Here are the results of my research. 
It seems that there are a number of people r…</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/favicon-6.ico" alt=""><span class="kg-bookmark-author">Lexicalscope</span><span class="kg-bookmark-publisher">Tim Wood</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/blank.jpg" alt="" onerror="this.style.display = 'none'"></div></a></figure>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Comparing self-hostable PaaS solutions: CapRover, Coolify &amp; Dokploy reviewed]]></title>
                    <description><![CDATA[Introduction

For many small and medium-sized businesses, the cost of running workloads on hyperscalers can be too high, while data sovereignty and control remain non-negotiable priorities - this is especially true for the finance and healthcare industries in Switzerland.

At the same time, operating a full Kubernetes cluster—even with]]></description>
                    <link>https://kloudshift.net/blog/comparing-self-hostable-paas-solutions-caprover-coolify-dokploy-reviewed/</link>
                    <guid isPermaLink="false">68dd79d7d78a570001a367f5</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Wed, 01 Oct 2025 20:58:31 +0200</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1626278664285-f796b9ee7806?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDN8fGZpZ2h0fGVufDB8fHx8MTc1OTM0MjQwNXww&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1626278664285-f796b9ee7806?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDN8fGZpZ2h0fGVufDB8fHx8MTc1OTM0MjQwNXww&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" alt="Comparing self-hostable PaaS solutions: CapRover, Coolify &amp; Dokploy reviewed"/> <h2 id="introduction">Introduction</h2><p>For many small and medium-sized businesses, the cost of running workloads on hyperscalers can be too high, while data sovereignty and control remain non-negotiable priorities - this is especially true for the finance and healthcare industries in Switzerland.</p><p>At the same time, operating a full Kubernetes cluster, even with a managed control plane, brings more overhead than most teams can justify. Not everybody is running a fully-fledged micro-service application that needs global scaling capabilities.</p><p>Integrating a PaaS platform like Azure App Service into your CI/CD toolchain often adds unnecessary complexity and feels more suited for large enterprises than for SMB needs. It doesn't provide the unified Developer Experience (DX) you might be looking for.</p><p>If you've been nodding throughout these first paragraphs and you are looking for a <strong>self-hostable alternative to services like Heroku, Vercel, Netlify, Azure App Service</strong>, etc., then this article is for you. </p><p>I'll compare <strong><em>CapRover, Coolify and Dokploy</em></strong>, three <strong>off-the-shelf Internal Developer Platforms</strong> that make it simple for your team to deploy and manage applications and databases with minimal effort. </p><p>I've deployed and tested these platforms on a VPS running <code>Ubuntu 24.04</code>. As of writing, the latest versions were <code>CapRover 1.14.0</code>, <code>Coolify v4.0.0-beta.431</code>, and <code>Dokploy v0.25.4</code>. All solutions are based on Docker &amp; Docker Swarm. 
</p><p>These are the aspects I took a closer look at:</p><ul><li>Maturity &amp; community support</li><li>Installation, upgrading &amp; maintenance</li><li>Docker Compose &amp; Docker Swarm support</li><li>API &amp; CLI</li><li>Multi server setup &amp; clustering</li><li>Domain name management &amp; automatic HTTPS support (ACME)</li><li>User, Teams &amp; Permission Management</li><li>Git Push Deployments with GitHub</li><li>Logging, Monitoring &amp; Notifications</li><li>Backup capabilities of platform configuration &amp; volumes</li><li>Multi environment support (PROD, STAGING, DEV, ...)</li><li>Preview deployments</li></ul><p>Let's start with an overview describing CapRover, Coolify and Dokploy.</p><h2 id="a-brief-description-of-the-projects">A brief description of the projects</h2><p><strong><em>CapRover</em></strong></p><p>CapRover claims to be an extremely easy to use app/database deployment &amp; web server manager for your Node.js, Python, PHP, ASP.NET, Ruby, MySQL, MongoDB, Postgres, WordPress, ... applications.</p><p>It's blazingly fast and very robust as it uses Docker, nginx, LetsEncrypt and NetData under the hood behind its simple-to-use interface.</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/2VSKM3h.png" class="kg-image" alt="" loading="lazy" width="1268" height="821"></figure><p><strong><em>Coolify</em></strong></p><p>The platform carries the claim of "self-hosting with superpowers". An open-source &amp; self-hostable alternative to Vercel, Heroku, Netlify and Railway for easily deploying websites, databases, web applications and 280+ one-click services to your own server.</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/5mtL5t5.png" class="kg-image" alt="" loading="lazy" width="2204" height="1528"></figure><p><strong><em>Dokploy</em></strong></p><p>The website states "Dokploy is a stable, easy-to-use deployment solution designed to simplify the application management process. 
Think of Dokploy as a free alternative self-hostable solution to platforms like Heroku, Vercel, and Netlify."</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/WAhwIhf.png" class="kg-image" alt="" loading="lazy" width="1040" height="799"></figure><h2 id="maturity-community-support">Maturity &amp; community support</h2><p>When deciding on a critical platform solution, we don't want to be left without community support.</p><p>So, to get an understanding of how active a GitHub project is, I usually glance at the last commit date and compare other metrics, e.g. the number of sponsors, stars, the repo creation timestamp (<code>created_at</code>) and the number of contributors. </p><p>Of course, this is only a high-level view of a project's activity, but it still provides some sense of how mature and alive a code base is. </p><p>Here are the numbers.</p><table>
<thead>
<tr>
<th></th>
<th>Coolify</th>
<th>CapRover</th>
<th>Dokploy</th>
</tr>
</thead>
<tbody>
<tr>
<td>Contributors</td>
<td>474</td>
<td>65</td>
<td>211</td>
</tr>
<tr>
<td>Stars</td>
<td>45.4k</td>
<td>14.5k</td>
<td>24.8k</td>
</tr>
<tr>
<td>Repo created at</td>
<td>2021-01-25</td>
<td>2017-10-25</td>
<td>2024-04-19</td>
</tr>
<tr>
<td>Sponsors</td>
<td>34+</td>
<td>137</td>
<td>50+</td>
</tr>
<tr>
<td>Downloads</td>
<td>100K+</td>
<td>100M+</td>
<td>1M+</td>
</tr>
</tbody>
</table>
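<p>If you want to reproduce these numbers yourself, most of them can be pulled from the GitHub REST API. A minimal sketch (repository paths as of writing; <code>jq</code> assumed to be installed):</p><pre><code>for repo in coollabsio/coolify caprover/caprover dokploy/dokploy; do
  curl -s "https://api.github.com/repos/$repo" |
    jq '{repo: .full_name, stars: .stargazers_count, created: .created_at}'
done</code></pre>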
<h2 id="installation-upgrading-maintenance">Installation, upgrading &amp; maintenance</h2><p><strong><em>CapRover</em></strong></p><p>Before installing CapRover, you need to manually set up Docker CE, which is a straightforward process. Next, you need to create a wildcard domain, e.g. <code>*.mydomain.com</code>, and install the CLI tool on your developer machine via npm. Once prepared, installation is as simple as running a single docker run command via SSH.</p><p>Since CapRover itself runs in a container, <strong><em>in-place upgrades</em></strong> are easy and can be triggered directly from the Web UI. According to the documentation, upgrades cause only a brief interruption of running apps.</p><p>CapRover also provides <strong>scheduled Docker cleanup tasks</strong>, helping keep disk usage under control.</p><p><strong><em>Coolify</em></strong></p><p>To install <strong>Coolify</strong>, start with a fresh VPS with at least 2 GB RAM and 2 cores, and run the installation script; Docker CE will be installed automatically. After the first login, a user account is created and a setup wizard guides you through the initial configuration.</p><p>Coolify supports automatic (schedulable) and manual updating. A test upgrade from <code>v4.0.0-beta.431</code> to <code>v4.0.0-beta.432</code> went smoothly without any interruption.</p><p>Another nice feature is that it allows you to patch your servers straight from the Coolify UI (running e.g. 
<code>apt update</code>, currently experimental)</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/fUlqyAm.png" class="kg-image" alt="" loading="lazy" width="2230" height="1310"></figure><p>It also includes a <strong>scheduled Docker cleanup</strong> feature that is capable of:</p><ul><li>Removing stopped containers managed by Coolify</li><li>Deleting unused images</li><li>Clearing the build cache</li><li>Removing old Coolify helper images</li><li>Optionally removing stale volumes and networks</li></ul><p><strong><em>Dokploy</em></strong></p><p>To set up Dokploy, your VPS should have at least <strong>2 GB RAM and 30 GB disk space</strong>. Installation is very simple: just run the script provided in the documentation. It will automatically install Docker CE. After the first login, a user account is created.</p><p>Upgrades are handled via a short shell script, though the documentation does not clarify whether they cause downtime.</p><p>In addition to <strong>scheduled Docker cleanup tasks</strong>, Dokploy also supports running <strong>custom scripts</strong> on a <strong>cron-based schedule</strong>.</p><h2 id="docker-compose-docker-swarm-support">Docker Compose &amp; Docker Swarm support</h2><p><strong><em>CapRover</em></strong></p><p>Since CapRover uses a custom format for its deployments (captain files/one-click templates), it provides only limited Docker Compose support. Only a subset of Docker Compose parameters can be used.</p><p><strong><em>Coolify</em></strong></p><p>The platform fully supports native Docker Compose syntax and displays environment variables from the Compose file directly in the Web UI.</p><p>You can create Docker Compose-based resources either from a public or a private repository. Coolify also allows copying and pasting Docker Compose definitions into a text field. </p><p>I liked that in all cases the Docker Compose file serves as the single source of truth, ensuring no unexpected side effects. 
At the time of writing, I encountered a validation-logic bug, documented here: </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/coollabsio/coolify/issues/6208?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">[Bug]: Invalid docker-compose file. Undefined array key “volumes” · Issue #6208 · coollabsio/coolify</div><div class="kg-bookmark-description">Error Message and Logs When clicking the “Validate” button I get this error. The docker compose file works, everything is fine and the container is running and accessible but I noticed that in the…</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/pinned-octocat-093da3e6fa40-12.svg" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">kevincam3</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/6208" alt="" onerror="this.style.display = 'none'"></div></a></figure><p>Coolify supports an <strong>experimental</strong> Docker Swarm mode: you register a Swarm manager (and optionally workers) with Coolify, allowing it to coordinate deployments across nodes.</p><p>In this setup, your Swarm must use an external container registry so that all worker nodes can pull the images the manager builds.</p><p><strong><em>Dokploy</em></strong></p><p>Dokploy integrates well with Docker Compose and Docker Stack. You can point to a public Git repo holding a <code>docker-compose.yaml</code> or pull a definition from your private GitHub repo. Alternatively, you can input raw compose definitions into a text field. Feels mature and well integrated.</p><h2 id="api-cli">API &amp; CLI</h2><p><strong><em>CapRover</em></strong></p><p>CapRover does not directly expose an API, but it provides experimental API access through the CLI command <code>caprover api</code>. 
The JavaScript-based CLI allows you to:</p><ul><li>Perform actions to prepare CapRover on a server</li><li>Deploy your app to a specific CapRover machine</li><li>Call a generic API (experimental)</li></ul><p><strong><em>Coolify</em></strong></p><p>It provides an API secured by a Bearer token, which can be generated from the UI and supports different permission levels such as <code>root</code>, <code>write</code>, <code>deploy</code>, <code>read</code>, and <code>read:sensitive</code>. However, it does not include a CLI.</p><p><strong><em>Dokploy</em></strong></p><p>Dokploy exposes a feature-rich API secured with JWT authentication, where API keys can be scoped by organizations. Rate limiting is supported, and a Swagger interface is available. It doesn't provide granular token or API permissions. </p><p>In addition, Dokploy provides a JavaScript-based CLI tool via npm that allows you to create, deploy, and manage applications, databases, environments, and projects.</p><h2 id="multi-server-setup-clustering">Multi-server setup &amp; clustering</h2><p><strong><em>CapRover</em></strong></p><p>This platform supports clustering through Docker Swarm, though it also requires the use of an external container registry. Nodes can be joined to the swarm as either workers or managers via the UI, which involves generating SSH keys, but in my experience it was simpler to add nodes manually. </p><p>One limitation is that the documentation lags behind, making some steps less straightforward than they should be.</p><p><strong><em>Coolify</em></strong></p><p>Coolify allows deploying the same application to <a href="https://coolify.io/docs/knowledge-base/server/multiple-servers?ref=kloudshift.net">multiple servers</a>. You can add a load balancer in front of them to enable HA scenarios (requires manual setup). 
This feature is currently marked as experimental.</p><p>You can also add a dedicated <a href="https://coolify.io/docs/knowledge-base/server/build-server?ref=kloudshift.net">build server</a> to offload the build process from the machine actually hosting the applications. This keeps the load separated, so it doesn't affect the applications' performance. This feature requires a container registry the build server can push its images to.</p><p><strong><em>Dokploy</em></strong></p><p>When installing Dokploy on a single VPS, the same server handles application builds, hosts the applications, and serves the management UI simultaneously. This doesn't scale well for larger setups, which is why Dokploy supports a <a href="https://docs.dokploy.com/docs/core/multi-server?ref=kloudshift.net">multi-server setup</a>. </p><p>With the multi-server setup, you can separate the management UI from building &amp; hosting. The integration is straightforward and can be carried out mostly from the UI (besides adding the public SSH key). You won't need an additional container registry.</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/ORDD1Ko.png" class="kg-image" alt="" loading="lazy" width="2600" height="2002"></figure><p>After successful setup, you can select a server when creating a new service.</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/Q9l1XVa.png" class="kg-image" alt="" loading="lazy" width="1176" height="1268"></figure><p>I didn't find a way to further separate building from hosting with the multi-server feature.</p><p>But Dokploy also offers a <a href="https://docs.dokploy.com/docs/core/cluster?ref=kloudshift.net">clustering feature</a>. The idea of using clusters is to allow each server to host a different application and, using Traefik along with the load balancer, redirect the traffic from the Dokploy server to the servers you choose. 
The reverse proxy remains on the manager node.</p><p>It allows you to deploy multiple replicas of an application and distribute them across worker nodes (Docker Stack and Applications only; doesn't work for Compose). </p><p>For this setup to work, a container registry is required, and ideally the Docker Swarm nodes should communicate with each other over a private network.</p><p>For high availability, you can scale Traefik to multiple replicas and use an external load balancer (not tested; maybe for another article).</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">I came across a tiny bug in the UI, where the generated join command suggested a wrong private IP address. It required manual replacement.</div></div><h2 id="domain-name-management-automatic-https">Domain name management &amp; automatic HTTPS</h2><p><strong><em>CapRover</em></strong></p><p>By default, every newly deployed application is reachable under a wildcard domain created during installation (e.g. <code>my-app.subdomain.domain.tld</code>). New domains can be bound and configured from within the UI. CapRover comes with built-in support for Let's Encrypt and supports HTTP to HTTPS redirection.</p><p><strong><em>Coolify</em></strong> </p><p>The platform supports configuring wildcard domains, e.g. <code>*.example.com</code>. This allows you to use generated domain names for each application under that domain, e.g. <code>my-app.example.com</code>. </p><p>If you don't configure a wildcard domain, the domain auto-generation process will generate domain names under the service/domain <code>sslip.io</code> for quick access &amp; testing. But of course you can also set a custom domain such as <code>www.somethingelse.com</code>.</p><p>Coolify also supports automatic redirection of e.g. 
<code>example.com</code> to <code>www.example.com</code> and allows enforcing HTTPS.</p><p><strong><em>Dokploy</em></strong></p><p>Dokploy provides two ways to add domains: free domains and domains you own. Free domains are provided by <code>traefik.me</code> but are limited to HTTP only.</p><p>The UI also supports uploading your own x509 certificates, which can also be used to enable HTTPS for <code>traefik.me</code>. It further supports automatic SSL certificate provisioning via Let's Encrypt (ACME). You can also use other custom certificate providers.</p><h2 id="user-teams-permission-management">Users, Teams &amp; Permission Management</h2><p><strong><em>CapRover</em></strong></p><p>It offers a Pro plan that includes two-factor authentication, but otherwise only supports single-user mode.</p><p><strong><em>Coolify</em></strong></p><p>Coolify supports creating multiple teams (though this feature did not work in my test with version <code>v4.0.0-beta.431</code>), inviting new users, and assigning them one of three roles: admin, owner, or member. Additionally, it offers two-factor authentication for logins.</p><p><strong><em>Dokploy</em></strong></p><p>It allows organizing resources by organizations and supports two-factor authentication. New members can be invited, though in the self-hosted version this requires manually sharing the invitation link, while in the cloud version it works directly. See <a href="https://github.com/Dokploy/dokploy/issues/1834?ref=kloudshift.net">this issue</a> for further details.</p><p>Invitations are scoped to organizations, meaning users only gain access to resources within the org they are invited to, unless they receive separate invitations for additional orgs. 
</p><p>Currently, only one admin role is allowed per instance, but multiple permission levels are available to manage users effectively.</p><h2 id="git-push-deployment-with-github">Git Push Deployment with GitHub</h2><p><strong><em>CapRover</em></strong></p><p>It supports push-based deployments by configuring a webhook in your repository that triggers on a branch push. Once CapRover receives the webhook call, it pulls the code, builds it, and deploys it automatically, though the UI for this feature appears limited.</p><p><strong><em>Coolify</em></strong></p><p>Supports automatic deployments on commits and pull requests, working with both public and private repositories. For private repos, integration is possible via either a GitHub App or deploy keys. When using the GitHub App, enabling auto-deploy is as simple as ticking the <strong><em>Auto Deploy</em></strong> box in the resource configuration.</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/uK7joky.png" class="kg-image" alt="" loading="lazy" width="1386" height="626"></figure><p><strong><em>Dokploy</em></strong></p><p>Automatic deployment can be set up using either webhooks or the Dokploy API, though this is limited to Applications and Docker Compose. </p><p>For GitHub, autodeploy works out of the box with no additional configuration, and once Git integration is complete, you only need to set the trigger type on the application provider.</p><h2 id="logging-monitoring-notifications">Logging, Monitoring &amp; Notifications</h2><p><strong><em>CapRover</em></strong> </p><p>It provides a monitoring dashboard and a server-side Nginx log analyzer, with metrics powered by Netdata (though only available on the leader node). </p><p>Logs can be viewed per app, but the functionality is quite limited. 
Notifications are handled through the Netdata module, supporting delivery via email, Slack, Telegram, and Pushbullet.</p><p><strong><em>Coolify</em></strong></p><p>Notifications can be triggered for deployments, backups, scheduled tasks, and server events such as cleanups or disk usage. </p><p>They can be sent via email, Discord, Telegram, Slack, or Pushover, and logs can also be drained to third-party applications like Axiom and New Relic.</p><p><strong><em>Dokploy</em></strong></p><p>It provides basic graphs for CPU, memory, and disk usage, along with notifications for events such as app deployments, build errors, database backups, Docker cleanup tasks, and Dokploy restarts. </p><p>Notifications can be sent through Slack, Telegram, Discord, email, Gotify, or ntfy, but external log drains are not supported.</p><h2 id="configuration-backup-volume-backup">Configuration Backup &amp; Volume Backup</h2><p><strong><em>CapRover</em></strong></p><p>Backup and restore functionality is still experimental. While it works for most resources, images require using a Docker registry and volumes need a custom solution; both approaches have their own pros and cons.</p><p><strong><em>Coolify</em></strong> </p><p>Coolify offers two ways to back up the instance itself: automatic backups to S3 storage or manual backups triggered on demand. However, it does not provide an automatic or UI-integrated solution for backing up and restoring volumes, though the documentation offers clear guidance on how to handle this manually.</p><p><strong><em>Dokploy</em></strong></p><p>Dokploy supports configuration backups to an S3 destination, covering the entire file system and database, with the option to schedule them regularly. </p><p>It also provides integrated database backups and supports volume backups for applications and Docker Compose. 
Named Docker volumes can be selected from a list and backed up to S3, with the option to temporarily stop containers during the process to avoid file locks or corruption, making it a very convenient solution.</p><h2 id="multi-environment-support">Multi-environment support</h2><p><strong><em>CapRover</em></strong></p><p>Doesn't support environments.</p><p><strong><em>Coolify</em></strong></p><p>It supports three types of shared variables: team-based, project-based, and environment-based (such as prod, staging, or dev). However, environment-level variables were unclear in usage, and team-based variables did not work in testing.</p><p><strong><em>Dokploy</em></strong></p><p>You can configure environment variables at different scopes: project-wide variables that apply to all applications, environment-level variables limited to a specific environment, and application-scoped variables that affect only a single application.</p><h2 id="preview-deployments">Preview Deployments</h2><p><strong><em>CapRover</em></strong></p><p>Doesn't support preview deployments.</p><p><strong><em>Coolify</em></strong></p><p>It offers a great way to test applications before merging into the main branch by creating preview deployments that act like a staging environment. </p><p>Preview URLs can be templated with identifiers such as the pull request ID (e.g., PR 123 becomes 123.example.com). Automated preview deployments can also be enabled, making each new pull request instantly available at its own preview URL by simply ticking <strong>Preview Deployments</strong> under <em>Configuration → Advanced</em>.</p><p><strong><em>Dokploy</em></strong></p><p>Preview deployments only work with applications sourced from GitHub and linked via a GitHub App, and they should be used exclusively with private repositories to prevent external users from triggering builds and deployments. 
</p><p>You can limit the number of preview deployments per application, use either auto-generated or custom domains, and apply label filters to control which pull requests trigger a preview deployment.</p><h2 id="verdict">Verdict</h2><p>A key capability missing across all three platforms is <strong>robust observability integration</strong>.</p><p><strong><em>CapRover</em></strong></p><p>I found CapRover's documentation lacking. I also found its UI quite basic and sometimes buggy. For example, when I removed a worker node from the Swarm, the UI did not update. The feature set is also quite limited compared to Coolify &amp; Dokploy. </p><p>That said, it is still a respected and widely used solution, judging by its 100M+ downloads on Docker Hub. </p><p><strong><em>Coolify</em></strong></p><p>I found the UX sometimes a bit difficult to understand intuitively, e.g. when looking for the HTTPS settings. Also, when creating new applications sourced from a private Git repository, it would be nice to have a search function, since the list of repos can get long. Having backup support for volumes would also be awesome. </p><p>Other than that, the platform is quite mature with a lot of features at hand. I liked the multi-server setup to horizontally scale the installation. Also, the documentation is in very good shape!</p><p><strong><em>Dokploy</em></strong></p><p>Although it is the youngest of these three projects, Dokploy is my personal favorite. </p><p>I find it a feature-rich PaaS platform with pretty good documentation. Installation is dead simple and it provides full Docker Compose support. The UX is well designed and I liked the support for multiple organizations, allowing for clear separation between stacks. 
A highlight is the nicely integrated backup feature allowing for scheduled backups of volumes, databases and configuration.</p><h2 id="further-reading">Further reading</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.dokploy.com/docs/core?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Welcome to Dokploy | Dokploy</div><div class="kg-bookmark-description">Dokploy is a open source alternative to Heroku, Vercel, and Netlify.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/icon.svg" alt=""><span class="kg-bookmark-author">Dokploy Docs</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/logo.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://coolify.io/docs/?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Coolify Docs</div><div class="kg-bookmark-description">Self hosting with superpowers: An open-source &amp; self-hostable Heroku / Netlify / Vercel alternative.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/coolify-logo-transparent-1.png" alt=""><span class="kg-bookmark-author">Get Started</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/og-image-docs-1.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://caprover.com/docs/get-started.html?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Getting Started · CapRover</div><div class="kg-bookmark-description">## Simple Setup</div><div class="kg-bookmark-metadata"><img 
class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/favicon-5.ico" alt=""></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/logo-1.png" alt="" onerror="this.style.display = 'none'"></div></a></figure>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Secure your codeless REST API with automatic HTTPS using Data API Builder and Caddy]]></title>
                    <description><![CDATA[Introduction

In my previous article, I demonstrated how we can build a codeless REST API with Data API Builder and how the endpoints can be write-protected by introducing roles with the help of Azure AD.

Creating and securing a codeless REST API on Azure using Data API BuilderThis article describes]]></description>
                    <link>https://kloudshift.net/blog/secure-your-codeless-rest-api-with-automatic-https-using-data-api-builder-and-caddy/</link>
                    <guid isPermaLink="false">68c1941a24d7360001b2f717</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Wed, 10 Sep 2025 17:07:06 +0200</pubDate>

                        <media:content url="https://matthiasguentert.net/content/images/2023/03/caddy-and-data-api-builder.png" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://matthiasguentert.net/content/images/2023/03/caddy-and-data-api-builder.png" alt="Secure your codeless REST API with automatic HTTPS using Data API Builder and Caddy"/> <h2 id="introduction">Introduction </h2><p>In my previous article, I demonstrated how we can build a codeless REST API with <a href="https://github.com/Azure/data-api-builder?ref=kloudshift.net">Data API Builder</a> and how the endpoints can be write-protected by introducing roles with the help of Azure AD.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://matthiasguentert.net/creating-and-securing-a-codeless-rest-api-on-azure-using-data-api-builder/?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Creating and securing a codeless REST API on Azure using Data API Builder</div><div class="kg-bookmark-description">This article describes how we can build a codeless REST API using Data API Builder and host it securely on Azure Container Instances</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/2020/10/favicon-96x96.ico" alt=""><span class="kg-bookmark-author">Matthias' Blog</span><span class="kg-bookmark-publisher">Matthias Güntert</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/2023/03/dab-architecture-overview.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><p>Unfortunately, the described architecture doesn't provide HTTPS out of the box, which makes its use insecure. This is especially true for any operations requiring an access token. </p><p>So in this article, I'll describe an architecture that will protect our codeless REST API with a reverse proxy providing <strong>automatic </strong>HTTPS to further reduce maintenance! 
</p><p>For this article, the REST API will build on the famous AdventureWorksLT data set, which we host on a slim Azure SQL database. </p><p>Then, we will use Caddy as a sidecar to the Data API Builder runtime, and host the container group on Azure Container Instances. </p><blockquote>💡 Caddy is a powerful, enterprise-ready, open source web server with automatic HTTPS written in Go. Some benchmarks promise 4 times higher performance than Nginx. </blockquote><h3 id="features-components">Features &amp; Components</h3><ul><li>Caddy 2, acting as a reverse proxy and providing automatic HTTPS</li><li>Data API Builder </li><li>An Azure Container Instance Group</li><li>Azure SQL Server &amp; Database</li><li>Azure Storage Account hosting configuration files</li><li>Let's Encrypt and the ACME protocol</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matthiasguentert.net/content/images/2023/03/caddy-and-data-api-builder-1.png" class="kg-image" alt="" loading="lazy" width="907" height="641"><figcaption>The target architecture featuring automatic HTTPS</figcaption></figure><p>Without further ado, let's get started 🧪</p><h2 id="step-by-step">Step by step... </h2><blockquote>🔎 In the sections below, readers of my <a href="https://matthiasguentert.net/creating-and-securing-a-codeless-rest-api-on-azure-using-data-api-builder/?ref=kloudshift.net">previous article</a> on the Data API Builder might recognize some repeated parts. I want my articles to be as easy as possible to follow; that's why I decided to list the required sub-steps again...</blockquote><h3 id="azure-sql-database">Azure SQL Database</h3><p>🪛 First, we'll need an Azure SQL server to host our demo database.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">az group create `
  --location westeurope `
  --name rg-demo

az sql server create `
  --name sql-azureblue `
  --resource-group rg-demo `
  --admin-password "your-password" `
  --admin-user "sqladmin"</code></pre><figcaption>Create an Azure SQL server</figcaption></figure><p>🪛 Next, let's set up the database and use the Adventure Works LT sample data.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash"> az sql db create `
   --name sqldb-adventureworks `
   --resource-group rg-demo `
   --server sql-azureblue `
   --backup-storage-redundancy Local `
   --edition Basic `
   --capacity 5 `
   --max-size 2GB `
   --sample-name AdventureWorksLT</code></pre><figcaption>Create Azure SQL demo database</figcaption></figure><p>🪛 Finally, we need to make sure that all Azure services are able to access our database.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">az sql server firewall-rule create `
  --server sql-azureblue `
  --resource-group rg-demo `
  --name AllowAzureServices `
  --start-ip-address 0.0.0.0 `
  --end-ip-address 0.0.0.0</code></pre><figcaption>Allow Azure Services to access the Azure SQL Server</figcaption></figure><p>Don't worry about the IP range. The command won't open up the server to the entire Internet. Instead, it ticks the checkbox saying <em>Allow Azure services and resources to access this server</em>. </p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/yAgVoR7.png" class="kg-image" alt="" loading="lazy"></figure><h3 id="azure-storage-account-file-shares">Azure Storage Account &amp; File Shares </h3><p>For the purpose of this article, we'll need four file shares, which are</p><ul><li><code>dab-config</code></li><li><code>proxy-caddyfile</code></li><li><code>proxy-config</code></li><li><code>proxy-data</code></li></ul><p>The <code>dab-config</code> file share will host the <code>dab-config.json</code> file providing input to the Data API Builder runtime. The <code>proxy-caddyfile</code> file share will host the <code>Caddyfile</code>, which configures our reverse proxy, <code>proxy-config</code> will host the <a href="https://caddyserver.com/docs/conventions?ref=kloudshift.net#configuration-directory">Caddy configuration directory</a> and last but not least, <code>proxy-data</code> will persist the <a href="https://caddyserver.com/docs/conventions?ref=kloudshift.net#data-directory">Caddy data directory</a>. </p><blockquote>🔎 From the Caddy docs: [...] The Caddy data directory stores TLS certificates, private keys, OCSP staples, and other necessary information to the data directory. It should not be purged without an understanding of the implications.</blockquote><p>🪛 Okay, let's create the storage account named <code>stdabtlsdemo</code> and the aforementioned file shares. </p><figure class="kg-card kg-code-card"><pre><code class="language-bash"># Create storage account 
az storage account create `
  --name stdabtlsdemo `
  --resource-group rg-demo `
  --location westeurope

# Store connection string 
$env:AZURE_STORAGE_CONNECTION_STRING = $(az storage account show-connection-string --name stdabtlsdemo --resource-group rg-demo --output tsv)
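# The `az storage share create` commands below authenticate using the
# AZURE_STORAGE_CONNECTION_STRING environment variable set above.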

# Create file shares
az storage share create `
  --name dab-config `
  --account-name stdabtlsdemo
  
az storage share create `
  --name proxy-caddyfile `
  --account-name stdabtlsdemo

az storage share create `
  --name proxy-config `
  --account-name stdabtlsdemo
  
az storage share create `
  --name proxy-data `
  --account-name stdabtlsdemo</code></pre><figcaption>Create a new storage account and file shares</figcaption></figure><p>Before we move on and create the Azure Container Instance Group, let's have a closer look at the configuration files.</p><blockquote>💡 You can find all configuration files in <a href="https://github.com/matthiasguentert/data-api-builder-article/tree/main/dab-with-caddy-and-tls?ref=kloudshift.net">my GitHub repository</a>.</blockquote><h3 id="data-api-builder-runtime-configuration">Data API Builder Runtime Configuration</h3><p>Here is a basic runtime configuration that exposes a single endpoint <code>$baseUrl/api/product</code> for public read access. The endpoint is fed by data from the table <code>SalesLT.Product</code>. </p><p>Further, the connection string is injected via an environment variable called <code>DATABASE_CONNECTION_STRING</code>. </p><figure class="kg-card kg-code-card"><pre><code class="language-json">{
    "$schema": "https://dataapibuilder.azureedge.net/schemas/v0.5.35/dab.draft.schema.json",
    "data-source": {
        "database-type": "mssql",
        "options": {
            "set-session-context": false
        },
        "connection-string": "@env('DATABASE_CONNECTION_STRING')"
    },
    "runtime": {
        "rest": {
            "enabled": true,
            "path": "/api"
        },
        "graphql": {
            "allow-introspection": true,
            "enabled": true,
            "path": "/graphql"
        },
        "host": {
            "mode": "development",
            "cors": {
                "origins": [],
                "allow-credentials": false
            },
            "authentication": {
                "provider": "StaticWebApps"
            }
        }
    },
    "entities": {
        "product": {
            "source": "SalesLT.Product",
            "permissions": [
                {
                    "role": "anonymous",
                    "actions": [
                        "read"
                    ]
                }
            ]
        }
    }
}</code></pre><figcaption>dab-config.json</figcaption></figure><p>🪛 Now is a good time to copy the file <a href="https://github.com/matthiasguentert/data-api-builder-article/blob/main/dab-with-caddy-and-tls/dab-config.json?ref=kloudshift.net"><code>dab-config.json</code></a> to the share called <code>dab-config</code>. </p><h3 id="the-caddy-runtime-configuration">The Caddy runtime configuration </h3><p>At first glance, configuring Caddy as a reverse proxy seems straightforward. However, there are some implications worth mentioning.</p><figure class="kg-card kg-code-card"><pre><code>dab-tls-demo-api.westeurope.azurecontainer.io {
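	# Because a hostname is used as the site address, Caddy obtains and
	# renews a Let's Encrypt certificate for it automatically.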
	reverse_proxy http://localhost:5000
}</code></pre><figcaption>Caddyfile</figcaption></figure><p>As we'll run Caddy as a sidecar to the DAB runtime, both containers need to communicate with each other. By default, DAB runs on <code>5000/TCP</code> and doesn't provide SSL. This is why the <code>reverse_proxy</code> directive is prefixed with <code>http</code>.</p><p>Also, containers within an ACI group can only communicate via localhost with each other. This contrasts with the configuration you might be familiar with when creating <code>docker-compose.yaml</code> files. There, you can reference the containers by name, which is not possible with ACI groups. </p><p>Further, the Caddy runtime (read <em>ACME client</em>) needs to be reachable via the defined domain name (<code>dab-tls-demo-api.westeurope.azurecontainer.io</code> in my example), otherwise <em>Let's Encrypt</em> won't be able to issue certificates. </p><p>🪛 Now copy the file <a href="https://github.com/matthiasguentert/data-api-builder-article/blob/main/dab-with-caddy-and-tls/Caddyfile?ref=kloudshift.net"><code>Caddyfile</code></a> to the share called <code>proxy-caddyfile</code>. </p><h3 id="the-aci-group-yaml-configuration">The ACI Group YAML configuration </h3><p>The configuration defines two containers called <code>reverse-proxy</code> and <code>data-api-builder</code>, of which only Caddy is exposed to the Internet on <code>80/TCP</code> and <code>443/TCP</code>. The instance binds to the hostname <code>dab-tls-demo-api</code>, which later will be reachable via <code>dab-tls-demo-api.westeurope.azurecontainer.io</code>. </p><p>Then, we mount the aforementioned Azure File Shares to the container and inject the database connection string as a secret environment variable into the <code>data-api-builder</code> container. </p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">name: ci-adventureworks-tls-api
apiVersion: "2021-10-01"
location: westeurope
properties:
  containers:
    - name: reverse-proxy
      properties:
        image: caddy:2.6
        ports:
          - protocol: TCP
            port: 80
          - protocol: TCP
            port: 443
        resources:
          requests:
            memoryInGB: 1
            cpu: 1
          limits:
            memoryInGB: 1
            cpu: 1
        volumeMounts:
          - name: proxy-caddyfile
            mountPath: /etc/caddy
          - name: proxy-data
            mountPath: /data
          - name: proxy-config
            mountPath: /config

    - name: data-api-builder
      properties:
        image: mcr.microsoft.com/azure-databases/data-api-builder:0.5.35
        resources:
          requests:
            memoryInGB: 1
            cpu: 1
          limits:
            memoryInGB: 1
            cpu: 1
        volumeMounts:
          - name: dab-config
            mountPath: /dab-config
        environmentVariables:
          - name: DATABASE_CONNECTION_STRING
            secureValue: "&lt;your-connection-string&gt;"
          - name: ASPNETCORE_LOGGING__CONSOLE__DISABLECOLORS
            value: "true"
        command:
          - dotnet
          - Azure.DataApiBuilder.Service.dll
          - --ConfigFileName
          - /dab-config/dab-config.json

  ipAddress:
    ports:
      - protocol: TCP
        port: 80
      - protocol: TCP
        port: 443
    type: Public        
    dnsNameLabel: dab-tls-demo-api

  osType: Linux

  volumes:
    - name: proxy-caddyfile
      azureFile: 
        shareName: proxy-caddyfile
        storageAccountName: stdabtlsdemo 
        storageAccountKey: "&lt;your-key&gt;"
    - name: proxy-data
      azureFile: 
        shareName: proxy-data
        storageAccountName: stdabtlsdemo 
        storageAccountKey: "&lt;your-key&gt;"
    - name: proxy-config
      azureFile: 
        shareName: proxy-config
        storageAccountName: stdabtlsdemo 
        storageAccountKey: "&lt;your-key&gt;"
    - name: dab-config
      azureFile: 
        shareName: dab-config
        storageAccountName: stdabtlsdemo 
        storageAccountKey: "&lt;your-key&gt;"
</code></pre><figcaption>ci-adventureworks-tls-api.yaml</figcaption></figure><p>🪛 Obviously, you need to replace the placeholders with your values. You can retrieve the storage account keys as follows...</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">az storage account keys list `
  --resource-group rg-demo `
  --account-name stdabtlsdemo `
  --query [0].value `
  --output tsv</code></pre><figcaption>Retrieve the storage account key</figcaption></figure><p>... and the connection string.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">az sql db show-connection-string `
  --client ado.net `
  --name sqldb-adventureworks `
  --server sql-azureblue `
--output tsv </code></pre><figcaption>Retrieve the connection string</figcaption></figure><blockquote>🔎 Unfortunately, for now, we can't mount subfolders of a single Azure File Share into containers, and therefore need a dedicated file share for every configuration file 😞</blockquote><h3 id="%E2%9C%85-checkpoint">✅ Checkpoint</h3><p>By now, your File Shares should look as follows. </p><figure class="kg-card kg-code-card"><pre><code>.
└── File shares/
    ├── dab-config/
    │   └── dab-config.json
    ├── proxy-caddyfile/
    │   └── Caddyfile
    ├── proxy-config/
    │   └── (empty)
    └── proxy-data/
        └── (empty)</code></pre><figcaption>File Share structure</figcaption></figure><h3 id="azure-container-instance-group">Azure Container Instance Group</h3><p>Now that you have replaced the values we can finally fire up the container group.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">az container create `
  --resource-group rg-demo `
--file ci-adventureworks-tls-api.yaml</code></pre><figcaption>Create the Azure Container Instance Group</figcaption></figure><h2 id="testing-it-out">Testing it out</h2><p>Give the containers some time to start and then verify that TLS 1.3 is properly set up for our API. The easiest way is to use a browser, so go ahead and paste the URL <code>https://dab-tls-demo-api.westeurope.azurecontainer.io/api/product/ProductID/680</code> into your favorite browser...</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/k0xUttH.png" class="kg-image" alt="" loading="lazy"></figure><p>Success!!! 🚀🚀🚀 As depicted in the screenshot, the certificate was issued by <em>Let's Encrypt</em>, and our data in transit is secured from prying eyes. 💪🏼</p><h2 id="considerations">Considerations </h2><ul><li>For production environments, you'd usually create a <code>CNAME</code> record, e.g. <code>api.mydomain.io</code>, which points to <code>dab-tls-demo-api.westeurope.azurecontainer.io</code>. If you do so, remember to use this CNAME record in your Caddyfile, and not the DNS label managed by ACI!</li><li>Also, you'd usually build your own Docker image, instead of mounting the Caddyfile from an Azure Storage Account.</li></ul><h2 id="conclusion">Conclusion </h2><p>With the help of the Data API Builder and Caddy, we have provisioned a TLS-secured, codeless REST API ready to serve our requirements!</p><p>Both key components, DAB and Caddy, have helped us to keep development time and maintenance low. 🚀</p><p>Again, that was fun! 😎 Stay tuned for more articles! 
</p><h2 id="further-reading">Further reading</h2><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://hub.docker.com/_/caddy?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Docker</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://hub.docker.com/favicon.ico" alt=""></div></div></a><figcaption>Official Caddy Docker Image</figcaption></figure><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://caddyserver.com/docs/?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Welcome - Caddy Documentation</div><div class="kg-bookmark-description">Caddy is a powerful, enterprise-ready, open source web server with automatic HTTPS written in Go</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://caddyserver.com/resources/images/favicon.png" alt=""><span class="kg-bookmark-publisher">Caddy Web Server</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://caddyserver.com/resources/images/caddy-open-graph.jpg" alt="" onerror="this.style.display = 'none'"></div></a><figcaption>Caddy Documentation</figcaption></figure><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://learn.microsoft.com/en-us/azure/container-instances/container-instances-reference-yaml?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">YAML reference for container group - Azure Container Instances</div><div class="kg-bookmark-description">Reference for the YAML file supported by Azure Container Instances to configure a container group</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://learn.microsoft.com/favicon.ico" alt=""><span class="kg-bookmark-author">Microsoft Learn</span><span 
class="kg-bookmark-publisher">tomvcassidy</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://learn.microsoft.com/en-us/media/logos/logo-ms-social.png" alt="" onerror="this.style.display = 'none'"></div></a><figcaption>Azure Container Instances YAML Reference</figcaption></figure><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://github.com/matthiasguentert/data-api-builder-article/tree/main/dab-with-caddy-and-tls?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">data-api-builder-article/dab-with-caddy-and-tls at main · matthiasguentert/data-api-builder-article</div><div class="kg-bookmark-description">Contribute to matthiasguentert/data-api-builder-article development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">matthiasguentert</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/58fff0bcd1ed49ecfa2c78bbe23bef743263da6bef29868b748b95d5ffb35e56/matthiasguentert/data-api-builder-article" alt="" onerror="this.style.display = 'none'"></div></a><figcaption>Supplement repository with example files</figcaption></figure>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[How AKS authentication integrates &amp; works with Entra ID]]></title>
                    <description><![CDATA[Introduction

In production &amp; enterprise-grade setups, an AKS cluster is usually configured to authenticate users against Azure Entra ID and perform authorization decisions based on the Kubernetes RBAC model.

This makes sense since Entra ID is usually the central identity provider, integrated with on-premises Active Directory, while the Kubernetes API still]]></description>
                    <link>https://kloudshift.net/blog/how-aks-authentication-integrates-and-works-with-microsoft-entra-id/</link>
                    <guid isPermaLink="false">68c189e4dfb58800015958c4</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Wed, 10 Sep 2025 16:23:32 +0200</pubDate>

                        <media:content url="https://matthiasguentert.net/content/images/2024/01/aks-entra-id-integration-oidc.png" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://matthiasguentert.net/content/images/2024/01/aks-entra-id-integration-oidc.png" alt="How AKS authentication integrates &amp; works with Entra ID"/> <h2 id="introduction">Introduction </h2><p>In production &amp; enterprise-grade setups, an AKS cluster is usually configured to authenticate users against Azure Entra ID and perform authorization decisions based on the Kubernetes RBAC model.</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/UdTSbnw.png" class="kg-image" alt="" loading="lazy" width="755" height="230"></figure><p>This makes sense since Entra ID is usually the central identity provider, integrated with on-premises Active Directory, while the Kubernetes API still manages authorization decisions.</p><p>But have you ever wondered how the Azure Entra ID integration works and why the additional helper binary called <code>kubelogin</code> is required?</p><p>In this post, we'll look at Kubernetes and its authentication mechanism and see how it's integrated with Entra ID. Let's get started!</p><h2 id="how-kubernetes-authentication-works-without-entra-id">How Kubernetes authentication works without Entra ID</h2><p>As you might know, Kubernetes has no objects representing normal user accounts, the way it does for service accounts. </p><p>Instead, any HTTP request hitting the Kubernetes API presenting a valid certificate signed by the cluster's CA is considered authenticated. </p><p>In this case, the user's identity is derived from the common name field in the <code>subject</code> of the certificate.</p><figure class="kg-card kg-code-card"><pre><code>Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1234
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = ca
        Validity
            Not Before: Jan 10 19:41:52 2024 GMT
            Not After : Jan 10 19:51:52 2026 GMT
        Subject: O = system:masters, CN = masterclient
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (4096 bit)
                Modulus: ...
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                keyid:...            
    Signature Algorithm: sha256WithRSAEncryption
         ...</code></pre><figcaption><p><span style="white-space: pre-wrap;">A user certificate for the masterclient identity</span></p></figcaption></figure><p>After authentication, the Kubernetes RBAC sub-system takes care of authorization decisions and passes the request on to the admission controllers.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://i.imgur.com/JDWPPgJ.png" class="kg-image" alt="" loading="lazy" width="1345" height="426"><figcaption><span style="white-space: pre-wrap;">Kubernetes request processing</span></figcaption></figure><p>Running <code>kubectl auth whoami</code> will return the following:</p><pre><code class="language-bash">ATTRIBUTE   VALUE
Username    masterclient
Groups      [system:masters system:authenticated]</code></pre><p>The user certificate depicted above was taken from the kubeconfig file and decoded. The Azure CLI pulls and installs it, along with a private key and the CA certificate, when you execute <code>az aks get-credentials</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">apiVersion: v1
clusters:
    - cluster:
        certificate-authority-data: &lt;base64-encoded-ca-certificate&gt;
        server: https://your-aks-cluster-dns-yd6y5kxt.hcp.switzerlandnorth.azmk8s.io:443
      name: your-aks-cluster
contexts:
    - context:
        cluster: your-aks-cluster
        user: clusterUser_your-resource-group_your-aks-cluster
      name: your-aks-cluster
current-context: your-aks-cluster
kind: Config
preferences: {}
users:
    - name: clusterUser_your-resource-group_your-aks-cluster
      user:
        client-certificate-data: &lt;base64-encoded-user-certificate&gt;
        client-key-data: &lt;base64-encoded-user-key&gt;
        token: &lt;token&gt;
</code></pre><figcaption><p><span style="white-space: pre-wrap;">.kube/config</span></p></figcaption></figure><p>The <code>certificate-authority-data</code> key holds the cluster certificate, while the <code>client-certificate-data</code> and <code>client-key-data</code> keys hold the user's certificate and corresponding private key.</p><p>Okay, this is all nice and swell 🧐, but how do Entra ID and kubelogin fit in here?</p><h2 id="the-thing-with-kubectl-and-kubelogin">The thing with kubectl and kubelogin</h2><p>Regarding authentication and authorization, OAuth 2.0 and OpenID Connect are the go-to protocols and de facto standards on the Internet.</p><p>However, these protocols are not natively understood by <code>kubectl</code>, which can only work with certificates and bearer tokens. </p><p>To be more precise, kubectl can <em>attach</em> a bearer token to its requests to the Kubernetes API, but it can't <em>fetch</em> or <em>refresh</em> any bearer tokens.</p><blockquote>⚠️ Please be aware that there are three projects on GitHub with the name <code>kubelogin</code> 🤯 This article refers to Microsoft's implementation, found <a href="https://github.com/Azure/kubelogin?ref=kloudshift.net">here</a>.</blockquote><p>This is why the kubelogin binary is required: it executes the OAuth flows and passes the retrieved tokens on to <code>kubectl</code>. The following diagram gives a high-level overview of the process. </p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/NG0j6Pc.png" class="kg-image" alt="" loading="lazy" width="2011" height="618"></figure><p>After executing <code>az aks get-credentials</code> on an Entra ID-integrated cluster, your kubeconfig will carry a key called <code>exec</code> in a <code>user</code> object. This entry defines which client-go credential plugin to call and which arguments should be used.</p><pre><code class="language-yaml">apiVersion: v1
clusters: ...
contexts: ...
current-context: your-aks-cluster
kind: Config
preferences: {}
users:
    - name: clusterUser_your-resource-group_your-aks-cluster
      user:
        exec:
            apiVersion: client.authentication.k8s.io/v1beta1
            args:
                - get-token
                - --environment
                - AzurePublicCloud
                - --server-id
                - 6dae42f8-4368-4678-94ff-3960e28e3630
                - --client-id
                - 80faf920-1908-4b52-b5ef-a8e7bedfc67a
                - --tenant-id
                - &lt;your-tenant-id&gt;
                - --login
                - interactive
            command: kubelogin
            env: null
            installHint: ...
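            # When kubectl runs this exec command, it expects an ExecCredential
            # JSON object on stdout (the client-go credential plugin protocol),
            # shaped roughly like this (shape only, token elided):
            #   {"apiVersion": "client.authentication.k8s.io/v1beta1",
            #    "kind": "ExecCredential",
            #    "status": {"token": "...", "expirationTimestamp": "..."}}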
            provideClusterInfo: false</code></pre><p>From the example above, the entire <code>kubelogin</code> command, including its arguments, looks as follows.</p><pre><code class="language-bash">kubelogin get-token \
  --environment AzurePublicCloud \
  --server-id 6dae42f8-4368-4678-94ff-3960e28e3630 \
  --client-id 80faf920-1908-4b52-b5ef-a8e7bedfc67a \
  --tenant-id &lt;your-tenant-id&gt; \
--login interactive</code></pre><p>These two Entra ID app registrations, the client and server apps, are managed by the AKS resource provider for you and configured when you execute:</p><pre><code class="language-bash">az aks create -g &lt;group&gt; -n &lt;cluster&gt; --enable-aad --aad-admin-group-object-ids &lt;id&gt; [--aad-tenant-id &lt;id&gt;]</code></pre><p>Okay, so <code>kubelogin</code> covers the client-side functionality of the authentication process. But how do things work on the Kubernetes API side, and what does the big picture look like? </p><h2 id="kubelogin-entra-id-openid-connect">Kubelogin, Entra ID &amp; OpenID Connect</h2><p>Before we get to the big picture, let's have a look at the following diagram, which depicts how the components relate to each other in OAuth 2.0 terminology.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://i.imgur.com/aeAf9yH.png" class="kg-image" alt="" loading="lazy" width="902" height="676"><figcaption><span style="white-space: pre-wrap;">AKS &amp; Azure Entra ID OAuth 2.0 relationship</span></figcaption></figure><p>The user (or resource owner) delegates the right to access its identifying data to the client, kubectl, and kubelogin. </p><p>Further, the client must be registered with Entra ID (the authorization server), which authorizes the token requests.</p><p>Last but not least, AKS, the resource server, has a trust relationship with Azure Entra ID. 
This is required so that AKS can validate the received bearer tokens against Entra ID.</p><h2 id="the-entire-picture">The entire picture</h2><p>Now that we better understand the components, we can merge the diagrams and add more details.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://i.imgur.com/9Jo5NCS.png" class="kg-image" alt="" loading="lazy" width="1451" height="711"><figcaption><span style="white-space: pre-wrap;">The entire picture</span></figcaption></figure><p>Let's walk through each of the steps.</p><h3 id="steps-1-2">Steps 1, 2</h3><p>The user executes <code>kubectl</code>, which in turn invokes <code>kubelogin</code> with the <code>server-id</code>, <code>client-id</code> and <code>tenant-id</code>.</p><h3 id="steps-37">Steps 3 - 7</h3><p>The browser opens, and the user is asked to authenticate. The credentials are verified, and an authorization code is requested by communicating with Entra ID's authorization endpoint; the code is then passed back to <code>kubelogin</code>. </p><h3 id="steps-8-9-10">Steps 8, 9, 10</h3><p>Kubelogin exchanges the received authorization code for a bearer token by communicating with the token endpoint. The token is then returned to <code>kubectl</code>, which attaches it to the <em>user's</em> HTTP request to the Kubernetes API. </p><pre><code>GET /api/v1/namespaces/test/pods
Authorization: Bearer eyJ0eXAiOiJKV7QiLCJhbGciOiJSUzI1N...</code></pre><h3 id="step-11">Step 11</h3><p>At this point, the Kubernetes API must check that the token signature is valid. </p><p>Therefore, it discovers the JWKS endpoint by using a well-known URL and fetches the public key published at the <code>jwks_uri</code> key.</p><pre><code>https://sts.windows.net/&lt;tenant-id&gt;/.well-known/openid-configuration</code></pre><p>Then, it decodes the token and validates that...</p><ul><li>... the intended recipient matches (<code>aud</code> audience claim)</li><li>... the token time window is valid (<code>exp</code> expiration time, <code>nbf</code> not before)</li><li>... the client ID matches (<code>appid</code>)</li></ul><h3 id="step-12">Step 12 </h3><p>After all authentication conditions are met, the user is considered authenticated, and the request is passed on to the kube-apiserver's authorization stage...</p><h2 id="conclusion">Conclusion</h2><p>Let's wrap up 📑</p><ul><li>Kubernetes has no user objects, and they can't be created by the API.</li><li>The authentication mechanism uses OpenID Connect.</li><li><code>kubectl</code> does not implement any OAuth flows.</li><li>A client-go exec plugin called <code>kubelogin</code> is required, which implements OAuth.</li><li>There are three different <code>kubelogin</code> projects on GitHub; don't get confused.</li><li>The client and server IDs are static and reused across multiple (all?) AKS deployments that are Entra ID-integrated.</li><li>The client and server app registrations in Entra ID are managed for you by Microsoft.</li></ul><p>That's it for today. I hope you enjoyed reading this article. Always happy to receive feedback! 
Happy hacking 👨🏽‍💻🤓</p><h2 id="further-reading">Further reading</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://learn.microsoft.com/en-us/azure/aks/enable-authentication-microsoft-entra-id?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Enable managed identity authentication on Azure Kubernetes Service - Azure Kubernetes Service</div><div class="kg-bookmark-description">Learn how to enable Microsoft Entra ID on Azure Kubernetes Service with kubelogin and authenticate Azure users with credentials or managed roles.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://learn.microsoft.com/favicon.ico" alt=""><span class="kg-bookmark-author">Microsoft Learn</span><span class="kg-bookmark-publisher">MGoedtel</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://learn.microsoft.com/en-us/media/open-graph-image.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/kubernetes/client-go?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - kubernetes/client-go: Go client for Kubernetes.</div><div class="kg-bookmark-description">Go client for Kubernetes. 
Contribute to kubernetes/client-go development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/assets/pinned-octocat-093da3e6fa40.svg" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">kubernetes</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/ebf1c2201e226f2d1464023d38f3e8c4ff7bfd055b2267036dc5fbc390feab9f/kubernetes/client-go" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://azure.github.io/kubelogin/index.html?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Introduction - Azure Kubelogin</div><div class="kg-bookmark-description">A Kubernetes credential (exec) plugin implementing azure authentication</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://azure.github.io/kubelogin/favicon.svg" alt=""><span class="kg-bookmark-author">Azure Kubelogin</span></div></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Authenticating</div><div class="kg-bookmark-description">This page provides an overview of authentication.
Users in Kubernetes All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users.
It is assumed that a cluster-independent service manages normal users in the following ways:
an administrator distribu…</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://kubernetes.io/favicons/apple-touch-icon-180x180.png" alt=""><span class="kg-bookmark-author">Kubernetes</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://kubernetes.io/images/kubernetes-horizontal-color.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://learn.microsoft.com/en-us/entra/identity-platform/v2-protocols-oidc?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">OpenID Connect (OIDC) on the Microsoft identity platform - Microsoft identity platform</div><div class="kg-bookmark-description">Sign in Microsoft Entra users by using the Microsoft identity platform’s implementation of the OpenID Connect extension to OAuth 2.0.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://learn.microsoft.com/favicon.ico" alt=""><span class="kg-bookmark-author">Microsoft Learn</span><span class="kg-bookmark-publisher">OwenRichards1</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://learn.microsoft.com/en-us/media/open-graph-image.png" alt="" onerror="this.style.display = 'none'"></div></a></figure>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[ASP.NET Core Integration Tests with Test Containers &amp; Postgres]]></title>
                    <description><![CDATA[Introduction

In this post, I will demonstrate how test containers can be leveraged for proper DAL integration testing of ASP.NET Core, EF Core, and Postgres.

I will outline why you will want to use it over other common integration testing scenarios and demonstrate how it can be used together]]></description>
                    <link>https://kloudshift.net/blog/asp-net-core-integration-tests-with-test-containers-and-postgres/</link>
                    <guid isPermaLink="false">68c189e4dfb58800015958c2</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Wed, 10 Sep 2025 16:23:32 +0200</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1600132806370-bf17e65e942f?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;MnwxMTc3M3wwfDF8c2VhcmNofDIyfHx0ZXN0aW5nfGVufDB8fHx8MTY1MzQyNjQwNQ&amp;ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1600132806370-bf17e65e942f?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;MnwxMTc3M3wwfDF8c2VhcmNofDIyfHx0ZXN0aW5nfGVufDB8fHx8MTY1MzQyNjQwNQ&amp;ixlib&#x3D;rb-1.2.1&amp;q&#x3D;80&amp;w&#x3D;2000" alt="ASP.NET Core Integration Tests with Test Containers &amp; Postgres"/> <h2 id="introduction">Introduction</h2><p>In this post, I will demonstrate how test containers can be leveraged for proper DAL integration testing of ASP.NET Core, EF Core, and Postgres. </p><p>I will outline why you will want to use it over other common integration testing scenarios and demonstrate how it can be used together with the <code>WebApplicationFactory</code> to fully run your ASP.NET Core application in memory and create a reusable fixture for your testbed. </p><h2 id="test-containers-dal-testing-scenarios">Test Containers &amp; DAL testing scenarios</h2><p>If you are a Spring Boot/Java developer, you might have heard of a library called <a href="https://www.testcontainers.org/?ref=kloudshift.net">testcontainers</a>. The Java library provides <em>"... lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container." </em></p><p>For this post, I will use its C# counterpart, called .NET Testcontainers. This <a href="https://www.nuget.org/packages/Testcontainers/?ref=kloudshift.net">NuGet package</a> follows the idea of its Java predecessor and provides throwaway Docker instances for testing purposes. 
</p><p>It is built on top of the .NET Docker remote API and comes with a couple of pre-configured modules, e.g., Postgres, Microsoft SQL Server, MySQL, Redis, RabbitMQ, and a couple more.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/testcontainers/testcontainers-dotnet?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - testcontainers/testcontainers-dotnet: A library to support tests with throwaway instances of Docker containers for all compatible .NET Standard versions.</div><div class="kg-bookmark-description">A library to support tests with throwaway instances of Docker containers for all compatible .NET Standard versions. - GitHub - testcontainers/testcontainers-dotnet: A library to support tests with ...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">testcontainers</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/d5b19a09c8c7d795681dabba4c3b33cd4f23d24a20d555234c131c18d615b52b/testcontainers/testcontainers-dotnet" alt="" onerror="this.style.display = 'none'"></div></a></figure><h3 id="ef-core-testing-strategies-test-containers">EF Core Testing Strategies &amp; Test Containers</h3><p>You can follow two paths when choosing an EF Core testing strategy. You either use a <strong>test double</strong> or run your tests against a <strong>production database</strong>. 
</p><p>There are <a href="https://docs.microsoft.com/en-us/ef/core/testing/choosing-a-testing-strategy?ref=kloudshift.net#different-types-of-test-doubles">different kinds of test doubles</a> you can choose from, which are: </p><ul><li>SQLite (in-memory mode)</li><li>EF Core in-memory provider</li><li>Mock/stub the <code>DBContext</code> and <code>DBSet</code></li><li>Introduce a repository layer between EF Core and your application code and mock or stub that layer.</li></ul><p>These strategies have pros and cons, which I will not fully elaborate on here. </p><p>However, they all share one important drawback: The test doubles do not behave exactly like your production database. Let me name a few important points:</p><ul><li>The same LINQ query may return different results on different providers due to differences in case sensitivity</li><li>Provider-specific methods cannot be tested</li><li>Limited testing of referential integrity</li><li>Limited raw SQL support</li></ul><p>So, depending on the complexity of your application, these difficulties will sooner or later result in <em>false-positive </em>test results (functionality is broken, but the test passes) or will leave some functionality untested.</p><p>This inevitably leads to the point where you want to test against a <strong>production database</strong>. However, involving a production database also has its hurdles. </p><p>First, you must set up an RDBMS on your developer machine and a build server (and maintain it). </p><p>Second, since the database is a shared dependency on the testing code, special effort is required to manage test database instances and their states.</p><blockquote>A shared dependency is a dependency that is shared between tests and provides means for those tests to affect each other’s outcome. - (Khorikov, 2020, p.28)</blockquote><h3 id="enter-test-containers">Enter test containers</h3><p>Using ephemeral containers relieves you of both of the aforementioned burdens. 
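</p><p>Before we get to the C# setup, here is a toy, language-agnostic illustration of the isolation property we are after. It is <em>not</em> a test-double recommendation; it merely uses Python's built-in <code>sqlite3</code> as a stand-in for "a database instance" to show why a fresh, throwaway instance per test yields deterministic state, while a shared instance does not.</p>

```python
import sqlite3

def add_user(conn, name):
    conn.execute("INSERT INTO users(name) VALUES (?)", (name,))

def count_users(conn):
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def fresh_db():
    # Stand-in for "start a throwaway container": a brand-new, empty instance.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users(name TEXT)")
    return conn

# Shared instance: each "test" observes the previous test's writes,
# so its outcome depends on execution order.
shared = fresh_db()
add_user(shared, "alice")            # test 1
assert count_users(shared) == 1
add_user(shared, "bob")              # test 2 inherits test 1's state
assert count_users(shared) == 2      # order-dependent result

# Throwaway instance per test: every test starts from a known state.
for name in ("alice", "bob"):
    conn = fresh_db()
    add_user(conn, name)
    assert count_users(conn) == 1    # deterministic, order-independent
    conn.close()
```

<p>The .NET Testcontainers setup shown next gives you the same property with a real Postgres instance instead of a stand-in.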
</p><p>First, there is no need for a complex RDBMS setup, and second, your tests will always start with a known state since each test can use a fresh container.</p><p>Using test containers instead of a fully-fledged RDBMS installation makes the database an out-of-process dependency since tests no longer work with the same instance. </p><blockquote>An out-of-process dependency is a dependency that runs outside the application’s execution process; it’s a proxy to data that is not yet in the memory. - (Khorikov, 2020, p.28)</blockquote><p>Last, your integration tests benefit from the full feature set of the involved RDBMS.</p><p>This is what a basic test setup looks like. It uses xUnit's <code>IAsyncLifetime</code> interface to ensure the container is ready to serve requests before the test runs. </p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">public sealed class PostgreSqlTest : IAsyncLifetime
{
    private readonly PostgreSqlContainer _postgreSqlContainer = new PostgreSqlBuilder()
        .WithImage("postgres:14.7")
        .WithDatabase("db")
        .WithUsername("postgres")
        .WithPassword("postgres")
        .WithCleanUp(true)
        .Build(); 
    
    [Fact]
    public void ExecuteCommand()
    {
        using var connection = new NpgsqlConnection(_postgreSqlContainer.GetConnectionString());
        using var command = new NpgsqlCommand();
        connection.Open();
        command.Connection = connection;
        command.CommandText = "SELECT 1";
        command.ExecuteReader();
    }

    public Task InitializeAsync()
    {
        return _postgreSqlContainer.StartAsync();
    }

    public Task DisposeAsync()
    {
        return _postgreSqlContainer.DisposeAsync().AsTask();
    }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">PostgreSqlTest.cs</span></p></figcaption></figure><p>You'll need to add the Testcontainers base package and the Postgres module NuGet package to your xUnit project.</p><pre><code class="language-bash">dotnet add package Testcontainers
dotnet add package Testcontainers.PostgreSql</code></pre><p>Now that the stage is set, let's move on and introduce ASP.NET Core's <code>WebApplicationFactory</code> before we put everything to work in the last chapter. </p><h2 id="aspnet-core-webapplicationfactory">ASP.NET Core &amp; WebApplicationFactory</h2><p>The <code>WebApplicationFactory</code> is a class that allows running an in-memory version of your real application by using a test web host and a test web server. </p><p>The <code>Microsoft.AspNetCore.Mvc.Testing</code> NuGet package provides this type, and the factory uses your application's real configuration, DI service registrations, and middleware pipeline.</p><p>Here is a basic integration test making use of the <code>WebApplicationFactory</code> together with <a href="https://xunit.net/docs/shared-context?ref=kloudshift.net#class-fixture">xUnit's</a> <code>IClassFixture</code> interface, which is a marker interface. It tells xUnit to build an instance of <code>T</code> before building the test class and inject the instance into the test class's constructor.</p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">public class IntegrationTest : IClassFixture&lt;WebApplicationFactory&lt;Program&gt;&gt;
{
    private readonly WebApplicationFactory&lt;Program&gt; _factory;

    public IntegrationTest(WebApplicationFactory&lt;Program&gt; factory)
    {
        _factory = factory;
    }

    [Fact]
    public async Task Should_return_weather_forecast_on_http_get()
    {
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/WeatherForecast");

        response.EnsureSuccessStatusCode();
    }
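
    // A second, hypothetical example (not part of the original article): besides the
    // status code, the same client can assert on response headers. It assumes the
    // default WeatherForecast controller template, which returns JSON.
    [Fact]
    public async Task Should_return_json_on_http_get()
    {
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/WeatherForecast");

        Assert.Equal("application/json", response.Content.Headers.ContentType?.MediaType);
    }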
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">IntegrationTest.cs</span></p></figcaption></figure><p>To make this test work, you'll have to add a reference from your test project to your ASP.NET Core project and add <code>public partial class Program {}</code> to it. </p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();

app.Run();

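// Top-level statements compile into a compiler-generated, internal Program class.
// This empty public partial declaration makes the type publicly visible, so the
// test project can use it as the generic argument of WebApplicationFactory&lt;Program&gt;.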
public partial class Program { }</code></pre><figcaption><p><span style="white-space: pre-wrap;">Program.cs</span></p></figcaption></figure><p>By running the real application in memory, you can keep as much distance as possible between your tests and your application's inner workings, which eases testing the observable behavior. This approach reduces test fragility by focusing on the <em>whats </em>instead of the <em>hows</em>.</p><h3 id="custom-webapplicationfactory-dependencies">Custom WebApplicationFactory &amp; dependencies</h3><p>Now let's see how we can create a custom <code>WebApplicationFactory</code> and how to replace dependencies. </p><p>As mentioned at the beginning of this section, the factory allows running the application in memory just as it would in production. This implies that EF Core will also connect to your <strong>production </strong>database if you don't replace this shared dependency (<code>DbContext</code>) with one pointing to your test container. </p><p>Following the simple example above, we would have to replace this dependency for each and every integration test. Instead, we will create a custom factory. This is as simple as inheriting from <code>WebApplicationFactory</code>. </p><p>Next, we will remove the database context from the DI container, register a new one pointing to the test container, and make sure the database schema gets properly initialized by calling <code>context.Database.EnsureCreated()</code>. </p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">public class CustomFactory : WebApplicationFactory&lt;Program&gt;
{
    // Gives a fixture an opportunity to configure the application before it gets built.
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureTestServices(services =&gt;
        {
            // Remove AppDbContext
            var descriptor = services.SingleOrDefault(d =&gt; d.ServiceType == typeof(DbContextOptions&lt;AppDbContext&gt;));
            if (descriptor != null) services.Remove(descriptor);
            
            // Add DB context pointing to test container
            services.AddDbContext&lt;AppDbContext&gt;(options =&gt; { options.UseNpgsql("the new connection string"); });
            
            // Ensure schema gets created
            var serviceProvider = services.BuildServiceProvider();

            using var scope = serviceProvider.CreateScope();
            var scopedServices = scope.ServiceProvider;
            var context = scopedServices.GetRequiredService&lt;AppDbContext&gt;();
            context.Database.EnsureCreated();
        });
    }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">CustomFactory.cs</span></p></figcaption></figure><p>This is not the most beautiful code in the world... let's move the removal and schema-creation logic into extension methods. </p><pre><code class="language-csharp">public static class ServiceCollectionExtensions
{
    public static void RemoveDbContext&lt;T&gt;(this IServiceCollection services) where T : DbContext
    {
        var descriptor = services.SingleOrDefault(d =&gt; d.ServiceType == typeof(DbContextOptions&lt;T&gt;));
        if (descriptor != null) services.Remove(descriptor);
    }

    public static void EnsureDbCreated&lt;T&gt;(this IServiceCollection services) where T : DbContext
    {
        var serviceProvider = services.BuildServiceProvider();

        using var scope = serviceProvider.CreateScope();
        var scopedServices = scope.ServiceProvider;
        var context = scopedServices.GetRequiredService&lt;T&gt;();
        context.Database.EnsureCreated();
    }
}</code></pre><p>This results in a much cleaner factory class. </p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">public class CustomFactory : WebApplicationFactory&lt;Program&gt;
{
    // Gives a fixture an opportunity to configure the application before it gets built.
    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureTestServices(services =&gt;
        {
            // Remove AppDbContext
            services.RemoveDbContext&lt;AppDbContext&gt;();
            
            // Add DB context pointing to test container
            services.AddDbContext&lt;AppDbContext&gt;(options =&gt; { options.UseNpgsql("the new connection string"); });
            
            // Ensure schema gets created
            services.EnsureDbCreated&lt;AppDbContext&gt;();
        });
    }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">CustomFactory.cs</span></p></figcaption></figure><p>Pay close attention to call <code>builder.ConfigureTestServices()</code> and not <code>builder.ConfigureServices()</code> when testing an ASP.NET Core application prior to version 6.</p><p>The latter method executes <strong>before</strong> the <code>WebApplicationFactory</code> calls <code>Startup.ConfigureServices()</code>, which means your production DI registration code will override your changes, and you might end up testing against your production database! </p><p><strong>☝🏻Order of execution </strong>💣</p><ol><li><code>builder.ConfigureServices()</code> inside your <code>WebApplicationFactory</code></li><li><code>Startup.ConfigureServices()</code> from your application code</li><li><code>builder.ConfigureTestServices()</code> inside <code>WebApplicationFactory</code> </li></ol><h2 id="putting-everything-to-work">Putting everything to work</h2><p>The only thing left is to merge everything together. I have introduced generics to make it reusable across different projects in a solution. </p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">public class IntegrationTestFactory&lt;TProgram, TDbContext&gt; : WebApplicationFactory&lt;TProgram&gt;, IAsyncLifetime
    where TProgram : class where TDbContext : DbContext
{
    // Uses the Testcontainers.PostgreSql module API (PostgreSqlBuilder), matching the setup shown earlier.
    private readonly PostgreSqlContainer _container = new PostgreSqlBuilder()
        .WithImage("postgres:11")
        .WithDatabase("test_db")
        .WithUsername("postgres")
        .WithPassword("postgres")
        .WithCleanUp(true)
        .Build();

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.ConfigureTestServices(services =&gt;
        {
            services.RemoveDbContext&lt;TDbContext&gt;();
            services.AddDbContext&lt;TDbContext&gt;(options =&gt; { options.UseNpgsql(_container.GetConnectionString()); });
            services.EnsureDbCreated&lt;TDbContext&gt;();
        });
    }

    public async Task InitializeAsync() =&gt; await _container.StartAsync();

    public new async Task DisposeAsync() =&gt; await _container.DisposeAsync();
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">IntegrationTestFactory.cs</span></p></figcaption></figure><p>And here is a basic test making use of the custom factory. </p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">public class Tests : IClassFixture&lt;IntegrationTestFactory&lt;Program, AppDbContext&gt;&gt;
{
    private readonly IntegrationTestFactory&lt;Program, AppDbContext&gt; _factory;

    public Tests(IntegrationTestFactory&lt;Program, AppDbContext&gt; factory) =&gt; _factory = factory;

    [Fact]
    public async Task Foo()
    {
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/weatherforecast");
        
        response.EnsureSuccessStatusCode();
    }
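
    // A hypothetical follow-up test (not part of the original article): the factory
    // exposes its DI container via Services, so a test can resolve the AppDbContext
    // directly and verify connectivity to the containerised PostgreSQL instance.
    [Fact]
    public async Task Can_connect_to_database()
    {
        using var scope = _factory.Services.CreateScope();
        var context = scope.ServiceProvider.GetRequiredService&lt;AppDbContext&gt;();

        Assert.True(await context.Database.CanConnectAsync());
    }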
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">IntegrationTest.cs</span></p></figcaption></figure><h2 id="final-thoughts">Final thoughts</h2><p>This solution is nice because it balances resistance to refactoring, protection against regressions, and fast feedback (see Khorikov, 2020, p. 88).</p><p>Last but not least, the tests run on GitHub-hosted runners without any further modification of the virtual environments 🤓</p><h2 id="further-reading">Further reading</h2><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://docs.microsoft.com/en-us/aspnet/core/test/integration-tests?view=aspnetcore-6.0&ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Integration tests in ASP.NET Core</div><div class="kg-bookmark-description">Learn how integration tests ensure that an app’s components function correctly at the infrastructure level, including the database, file system, and network.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.microsoft.com/favicon.ico" alt=""><span class="kg-bookmark-author">Microsoft Docs</span><span class="kg-bookmark-publisher">Rick-Anderson</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.microsoft.com/en-us/media/logos/logo-ms-social.png" alt="" onerror="this.style.display = 'none'"></div></a><figcaption><p><span style="white-space: pre-wrap;">Integration tests in ASP.NET Core</span></p></figcaption></figure><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://andrewlock.net/converting-integration-tests-to-net-core-3/?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Converting integration tests to .NET Core 3.0: Upgrading to ASP.NET Core 3.0 - Part 5</div><div class="kg-bookmark-description">In this post I discuss the changes required to upgrade integration tests that use WebApplicationFactory or
TestServer to ASP.NET Core 3.0.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://andrewlock.net/apple-touch-icon.png?v=QEMBRv9w7P" alt=""><span class="kg-bookmark-author">Andrew Lock | .NET Escapades</span><span class="kg-bookmark-publisher">Andrew Lock</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://andrewlock.net/content/images/2019/exam.jpg" alt="" onerror="this.style.display = 'none'"></div></a><figcaption><p><span style="white-space: pre-wrap;">How to use IOutputHelper in a custom WebApplicationFactory</span></p></figcaption></figure><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">virtual-environments/Ubuntu2004-Readme.md at main · actions/virtual-environments</div><div class="kg-bookmark-description">GitHub Actions virtual environments. 
Contribute to actions/virtual-environments development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">actions</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/e0ebcf64b2e83d6c53a9fb53052797ec7cc9f8ada3afd16b942d8df55664e630/actions/virtual-environments" alt="" onerror="this.style.display = 'none'"></div></a><figcaption><p><span style="white-space: pre-wrap;">List of available packages on GitHub-hosted runners</span></p></figcaption></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://dotnet.testcontainers.org/?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Testcontainers for .NET</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://dotnet.testcontainers.org/favicon.ico" alt=""><span class="kg-bookmark-author">logo</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://dotnet.testcontainers.org/banner.png" alt="" onerror="this.style.display = 'none'"></div></a></figure>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[How to use Auth0 with ASP.NET Core 9 Blazor Server]]></title>
                    <description><![CDATA[Introduction

Currently, I am evaluating different Identity and Access Management solutions as an alternative to ASP.NET Core Identity. Besides FusionAuth I wanted to test-drive Auth0 together with ASP.NET Core Blazor.

Although Auth0 provides rich documentation, no easy-to-follow example was available for ASP.NET Core 9.

This article fills]]></description>
                    <link>https://kloudshift.net/blog/how-to-use-auth0-with-asp-net-core-9-blazor-server/</link>
                    <guid isPermaLink="false">68c189e3dfb58800015958c1</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Wed, 10 Sep 2025 16:23:32 +0200</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1615130104765-c140bd3c2c45?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDIzfHxhdXRoZW50aWNhdGlvbnxlbnwwfHx8fDE3NDY2MDQzNTh8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1615130104765-c140bd3c2c45?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDIzfHxhdXRoZW50aWNhdGlvbnxlbnwwfHx8fDE3NDY2MDQzNTh8MA&amp;ixlib&#x3D;rb-4.1.0&amp;q&#x3D;80&amp;w&#x3D;2000" alt="How to use Auth0 with ASP.NET Core 9 Blazor Server"/> <h2 id="introduction">Introduction </h2><p>Currently, I am evaluating different Identity and Access Management solutions as an alternative to ASP.NET Core Identity. Besides <code>FusionAuth</code> I wanted to test-drive <code>Auth0</code> together with ASP.NET Core Blazor. </p><p>Although Auth0 provides rich documentation, no easy-to-follow example was available for ASP.NET Core 9. </p><p>This article fills this gap and runs you through the required steps to get started with Auth0. </p><h2 id="tldr">TL;DR?</h2><p>Here is the direct link to my demo repository</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/matthiasguentert/auth0-blazor-server-net9?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - matthiasguentert/auth0-blazor-server-net9</div><div class="kg-bookmark-description">Contribute to matthiasguentert/auth0-blazor-server-net9 development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/pinned-octocat-093da3e6fa40-1.svg" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">matthiasguentert</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/auth0-blazor-server-net9" alt="" onerror="this.style.display = 'none'"></div></a></figure><h2 id="configure-auth0">Configure Auth0</h2><p>First, we need to create a new application. 
For a Blazor Server app, we need to select <code>Regular Web Applications</code>. </p><figure class="kg-card kg-image-card"><img src="https://matthiasguentert.net/content/images/2025/05/image.png" class="kg-image" alt="" loading="lazy" width="1628" height="1460"></figure><p>After successful creation, make a note of your <code>Domain</code> and <code>Client ID</code> found under Settings. </p><h3 id="configure-callback-urls">Configure Callback URLs</h3><p>Next, we need to adjust the list of <code>Allowed Callback URLs</code> and add <a href="https://localhost:7063/callback?ref=kloudshift.net">https://localhost:7063/callback</a> for local testing. </p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">Please note that you need to adjust the port to match your project configuration. Also, make sure to use https and not http. </div></div><h3 id="configure-logout-urls">Configure Logout URLs </h3><p>For the logout to work, we need to add <a href="https://localhost:7063/?ref=kloudshift.net">https://localhost:7063</a>.</p><h2 id="install-and-configure-the-sdk">Install and configure the SDK</h2><h3 id="register-middleware">Register middleware</h3><p>Add the required NuGet package <code>Auth0.AspNetCore.Authentication</code> to your project and register the Auth0 middleware with the DI container. </p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">builder.Services.AddAuth0WebAppAuthentication(options =&gt;
{
    options.Domain = builder.Configuration["Auth0:Domain"];
    options.ClientId = builder.Configuration["Auth0:ClientId"];
    options.Scope = "openid profile email";
});</code></pre><figcaption><p dir="ltr"><span style="white-space: pre-wrap;">Program.cs</span></p></figcaption></figure><h3 id="add-configuration">Add configuration</h3><p>As you can see from above, the setup expects the following keys to exist in your <code>appsettings.json</code>. Paste the information accordingly. </p><figure class="kg-card kg-code-card"><pre><code class="language-json">  "Auth0": {
    "Domain": "&lt;your-domain&gt;",
    "ClientId": "&lt;your-client-id&gt;"
  }</code></pre><figcaption><p dir="ltr"><span style="white-space: pre-wrap;">appsettings.json</span></p></figcaption></figure><h3 id="add-the-login-endpoint">Add the login endpoint</h3><p>Now it's time to create a minimal API endpoint to provide the login functionality. </p><p>This is where we call <code>ChallengeAsync</code> and pass the authentication properties created by the <code>LoginAuthenticationPropertiesBuilder</code> and the Auth0 authentication schema. The latter also defines the default callback path <code>/callback</code>.</p><p>From the official documentation </p><blockquote>After successfully calling&nbsp;<code>HttpContext.ChallengeAsync()</code>, the user will be redirected to Auth0 and signed in to both the OIDC middleware and the cookie middleware upon being redirected back to your application. This will allow the users to be authenticated on subsequent requests.</blockquote><figure class="kg-card kg-code-card"><pre><code class="language-csharp">app.MapGet("/Login", async Task (HttpContext httpContext, string returnUrl = "/") =&gt;
{
    var authenticationProperties = new LoginAuthenticationPropertiesBuilder()
        .WithRedirectUri(returnUrl)
        .Build();

    await httpContext.ChallengeAsync(Auth0Constants.AuthenticationScheme, authenticationProperties);
});
</code></pre><figcaption><p dir="ltr"><span style="white-space: pre-wrap;">Program.cs</span></p></figcaption></figure><h3 id="add-the-logout-endpoint">Add the logout endpoint </h3><p>Logging out happens by calling <code>SignOutAsync</code> on the <code>HttpContext</code>. As you can see, this call is made twice. </p><p>The first call logs the user out of Auth0, and the second logs the user out of your application. This will also log the user out of other applications that rely on SSO (single sign-on).</p><figure class="kg-card kg-code-card"><pre><code class="language-csharp">app.MapGet("/Logout", async (HttpContext httpContext) =&gt;
{
    var authenticationProperties = new LogoutAuthenticationPropertiesBuilder()
        .WithRedirectUri("/")
        .Build();

    await httpContext.SignOutAsync(Auth0Constants.AuthenticationScheme, authenticationProperties);
    await httpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
});</code></pre><figcaption><p dir="ltr"><span style="white-space: pre-wrap;">Program.cs</span></p></figcaption></figure><h3 id="adjust-routing">Adjust routing</h3><p>Next, we'll modify the routing configuration defined at <code>Components/Routes.razor</code>. Make sure to add the required using statements to <code>_Imports.razor</code>. </p><figure class="kg-card kg-code-card"><pre><code>...
@using Microsoft.AspNetCore.Authorization
@using Microsoft.AspNetCore.Components.Authorization</code></pre><figcaption><p dir="ltr"><span style="white-space: pre-wrap;">_Imports.razor</span></p></figcaption></figure><p>Add <code>&lt;CascadingAuthenticationState&gt;</code>, which provides the authentication state (e.g., whether the user is logged in) to all components below it in the hierarchy. It wraps the entire router, so any page can access the current authentication state using the <code>AuthenticationStateProvider</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-xml">&lt;CascadingAuthenticationState&gt;
  &lt;Router AppAssembly="typeof(Program).Assembly"&gt;
    &lt;Found Context="routeData"&gt;
      &lt;AuthorizeRouteView RouteData="routeData" DefaultLayout="@typeof(Layout.MainLayout)"&gt;
        &lt;Authorizing&gt;
          &lt;p&gt;Authorization is in progress, please wait!&lt;/p&gt;
        &lt;/Authorizing&gt;
        &lt;NotAuthorized&gt;
          &lt;p&gt;You're not authorized, please log in!&lt;/p&gt;
        &lt;/NotAuthorized&gt;
      &lt;/AuthorizeRouteView&gt;
      &lt;FocusOnNavigate RouteData="routeData" Selector="h1"/&gt;
    &lt;/Found&gt;
  &lt;/Router&gt;
&lt;/CascadingAuthenticationState&gt;</code></pre><figcaption><p dir="ltr"><span style="white-space: pre-wrap;">Routes.razor</span></p></figcaption></figure><p>The router uses the <code>AuthorizeRouteView</code> element, which is like the regular <code>RouteView</code> but with built-in support for authentication and authorization. </p><p>The <code>Authorizing</code> element defines content that will be rendered while asynchronous authorization is in progress, and <code>NotAuthorized</code> defines the content that will be displayed if the user is not authorized.</p><p><code>FocusOnNavigate</code> will focus on the first <code>&lt;h1&gt;</code> element, which is helpful for accessibility and screen readers.</p><h2 id="conclusion">Conclusion</h2><p>That's all that is required for the basic setup. For demo purposes, I have added a Profile page and adjusted the <code>NavMenu</code>. </p><figure class="kg-card kg-image-card"><img src="https://matthiasguentert.net/content/images/2025/05/image-1.png" class="kg-image" alt="" loading="lazy" width="1106" height="728"></figure><h3 id="to-summarize">To summarize</h3><ul><li>Create an Auth0 application</li><li>Add the required NuGet package</li><li>Add configuration settings</li><li>Add middleware and endpoints</li><li>Adjust routing</li></ul><h2 id="further-reading">Further reading </h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/matthiasguentert/auth0-blazor-server-net9?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - matthiasguentert/auth0-blazor-server-net9</div><div class="kg-bookmark-description">Contribute to matthiasguentert/auth0-blazor-server-net9 development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/pinned-octocat-093da3e6fa40-2.svg" alt=""><span class="kg-bookmark-author">GitHub</span><span
class="kg-bookmark-publisher">matthiasguentert</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/auth0-blazor-server-net9-1" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.components.routing.router?view=aspnetcore-9.0&ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Router Class (Microsoft.AspNetCore.Components.Routing)</div><div class="kg-bookmark-description">A component that supplies route data corresponding to the current navigation state.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/favicon-2.ico" alt=""><span class="kg-bookmark-author">Microsoft Learn</span><span class="kg-bookmark-publisher">dotnet-bot</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/open-graph-image-2.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/auth0/auth0-aspnetcore-authentication?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - auth0/auth0-aspnetcore-authentication: SDK for integrating Auth0 in ASPNET Core</div><div class="kg-bookmark-description">SDK for integrating Auth0 in ASPNET Core. 
Contribute to auth0/auth0-aspnetcore-authentication development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/pinned-octocat-093da3e6fa40-3.svg" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">auth0</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://matthiasguentert.net/content/images/thumbnail/389b17ad-3f05-44ee-8922-afaa24e7cad4" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://auth0.com/docs/api/authentication?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Auth0 Authentication API</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://matthiasguentert.net/content/images/icon/auth0-favicon-onlight.png" alt=""></div></div></a></figure>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Automating DNS registration &amp; certificate management in AKS: A step-by-step guide]]></title>
                    <description><![CDATA[Introduction

With this blog post, I&#39;ll demonstrate how we can automatically register Ingress resources running on an AKS cluster with a public Azure DNS zone so they can be easily reached outside your cluster.

Further, I&#39;ll demonstrate how certificates can be automatically obtained from Let&#39;]]></description>
                    <link>https://kloudshift.net/blog/automating-dns-registration-and-certificate-management-in-aks-a-step-by-step-guide/</link>
                    <guid isPermaLink="false">68c189e3dfb58800015958c0</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Wed, 10 Sep 2025 16:23:31 +0200</pubDate>

                        <media:content url="https://matthiasguentert.net/content/images/2023/09/feature-image.png" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://matthiasguentert.net/content/images/2023/09/feature-image.png" alt="Automating DNS registration &amp; certificate management in AKS: A step-by-step guide"/> <h2 id="introduction">Introduction</h2><p>With this blog post, I'll demonstrate how we can automatically register Ingress resources running on an AKS cluster with a <strong>public Azure DNS</strong> zone so they can be easily reached outside your cluster. </p><p>Further, I'll demonstrate how <strong>certificates </strong>can be <strong>automatically obtained</strong> from Let's Encrypt and then be <strong>assigned </strong>to those Ingress resources<em>.</em></p><p>For that purpose, I'll use two Kubernetes controllers: <em>ExternalDNS </em>and <em>Cert-Manager. </em></p><p><em>ExternalDNS </em>for Kubernetes is a controller that automates the management of DNS records for Kubernetes resources by syncing them with various DNS providers, like Azure DNS zones. </p><p>On the other hand, <em>Cert-Manager </em>automates the management and issuance of  SSL/TLS certificates for applications running within a Kubernetes cluster.</p><p>For enhanced security, I'll use a Managed Identity with federated credentials, which requires using the Workload Identity feature on my AKS cluster.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://i.imgur.com/3Igv7XS.png" class="kg-image" alt="" loading="lazy"><figcaption>High-level design of the proposed solution</figcaption></figure><p>This setup comes with a couple of compelling benefits... 🏆</p><h3 id="why-you-want-to-automate-dns-and-certificate-management">Why you want to automate DNS and certificate management</h3><ol><li><strong>End-to-end Automation:</strong> Combining ExternalDNS and Cert-Manager provides end-to-end automation for exposing services securely over the internet. 
Cert-Manager handles the issuance and renewal of SSL/TLS certificates, while ExternalDNS manages DNS records, ensuring that the services are secure and reachable.</li><li><strong>Reducing the risk of certificate expiration:</strong> Cert-Manager supports the great ACME protocol. Combined with a Certificate Authority like Let's Encrypt, it streamlines obtaining and renewing SSL/TLS certificates. This eliminates the risk of expired certificates!</li><li><strong>Self-Service Capabilities:</strong> Developers can use annotations on Ingress resources to define DNS and certificate requirements for their services. This empowers development teams to manage their service configurations within established policies, reducing the burden on central IT or operations teams.</li><li><strong>Logging and Auditing:</strong> Both tools provide logging and auditing capabilities, making monitoring and tracking changes to DNS records and certificates easier. This is valuable for compliance, troubleshooting, and security purposes.</li></ol><p>By combining <em>Cert-Manager</em> and <em>ExternalDNS</em>, we create a robust and automated solution that ensures our Ingress resources are secured with up-to-date certificates and that DNS records are always synchronized with the underlying services. </p><p>This approach greatly simplifies the operation of an Azure Kubernetes Cluster and helps accelerate the development cycle of developers using the Azure Kubernetes Service. 🚀</p><h3 id="high-level-steps-%E2%9C%8D%F0%9F%8F%BC">High-Level Steps ✍🏼</h3><p>These are the high-level steps. I assume you already have an AKS cluster and Azure DNS Zone deployed. 
Also, I assume you have a running NGINX Ingress Controller setup.</p><ol><li>Enable Azure AD Workload Identities on the AKS cluster.</li><li>Create and configure a Managed Identity and assign the required privileges.</li><li>Install and/or configure the NGINX Ingress Controller.</li><li>Install and configure External-DNS.</li><li>Install and configure Cert-Manager.</li></ol><p>Now that the context is set, let's get to work 👷🏽‍♂️🚧</p><h2 id="steps">Steps</h2><h3 id="enable-azure-ad-workload-identities">Enable Azure AD Workload Identities</h3><p>First, we must update our existing AKS cluster to support workload identities. Make sure your Azure CLI is version 2.47.0 or later.</p><figure class="kg-card kg-code-card"><pre><code class="language-powershell">az aks update `
  --name &lt;cluster-name&gt; `
  --resource-group &lt;cluster-rg&gt; `
  --subscription "&lt;subscription&gt;" `
  --enable-workload-identity `
  --enable-oidc-issuer</code></pre><figcaption>Enable Azure AD Workload Identities on an existing AKS cluster</figcaption></figure><blockquote>💡 What is Azure AD Workload Identity?<br><br><em>Azure AD Workload Identity with AKS is a feature that</em> bridges Kubernetes-native service accounts to Azure AD identities. This mapping allows a service account to access Azure AD-protected resources.</blockquote><p>Afterward, we need to retrieve and store the issuer URL. The URL comes in the form of <code>https://westeurope.oic.prod-aks.azure.com/&lt;some-guid&gt;/&lt;another-guid&gt;/</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-powershell">$issuerUrl = az aks show `
  --name aks-azureblue `
  --resource-group rg-kubernetes `
  --subscription "&lt;subscription&gt;" `
  --query oidcIssuerProfile.issuerUrl `
  --output tsv</code></pre><figcaption>Retrieve OIDC Issuer URL</figcaption></figure><h3 id="creating-and-configuring-a-managed-identity">Creating and configuring a Managed Identity</h3><p>Let's start by creating the Managed Identity in the same resource group as the AKS cluster (YMMV). </p><p>Alternatively, you can reuse the existing one named <code>&lt;cluster-name&gt;-agentpool</code>. This gets deployed into the infrastructure resource group at cluster-creation time. For this article, I'll create a dedicated one. </p><pre><code class="language-powershell">az identity create `
  --name id-aks-azureblue-workload-identity `
  --resource-group rg-kubernetes `
  --subscription "&lt;subscription&gt;"</code></pre><h3 id="configure-federated-credentials">Configure Federated Credentials</h3><p>Now that the managed identity is in place, we must configure the federated credentials. </p><p>Later, we will install <em>ExternalDNS</em> and <em>cert-manager</em> on the Kubernetes cluster. Each of the helm charts will create a separate service account, which means we have to create <strong>two </strong>federated credentials to map both service accounts to the managed identity.</p><p>Let's start with the federated credential for the <em>ExternalDNS</em> service account, which will live in a namespace with the same name. The subject parameter has the format <code>system:serviceaccount:&lt;namespace&gt;:&lt;serviceaccountname&gt;</code></p><figure class="kg-card kg-code-card"><pre><code class="language-powershell">az identity federated-credential create `
  --identity-name id-aks-azureblue-workload-identity `
  --name external-dns-credentials `
  --resource-group rg-kubernetes `
  --subscription "&lt;subscription&gt;" `
  --issuer $issuerUrl `
  --subject "system:serviceaccount:external-dns:external-dns"</code></pre><figcaption>Creating a federated credential for the External-DNS service account</figcaption></figure><p>Okay, only one left for the cert-manager service account. </p><figure class="kg-card kg-code-card"><pre><code class="language-powershell">az identity federated-credential create `
  --identity-name id-aks-azureblue-workload-identity `
  --name cert-manager-credentials `
  --resource-group rg-kubernetes `
  --subscription "&lt;subscription&gt;" `
  --issuer $issuerUrl `
  --subject "system:serviceaccount:cert-manager:cert-manager"</code></pre><figcaption>Creating a federated credential for the cert-manager service account</figcaption></figure><h3 id="role-assignments">Role Assignments</h3><p>So far, so good. Now, it is time to equip the managed identity with some privileges. </p><p><em>Cert-Manager</em> requires the <em>DNS Zone Contributor</em> role on the DNS Zone to create TXT records (DNS-01 challenge). This is needed to prove the ownership of the DNS domain to Let's Encrypt. You can read more about it here.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://letsencrypt.org/docs/challenge-types/?ref=kloudshift.net#dns-01-challenge"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Challenge Types</div><div class="kg-bookmark-description">When you get a certificate from Let’s Encrypt, our servers validate that you control the domain names in that certificate using “challenges,” as defined by the ACME standard. Most of the time, this validation is handled automatically by your ACME client, but if you need to make some more complex con…</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://letsencrypt.org/favicon.ico" alt=""><span class="kg-bookmark-author">Let's Encrypt</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://letsencrypt.org/images/le-logo-twitter-noalpha.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><p>Then, <em>ExternalDNS</em> also requires extended privileges on the DNS zone to create and optionally remove A records at run-time. In my case, the DNS Zone is called <code>dev.azureblue.io</code>. In addition, ExternalDNS requires the Reader role to the resource group holding the DNS zone.</p><figure class="kg-card kg-code-card"><pre><code class="language-powershell"># Retrieve and store object id of managed identity
$assigneeObjectId = az identity show `
  --name id-aks-azureblue-workload-identity `
  --resource-group rg-kubernetes `
  --subscription "&lt;subscription&gt;" `
  --query principalId `
  --output tsv
  
# Retrieve and store id of dns zone
$dnsZoneId = az network dns zone show `
  --name dev.azureblue.io `
  --resource-group rg-kubernetes `
  --subscription "&lt;subscription&gt;" `
  --query id `
  --output tsv

# Retrieve and store id of resource group
$resourceGroupId = az group show `
  --name rg-kubernetes `
  --subscription "&lt;subscription&gt;" `
  --query id `
  --output tsv

# Assign the DNS Zone Contributor role on the zone
az role assignment create `
  --role "DNS Zone Contributor" `
  --assignee-object-id $assigneeObjectId `
  --assignee-principal-type ServicePrincipal `
  --scope $dnsZoneId
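
# (Optional) verify the assignment before moving on, e.g.:
#   az role assignment list --scope $dnsZoneId --query "[].roleDefinitionName" --output tsv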
  
# Assign the Reader role on the resource group
az role assignment create `
  --role "Reader" `
  --assignee-object-id $assigneeObjectId `
  --assignee-principal-type ServicePrincipal `
  --scope $resourceGroupId</code></pre><figcaption>PowerShell script using Azure CLI to assign required roles</figcaption></figure><h3 id="checkpoint-%E2%9C%85">Checkpoint ✅</h3><p>Up until this point, we should have:</p><ul><li>✅ Enabled workload identity on the existing AKS cluster</li><li>✅ Created and configured a managed identity</li><li>✅ Assigned the required roles </li></ul><p>Until now, everything is configured on the Azure side, and we can move on to install the required Kubernetes components. </p><h3 id="nginx-ingress-controller-configuration">NGINX Ingress Controller Configuration</h3><p>Ensure that your nginx-ingress deployment has the following argument added to it. This is required for <em>ExternalDNS</em>.</p><pre><code>- --publish-service=&lt;ingress-nginx-namespace&gt;/&lt;nginx-controller-service-name&gt;</code></pre><p>I used the following steps to deploy ingress-nginx...</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --install \
  --namespace ingress-nginx \
  --create-namespace \
  --values values.yaml</code></pre><figcaption>Installing NGINX ingress controller</figcaption></figure><p>... with this value file.</p><pre><code class="language-yaml">controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"
    extraArgs:
      # Required for ExternalDNS
      publish-service: ingress-nginx/ingress-nginx-controller</code></pre><h3 id="installing-configuring-externaldns">Installing &amp; Configuring ExternalDNS</h3><p>Now that the NGINX Ingress Controller is properly configured, let's install ExternalDNS.</p><p>This is the Helm Chart Values file I've used for configuration. Of course, replace the values accordingly.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">fullnameOverride: external-dns
policy: sync
serviceAccount:
  annotations:
    azure.workload.identity/client-id: &lt;your-client-id&gt;
podLabels:
  azure.workload.identity/use: "true"
provider: azure
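# Optional (both are standard values of the kubernetes-sigs chart; shown
# commented out, adjust to your zone): restrict which domains external-dns
# manages and tag its ownership TXT records.
# domainFilters: ["dev.azureblue.io"]
# txtOwnerId: "aks-azureblue"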
secretConfiguration:
  enabled: true
  mountPath: "/etc/kubernetes/"
  data:
    azure.json: |
      {
        "subscriptionId": "&lt;subscription-id&gt;",
        "resourceGroup": "rg-kubernetes",
        "useWorkloadIdentityExtension": true
      }</code></pre><figcaption>values.yaml</figcaption></figure><p>The configuration does a couple of important things. </p><ol><li><code>serviceAccount.annotations: azure.workload.identity/client-id: ...</code> &amp; <code>podLabels: azure.workload.identity/use: "true"</code> enable workload identity for the service account used by ExternalDNS.</li><li><code>policy: sync</code> ensures that the deletion of Ingress resources also results in the deletion of the corresponding A record.</li></ol><p>The client ID can be retrieved as follows.</p><figure class="kg-card kg-code-card"><pre><code class="language-powershell">az identity show `
  --name id-aks-azureblue-workload-identity `
  --resource-group rg-kubernetes `
  --subscription "&lt;subscription&gt;" `
  --query clientId `
  --output tsv</code></pre><figcaption>Retrieving the client ID</figcaption></figure><p>Now, let's install the corresponding Helm Chart and ensure we use the correct namespace, as our previously created federated credential needs to match it.</p><pre><code class="language-powershell">helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update
helm upgrade external-dns external-dns/external-dns `
  --install `
  --namespace external-dns `
  --create-namespace `
  --values values.yaml</code></pre><h3 id="checkpoint-%E2%9C%85-1">Checkpoint ✅</h3><p>Before moving on, it's advisable to test that the ExternalDNS setup works as expected. So go ahead and deploy the following manifest to a namespace called <code>demo</code>. Of course, you need to adjust the <code>Ingress</code> resource according to your environment.</p><pre><code class="language-bash">kubectl create namespace demo
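# (Optional) after the apply below, you can watch external-dns pick up the
# Ingress; the deployment name assumes the fullnameOverride set earlier:
#   kubectl logs -n external-dns deployment/external-dns -f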
kubectl apply -f deployment.yaml </code></pre><figure class="kg-card kg-code-card"><pre><code class="language-yaml">---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: demo
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: echo-pod
        image: hashicorp/http-echo
        args:
          - "-text=Hello World!"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  namespace: demo
  name: demo-service
spec:
  type: ClusterIP
  selector:
    app: demo
  ports:
    - port: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: demo
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: demo1.dev.azureblue.io
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 5678  </code></pre><figcaption>deployment.yaml</figcaption></figure><p>As can be seen from the manifest, we are creating a <code>Deployment</code> and a <code>Service</code>. Then, we expose it to the Internet through an <code>Ingress</code> using the hostname <code>demo1.dev.azureblue.io</code>. </p><p>As we haven't set up any certificates yet, disabling SSL-Redirection on the Ingress is important so it can be tested using the HTTP protocol.</p><p>After a few seconds, you should see an A and a TXT record in your public DNS zone. Navigating to <code>http://demo1.dev.azureblue.io</code> should display <code>Hello World!</code>.</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/fL70K03.png" class="kg-image" alt="" loading="lazy"></figure><p>As depicted in the screenshot above, the default TTL is 300 seconds. Optionally, this can be adjusted to your requirements by annotating the Ingress resource with <code>external-dns.alpha.kubernetes.io/ttl: "valueInSeconds"</code>. </p><blockquote>💡 The default TTL value of 300 seconds can be adjusted by annotating the Ingress with <code>external-dns.alpha.kubernetes.io/ttl: "valueInSeconds"</code></blockquote><pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: demo
  name: demo-ingress
  annotations:
    ... 
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
...</code></pre><p>When configuring ExternalDNS, we used the parameter <code>policy: sync</code>, remember? This parameter keeps your DNS Zone in sync with your Ingress annotations. </p><p>If this isn't set, the default behavior is to only <em>upsert</em> DNS records, never deleting them when an Ingress is removed, which leaves stale DNS records over time.</p><blockquote>💡 To keep your Azure DNS zone in sync with your Ingress controller, use the "policy: sync" setting.</blockquote><p>So, let's also test that functionality and delete the previously created deployment (or only the Ingress resource).</p><pre><code class="language-bash">kubectl delete namespace demo</code></pre><p>After a few seconds, the TXT and A record should be removed from the DNS zone.</p><h3 id="cert-manager-installation-configuration">Cert-Manager Installation &amp; Configuration</h3><p>Now that we have successfully automated the DNS registration process, we can move on to automating certificate management.</p><p>In a previous step, we prepared the Azure Managed Identity with a federated credential for Cert-Manager. Now, we need to enable it for the cert-manager deployment itself. This is done via the label <code>azure.workload.identity/use: "true"</code>, which must be applied to the pod.</p><blockquote>💡 The azure-workload-identity mutating admission webhook will only modify pods with this label to inject the service account token volume projection and Azure-specific environment variables (<a href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet&ref=kloudshift.net#pod-labels">Microsoft</a>, 2023).</blockquote><p>Also, the following Custom Resource Definitions need to be installed. 
</p><ul><li><code>ClusterIssuer</code>, <code>Issuer</code></li><li><code>CertificateRequests</code>, <code>Certificates</code></li><li><code>Challenges</code></li><li><code>Orders</code></li></ul><p>So here is a basic configuration I've used with the official Helm Chart.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">podLabels:
  azure.workload.identity/use: "true"
serviceAccount:
  labels:
    azure.workload.identity/use: "true"
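# Note: on newer cert-manager chart releases, crds.enabled/crds.keep are the
# preferred successors to installCRDs (verify against your chart version).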
installCRDs: "true"   </code></pre><figcaption>values.yaml</figcaption></figure><p>When installing, make sure to use the namespace <code>cert-manager</code>, since it needs to match the configured federated credentials in earlier steps.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade cert-manager jetstack/cert-manager \
  --install \
  --namespace cert-manager \
  --create-namespace \
  --values values.yaml</code></pre><figcaption>Installing Cert-Manager using Helm</figcaption></figure><h3 id="create-cluster-issuer">Create Cluster Issuer</h3><p>Now, we need to define an <code>Issuer</code> resource. These types of resources represent certificate authorities (CAs), which can sign certificates in response to certificate signing requests. We can create either a cluster-scoped <code>ClusterIssuer</code> or an <code>Issuer</code>, which is scoped to a namespace.</p><p>Cert-Manager supports various types of issuers, such as HashiCorp Vault, Venafi, and others. Since the goal is to automate certificate management fully, I'll use one called <code>acme</code>. </p><p>This issuer type represents Certificate Authority servers implementing the Automated Certificate Management Environment (ACME) protocol, such as <em>Let's Encrypt</em>. </p><p>ACME requires a challenge to be solved to verify that we own the domain; details can be found in the link below. For this demo, I'll stick with the <code>DNS01</code> challenge.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://letsencrypt.org/docs/challenge-types/?ref=kloudshift.net#dns-01-challenge"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Challenge Types</div><div class="kg-bookmark-description">When you get a certificate from Let’s Encrypt, our servers validate that you control the domain names in that certificate using “challenges,” as defined by the ACME standard. 
Most of the time, this validation is handled automatically by your ACME client, but if you need to make some more complex con…</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://letsencrypt.org/favicon.ico" alt=""><span class="kg-bookmark-author">Let's Encrypt</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://letsencrypt.org/images/le-logo-twitter-noalpha.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><p>The following manifest will create two <code>ClusterIssuers</code>, one for the staging and another for the prod environment of Let's Encrypt.</p><p>You'll have to replace and adjust <code>email</code>, <code>hostedZoneName</code>, <code>resourceGroupName</code>, <code>subscriptionId</code> and <code>clientId</code> to your environment.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: &lt;your-mail-address&gt;
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - dns01:
        azureDNS:
          hostedZoneName: dev.azureblue.io
          resourceGroupName: rg-kubernetes
          subscriptionID: &lt;subscriptionId&gt;
          environment: AzurePublicCloud
          managedIdentity:
            clientID: &lt;clientId&gt;
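# Tip: point test Ingresses at letsencrypt-staging first; the production
# endpoint below enforces strict rate limits on repeated or failed orders.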
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: &lt;your-mail-address&gt;
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        azureDNS:
          hostedZoneName: dev.azureblue.io
          resourceGroupName: rg-kubernetes
          subscriptionID: &lt;subscriptionId&gt;
          environment: AzurePublicCloud
          managedIdentity:
            clientID: &lt;clientId&gt;</code></pre><figcaption>ClusterIssuers for the Let's Encrypt staging and production environments</figcaption></figure><p>After applying, make sure both are in a ready state.</p><pre><code>kubectl get clusterissuer -o wide

NAME                  READY   STATUS                                                 AGE
letsencrypt-prod      True    The ACME account was registered with the ACME server   1h
letsencrypt-staging   True    The ACME account was registered with the ACME server   1h</code></pre><h2 id="testing-the-setup-with-a-demo-deployment">Testing the setup with a demo deployment</h2><p>Finally, we've reached the point where we can test the entire setup. 🏆 Go ahead and adjust the demo deployment to your environment, especially the hostnames in the <code>Ingress</code> resource. </p><pre><code class="language-yaml">---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: demo
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: echo-pod
        image: hashicorp/http-echo
        args:
          - "-text=Hello World!"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  namespace: demo
  name: demo-service
spec:
  type: ClusterIP
  selector:
    app: demo
  ports:
    - port: 5678 # The port that will be exposed by this service
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: demo
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - demo1.dev.azureblue.io
      secretName: demo-tls
  rules:
    - host: demo1.dev.azureblue.io
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 5678  </code></pre><p>After applying, give Cert-Manager some time to solve the DNS01 challenge. It will create a secret called <code>demo-tls</code> for you, holding the certificate and private key for the Ingress. </p><pre><code>kubectl describe secret demo-tls -n demo

Name:         demo-tls
Namespace:    demo
Labels:       controller.cert-manager.io/fao=true
Annotations:  cert-manager.io/alt-names: demo1.dev.azureblue.io
              cert-manager.io/certificate-name: demo-tls
              cert-manager.io/common-name: demo1.dev.azureblue.io
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group: cert-manager.io
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: letsencrypt-prod
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.crt:  5534 bytes
tls.key:  1675 bytes</code></pre><p>Et voilà! 🎉🎊🥂</p><figure class="kg-card kg-image-card"><img src="https://i.imgur.com/BdEhvuD.png" class="kg-image" alt="" loading="lazy"></figure><h2 id="recap-conclusion">Recap &amp; conclusion </h2><ul><li>With ExternalDNS, we can customize the TTL value by using the annotation <code>external-dns.alpha.kubernetes.io/ttl: "valueInSeconds"</code>.</li><li>ExternalDNS can be configured to keep the Azure DNS zone tidy. The relevant parameter is called <code>policy: sync</code>.</li><li>There are two major ACME challenge types to prove ownership of a domain: HTTP-01 and DNS-01. Both come with pros and cons; here, we have been using the DNS-01 challenge type.</li><li>We created a Managed Identity and configured federated credentials so that the two Kubernetes service accounts (cert-manager and external-dns) can be authenticated and authorized against Azure services (the Azure DNS zone in our case).</li><li>Automating DNS &amp; certificate management reduces the burden of manual tasks carried out by your platform team 😎</li></ul><p>Thanks for reading. As always, feedback is welcome! </p><p>Happy hacking! 
Matthias 🤓</p><h2 id="further-reading">Further reading</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet&ref=kloudshift.net#pod-labels"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Use an Azure AD workload identity on Azure Kubernetes Service (AKS) - Azure Kubernetes Service</div><div class="kg-bookmark-description">Learn about Azure Active Directory workload identity for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://learn.microsoft.com/favicon.ico" alt=""><span class="kg-bookmark-author">Microsoft Learn</span><span class="kg-bookmark-publisher">MGoedtel</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://learn.microsoft.com/en-us/media/open-graph-image.png" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/kubernetes-sigs/external-dns?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - kubernetes-sigs/external-dns: Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services</div><div class="kg-bookmark-description">Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services - GitHub - kubernetes-sigs/external-dns: Configure external DNS servers (AWS Route53, ...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">kubernetes-sigs</span></div></div><div class="kg-bookmark-thumbnail"><img 
src="https://opengraph.githubassets.com/bda4001058f1c2e514d350b0453b070a6b82396f93d6624c6fa02c805912266f/kubernetes-sigs/external-dns" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://cert-manager.io/?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">cert-manager</div><div class="kg-bookmark-description">Cloud native X.509 certificate management for Kubernetes and OpenShift</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://cert-manager.io/favicons/apple-touch-icon-180x180.png" alt=""><span class="kg-bookmark-author">cert-manager</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://cert-manager.io/images/og1.png" alt="" onerror="this.style.display = 'none'"></div></a></figure>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Tagging Azure resources with Git metadata using Terraform &amp; Azure DevOps]]></title>
                    <description><![CDATA[Introduction

When dealing with enterprise-grade Azure deployments, tracking a specific resource down to the Terraform code it stems from can be difficult...

Also, finding the person responsible who can be asked questions regarding the configuration used is not always obvious.

This is especially true in setups where multiple teams maintain]]></description>
                    <link>https://kloudshift.net/blog/azure-resource-tagging/</link>
                    <guid isPermaLink="false">68c1895ddfb58800015958b5</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Wed, 10 Sep 2025 16:21:17 +0200</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1556075798-4825dfaaf498?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDF8fGdpdHxlbnwwfHx8fDE3MDc4NTcwMTZ8MA&amp;ixlib&#x3D;rb-4.0.3&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1556075798-4825dfaaf498?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDF8fGdpdHxlbnwwfHx8fDE3MDc4NTcwMTZ8MA&amp;ixlib&#x3D;rb-4.0.3&amp;q&#x3D;80&amp;w&#x3D;2000" alt="Tagging Azure resources with Git metadata using Terraform &amp; Azure DevOps"/> <h2 id="introduction">Introduction</h2><p>When dealing with enterprise-grade Azure deployments, tracking a specific resource down to the Terraform code it stems from can be difficult...</p><p>Also, it is not always obvious who to ask about the configuration in use. </p><p>This is especially true in setups where multiple teams maintain Azure deployments with Terraform. </p><p>So, when looking at an Azure resource within the Azure portal, I find it helpful to get a quick answer to the following questions: </p><ul><li><em>Who can I contact if I have questions regarding the resource configuration? </em></li><li><em>Where can I find the Terraform code that describes the resource? </em></li><li><em>At which specific commit hash do I need to look?</em></li><li><em>When was the last time a change was made to that resource?</em></li></ul><p>You might be aware of a tool called <a href="https://github.com/bridgecrewio/yor?ref=kloudshift.net" rel="noreferrer"><code>yor</code></a>, an <em>extensible auto-tagger for IaC files</em> written in Go. Unfortunately, this tool didn't work for me and produced many git blame warnings I couldn't solve.</p><p>Instead, we will leverage a Terraform provider called <code>metio/git</code> to tag Azure resources for the purpose described and let an Azure DevOps pipeline run <code>terraform apply</code> for us. 
</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/metio/terraform-provider-git?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - metio/terraform-provider-git: Terraform provider for local Git operations</div><div class="kg-bookmark-description">Terraform provider for local Git operations. Contribute to metio/terraform-provider-git development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/assets/pinned-octocat-093da3e6fa40.svg" alt=""><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">metio</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/af56de54f281debaa78d4a2277ade1c70281898f6a7dc54c37e1f49630aeea1e/metio/terraform-provider-git" alt="" onerror="this.style.display = 'none'"></div></a></figure><p>Let's get to it! 😎</p><h2 id="discussion">Discussion</h2><p>These are the tags we will add to our Azure resources</p><ul><li><code>git_repo</code>, <code>git_branch</code> and <code>git_commit_hash</code> holding the shortened 8-digit version of the latest commit. So we know where to find the code defining the resources.</li><li><code>git_commit_message</code> and <code>git_commit_timestamp</code> so we get a hint about when the code got committed and what has changed</li><li>Tags called <code>git_author_name</code> and <code>git_author_email</code> so we know who wrote the deployed code.</li></ul><p>We can pull all of this information by using three data sources, which are <code>git_commit</code>, <code>git_repository</code>, and <code>git_remote</code>. </p><figure class="kg-card kg-code-card"><pre><code class="language-hcl">data "git_commit" "head" {
  directory = var.git_directory
  revision  = "@"
}
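
# For reference, the variable used above (typically declared in variables.tf;
# the default shown here is an assumption, the pipeline passes the real path):
variable "git_directory" {
  type        = string
  description = "Path to the local Git repository checkout"
  default     = "."
}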

data "git_repository" "repo" {
  directory = var.git_directory
}

data "git_remote" "remote" {
  directory = var.git_directory
  name      = "origin"
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">data.tf</span></p></figcaption></figure><p>It's worth noting that I am using the head shortcut <code>@</code>, which refers to <code>HEAD</code>, i.e. the latest commit checked out in the working directory.</p><p>Further, I pass the repository folder in via a variable called <code>var.git_directory</code>. This will be required later when we execute <code>terraform apply</code> from an Azure DevOps pipeline.</p><p>For ease of use, we can create a map like this and put it into a <code>locals</code> block. See the <a href="https://registry.terraform.io/providers/metio/git/latest/docs?ref=kloudshift.net" rel="noreferrer">documentation</a> for more details regarding the data source schema.</p><figure class="kg-card kg-code-card"><pre><code class="language-hcl">locals {
  git_tags = {
    git_commit_hash      = substr(data.git_commit.head.sha1, 0, 8)
    git_commit_message   = data.git_commit.head.message
    git_commit_timestamp = data.git_commit.head.author.timestamp
    git_author_name      = data.git_commit.head.author.name
    git_author_email     = data.git_commit.head.author.email
    git_repo             = data.git_remote.remote.urls[0]
    git_branch           = data.git_repository.repo.branch
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">data.tf</span></p></figcaption></figure><p>Later, we can add the gathered information to a resource like this: </p><figure class="kg-card kg-code-card"><pre><code class="language-hcl">resource "azurerm_resource_group" "foobar" {
  name     = "rg-tagging-demo-1"
  location = "switzerlandnorth"
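
  # Variant (sketch): if the resource should carry other tags as well, the
  # git metadata can be merged in; var.common_tags is a hypothetical input:
  #   tags = merge(var.common_tags, local.git_tags)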

  tags = local.git_tags
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">resources.tf</span></p></figcaption></figure><p>So far, so good. Let's move on to the Azure DevOps pipeline. I use the <em>Azure Pipelines Terraform Tasks</em> extension, which can be installed via Marketplace. </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform&targetId=57e99ed5-f78b-4a37-a459-dc65fb64028e&utm_source=vstsproduct&utm_medium=ExtHubManageList"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Azure Pipelines Terraform Tasks - Visual Studio Marketplace</div><div class="kg-bookmark-description">Extension for Azure DevOps - Tasks to install and execute terraform with Azure Pipelines for Azure and AWS.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://marketplace.visualstudio.com/favicon.ico" alt=""><span class="kg-bookmark-author">Visual Studio Marketplace</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://jasonbjohnson.gallerycdn.vsassets.io/extensions/jasonbjohnson/azure-pipelines-tasks-terraform/1.1.2.10/1710778533986/Microsoft.VisualStudio.Services.Icons.Default" alt="" onerror="this.style.display = 'none'"></div></a></figure><p>Here is the pipeline. </p><pre><code class="language-yaml">trigger: none

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self

  - task: TerraformInstaller@2
    displayName: "Terraform install"
    inputs:
      terraformVersion: "latest"

  - task: TerraformCLI@2
    displayName: "Terraform init"
    inputs:
      command: 'init'
      backendType: 'azurerm'
      backendServiceArm: 'service-connection-1'
      backendAzureRmResourceGroupName: 'rg-automation'
      backendAzureRmStorageAccountName: '&lt;hidden&gt;'
      backendAzureRmContainerName: 'tfstate'
      backendAzureRmKey: 'terraform.tfstate'
      allowTelemetryCollection: false
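
  # Optional sketch (not part of the original pipeline): a speculative plan
  # step before apply, using the same task and inputs, to review the git
  # tags that would be written:
  #
  # - task: TerraformCLI@2
  #   displayName: "Terraform plan"
  #   inputs:
  #     command: 'plan'
  #     environmentServiceName: 'service-connection-1'
  #     commandOptions: '-var="git_directory=$(Build.SourcesDirectory)"'
  #     allowTelemetryCollection: false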

  - task: TerraformCLI@2
    displayName: "Terraform apply"
    inputs:
      command: 'apply'
      environmentServiceName: 'service-connection-1'
      commandOptions: '-var="git_directory=$(Build.SourcesDirectory)"'
      allowTelemetryCollection: false</code></pre><p>In the first step, I check out the repository with <code>checkout: self</code>. It's worth noting that Azure DevOps does a checkout on a commit, not a branch, which results in a detached <code>HEAD</code>. Next, I install the latest version of the <code>terraform</code> binary and initialize the backend. </p><p>The last step applies the terraform code. Please note that I passed the ADO variable <code>Build.SourcesDirectory</code> on to the terraform configuration. This way, the terraform git provider can pick up the proper directory on the build agent.</p><p>After a successful pipeline run, the resources carry the expected Azure tags, as depicted below. Cool, isn't it? 😎</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://matthiasguentert.net/content/images/2024/04/hdk8qJ4.png" class="kg-image" alt="" loading="lazy" width="674" height="270"><figcaption><span style="white-space: pre-wrap;">Resulting Azure tags</span></figcaption></figure><p>💡 Please note that commit hashes can get <em>lost</em> when executing <code>terraform apply</code> from a feature branch that later gets merged back to <code>main</code>.</p><p>Therefore, in the described situation, you should always apply your terraform code from the <code>main</code> branch (or another stable branch). Otherwise, you might be surprised not to find a matching commit in your repository 🤯</p><h2 id="conclusion">Conclusion</h2><ul><li>Azure DevOps does a checkout on a commit, not a branch, resulting in a detached <code>HEAD</code></li><li>Commit hashes can get lost when applying from feature branches</li><li>We need to pass the git repository path on the build agent to the terraform configuration by using a variable <code>git_directory=$(Build.SourcesDirectory)</code></li></ul><p>A proper Azure tagging strategy is crucial when navigating &amp; managing your Azure deployment. 
Especially in large enterprise setups, where resources are deployed using IaC, adding git metadata makes everyone's life easier.</p><p>I hope you enjoyed reading my article, and I'd gladly welcome your feedback! What does your Azure tagging strategy look like? </p><p>Happy hacking! 😎</p><h2 id="further-reading">Further reading</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://registry.terraform.io/providers/metio/git/latest/docs?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Terraform Registry</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://registry.terraform.io/images/favicons/apple-touch-icon.png" alt=""></div></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform&targetId=57e99ed5-f78b-4a37-a459-dc65fb64028e&utm_source=vstsproduct&utm_medium=ExtHubManageList"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Azure Pipelines Terraform Tasks - Visual Studio Marketplace</div><div class="kg-bookmark-description">Extension for Azure DevOps - Tasks to install and execute terraform with Azure Pipelines for Azure and AWS.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://marketplace.visualstudio.com/favicon.ico" alt=""><span class="kg-bookmark-author">Visual Studio Marketplace</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://jasonbjohnson.gallerycdn.vsassets.io/extensions/jasonbjohnson/azure-pipelines-tasks-terraform/1.1.2.10/1710778533986/Microsoft.VisualStudio.Services.Icons.Default" alt="" onerror="this.style.display = 'none'"></div></a></figure>]]></content:encoded>
                </item>
                <item>
                    <title><![CDATA[Why you should use Feature Toggles with Terraform]]></title>
                    <description><![CDATA[Introduction

If you come from a traditional software development background, you&#39;re likely familiar with using feature flags or toggles.

Put simply, a feature toggle acts as a switch to turn a specific feature on or off. It allows code to be released into production without activating it immediately—]]></description>
                    <link>https://kloudshift.net/blog/why-you-should-use-feature-toggles-with-terraform/</link>
                    <guid isPermaLink="false">68c1895ddfb58800015958b4</guid>


                        <dc:creator><![CDATA[Matthias Güntert]]></dc:creator>

                    <pubDate>Wed, 10 Sep 2025 16:21:17 +0200</pubDate>

                        <media:content url="https://images.unsplash.com/photo-1699040309386-11c615ed64d5?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDJ8fHRvZ2dsZXxlbnwwfHx8fDE3MzE5NDk0OTN8MA&amp;ixlib&#x3D;rb-4.0.3&amp;q&#x3D;80&amp;w&#x3D;2000" medium="image"/>

                    <content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1699040309386-11c615ed64d5?crop&#x3D;entropy&amp;cs&#x3D;tinysrgb&amp;fit&#x3D;max&amp;fm&#x3D;jpg&amp;ixid&#x3D;M3wxMTc3M3wwfDF8c2VhcmNofDJ8fHRvZ2dsZXxlbnwwfHx8fDE3MzE5NDk0OTN8MA&amp;ixlib&#x3D;rb-4.0.3&amp;q&#x3D;80&amp;w&#x3D;2000" alt="Why you should use Feature Toggles with Terraform"/> <h2 id="introduction">Introduction</h2><p>If you come from a traditional software development background, you're likely familiar with using feature flags or toggles.</p><p>Put simply, a feature toggle acts as a switch to turn a specific feature on or off. It allows code to be released into production without activating it immediately—or only under certain conditions.</p><p>This approach enables gradual rollouts, making it possible to introduce changes incrementally and minimize risk. It also supports quick rollbacks, allowing you to disable problematic features without redeploying. </p><p>Additionally, feature toggles are commonly used for A/B testing scenarios, where new functionality is enabled for a subset of users to compare their responses with those of a control group.</p><p>However, the context is quite different when working with Terraform, an IaC language. So, why should you consider using feature flags with Terraform? </p><p>The reasons are <strong>flexibility &amp; backwards compatibility</strong>. Let's see how we can put them to work.</p><h2 id="toggling-with-count-meta-argument">Toggling with <code>count</code> meta-argument</h2><p>A fundamental feature toggle in HCL looks as follows. </p><pre><code class="language-hcl">variable "enable_feature" {
  type    = bool 
  default = false
}

resource "azurerm_resource_group" "rg" {
  count = var.enable_feature ? 1 : 0

  name     = "rg-feature-toggle-demo"
  location = "switzerlandnorth"
}</code></pre><p>Here, the meta-argument <code>count</code> is used to define how many instances of a resource the provider should create. In our scenario, we combine it with a conditional expression <code>condition ? true_value : false_value</code>. So, when the variable <code>enable_feature</code> is <code>true</code>, a single instance (<code>1</code>) of the resource gets created.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">You can enable this demo feature from the command line like so: <code spellcheck="false" style="white-space: pre-wrap;">terraform apply -var enable_feature=true</code></div></div><p>This pattern has a catch when resources need to reference each other. For example, the following code won't work! </p><pre><code class="language-hcl">variable "enable_feature" {
  type    = bool 
  default = false
}

resource "azurerm_resource_group" "rg" {
  count = var.enable_feature ? 1 : 0

  name     = "rg-feature-toggle-demo"
  location = "switzerlandnorth"
}

resource "azurerm_network_security_group" "nsg" {
  count = var.enable_feature ? 1 : 0

  name                = "nsg-feature-toggle-demo"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
}</code></pre><p>Trying to apply this configuration will result in the following error.</p><pre><code class="language-hcl">│ Error: Missing resource instance key
│ 
│   on main.tf line 39, in resource "azurerm_network_security_group" "nsg":
│   39:   resource_group_name = azurerm_resource_group.rg.name
│ 
│ Because azurerm_resource_group.rg has "count" set, its attributes must be accessed on specific instances.
│ 
│ For example, to correlate with indices of a referring resource, use:
│     azurerm_resource_group.rg[count.index]</code></pre><p>Since the <code>count</code> meta-argument creates <strong><em>instances</em></strong>, we need to reference a specific instance by its index. So this will fix it.</p><pre><code class="language-hcl">variable "enable_feature" {
  type    = bool
  default = false
}

resource "azurerm_resource_group" "rg" {
  count = var.enable_feature ? 1 : 0

  name     = "rg-feature-toggle-demo"
  location = "switzerlandnorth"
}
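
# Sketch (assumes Terraform >= 0.15, not part of the original example):
# one() unwraps a zero- or one-element list, returning null when the
# feature is disabled, which avoids [0] indexing in downstream references:
#
# output "resource_group_id" {
#   value = one(azurerm_resource_group.rg[*].id)
# }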

resource "azurerm_network_security_group" "nsg" {
  count = var.enable_feature ? 1 : 0

  name                = "nsg-foobar"
  resource_group_name = azurerm_resource_group.rg[0].name
  location            = azurerm_resource_group.rg[0].location
}</code></pre><h2 id="using-foreach-with-a-map">Using <code>for_each</code> with a map</h2><p>An alternative approach to <code>count</code> is using the <code>for_each</code> meta-argument. Here is an example. </p><pre><code class="language-hcl">variable "enable_feature" {
  type    = bool
  default = false
}

resource "azurerm_resource_group" "rg" {
  for_each = var.enable_feature ? { "enabled" = "enabled" } : {}

  name     = "rg-feature-toggle-demo"
  location = "switzerlandnorth"
}

resource "azurerm_network_security_group" "nsg" {
  for_each = var.enable_feature ? { "enabled" = "enabled" } : {}

  name                = "nsg-foobar"
  resource_group_name = azurerm_resource_group.rg["enabled"].name
  location            = azurerm_resource_group.rg["enabled"].location
}</code></pre><p>Again, we are using a conditional expression <code>var.enable_feature ? { "enabled" = "enabled" } : {}</code>. If the expression evaluates to <code>true</code>, a map with a single element is returned (a key named <code>enabled</code> with a value of <code>enabled</code>) that <code>for_each</code> can iterate over. If it evaluates to <code>false</code>, an empty map is returned. </p><p>The version above can be slightly optimized for better readability.</p><pre><code class="language-hcl">resource "azurerm_resource_group" "rg" {
  for_each = var.enable_feature ? { "enabled" = "enabled" } : {}

  name     = "rg-feature-toggle-demo"
  location = "switzerlandnorth"
}

resource "azurerm_network_security_group" "nsg" {
  for_each = var.enable_feature ? { "enabled" = azurerm_resource_group.rg["enabled"] } : {}

  name                = "nsg-foobar"
  resource_group_name = each.value.name
  location            = each.value.location
}</code></pre><p>This time, we directly assign the referenced value to a key named <code>enabled</code> and access its attributes via the <code>each</code> object.</p><h2 id="toggling-specific-arguments">Toggling specific arguments</h2><p>Now we don't want to toggle the entire resource; we only wish to switch a specific argument on and off. </p><p>In the example below, we again use a conditional expression that returns the desired map of Azure tags if <code>var.enable_tags</code> is <code>true</code>; otherwise, we assign <code>null</code>. </p><pre><code class="language-hcl">variable "enable_tags" {
  type    = bool
  default = false
}

resource "azurerm_resource_group" "rg" {
  name     = "rg-feature-toggle-demo"
  location = "switzerlandnorth"
}
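
# Variant (sketch, not in the original example): keep always-on base tags
# and merge the toggled ones in, rather than switching the whole map:
#   tags = merge({ managed_by = "terraform" }, var.enable_tags ? { environment = "dev" } : {})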

resource "azurerm_network_security_group" "nsg" {
  name                = "nsg-foobar"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location

  tags = var.enable_tags ? { environment = "dev" } : null
}</code></pre><div class="kg-card kg-callout-card kg-callout-card-yellow"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">Please note that this only works for optional arguments, because we can't assign <code spellcheck="false" style="white-space: pre-wrap;">null</code> to required arguments!</div></div><h2 id="toggling-features-within-modules">Toggling features within modules </h2><p>Consider a fictional scenario where you'd like to create a vnet including a subnet, and dynamically toggle the creation of a network security group.</p><figure class="kg-card kg-code-card"><pre><code class="language-hcl">terraform {
  # ...
}

provider "azurerm" {
  # ...
}

module "vnet" {
  source = "./vnet_module"

  enable_nsg = false
  
  # ... potentially more useful attributes here
}
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">The root module referencing the child</span></p></figcaption></figure><p>Nothing stops us from using the same <code>count</code> construct within a child module.</p><figure class="kg-card kg-code-card"><pre><code class="language-hcl">variable "enable_nsg" {
  type    = bool
  default = false
}

resource "azurerm_resource_group" "vnet" {
  name     = "rg-feature-toggle-demo"
  location = "switzerlandnorth"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-foobar"
  resource_group_name = azurerm_resource_group.vnet.name
  location            = azurerm_resource_group.vnet.location

  address_space = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "snet1" {
  name                 = "snet1"
  resource_group_name  = azurerm_resource_group.vnet.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_network_security_group" "nsg" {
  count = var.enable_nsg ? 1 : 0

  name                = "nsg-foobar"
  resource_group_name = azurerm_resource_group.vnet.name
  location            = azurerm_resource_group.vnet.location
}

resource "azurerm_subnet_network_security_group_association" "example" {
  count = var.enable_nsg ? 1 : 0

  subnet_id                 = azurerm_subnet.snet1.id
  network_security_group_id = azurerm_network_security_group.nsg[0].id
}
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">A simple child module</span></p></figcaption></figure><p>Again, we use the <code>count</code> meta-argument to dynamically create the network security group and its subnet association.</p><h2 id="another-benefit-that-comes-with-feature-toggles">Another benefit that comes with feature toggles</h2><p>Besides the flexibility that comes with this toggle, there is another benefit that might not be so obvious: <em>backwards compatibility</em>.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">💡</div><div class="kg-callout-text">Feature toggles can be used to provide backwards compatibility in your child modules.</div></div><p>Consider the case where multiple root modules are using your vnet child module. That's what we write modules for, right? You might not even know how many root modules in the enterprise are relying on your shiny vnet module. </p><p>But still, you need to carry on and further enhance the module with a new feature; let's say you decide new vnets should use an <code>edge zone</code>. When looking at the <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network?ref=kloudshift.net#edge_zone-1">documentation</a>, you read ... </p><pre><code>edge_zone - (Optional) Specifies the Edge Zone within the Azure Region where this Virtual Network should exist. Changing this forces a new Virtual Network to be created.</code></pre><p>If we simply add the attribute to our child module, the next <code>terraform apply</code> will re-create the vnet, which is not always what we want.</p><pre><code class="language-hcl">Terraform will perform the following actions:

  # module.vnet.azurerm_virtual_network.vnet must be replaced
-/+ resource "azurerm_virtual_network" "vnet" {
      ~ dns_servers             = [] -&gt; (known after apply)
      + edge_zone               = "switzerlandnorth" # forces replacement
    ...
    }

Plan: 1 to add, 0 to change, 1 to destroy.</code></pre><p>Instead we can toggle the desired attribute, provide a default value of <code>false</code> to the toggle variable, and don't have to worry that other users of the root module will have to recreate their resources.</p><pre><code>variable "enable_edge_zone" {
  type    = bool
  default = false
}

...

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-foobar"
  resource_group_name = azurerm_resource_group.vnet.name
  location            = azurerm_resource_group.vnet.location

  edge_zone     = var.enable_edge_zone ? "switzerlandnorth" : null
  address_space = ["10.0.0.0/16"]
}</code></pre><h2 id="summary">Summary</h2><ul><li>Feature toggles with Terraform provide flexibility as well as backwards compatibility for child modules</li><li>We can use both <code>count</code> and <code>for_each</code> constructs to realize feature toggles</li><li>I prefer the <code>count</code> version since it enhances readability</li><li>The <code>count</code> meta-argument creates <em>instances</em> of resources, and therefore, when referenced by other resources, an instance needs to be accessed by its index</li></ul><p>That's it for today, thanks for reading. 😎</p><h2 id="further-reading">Further reading</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.hashicorp.com/blog/terraform-feature-toggles-blue-green-deployments-canary-test?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Feature Toggles, Blue-Green Deployments &amp; Canary Tests with Terraform</div><div class="kg-bookmark-description">In this post, we demonstrate some approaches to feature toggling, blue-green deployment, and canary testing of Terraform resources to mitigate impact to production infrastructure.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.hashicorp.com/favicon.svg" alt=""><span class="kg-bookmark-author">HashiCorp</span><span class="kg-bookmark-publisher">Rosemary Wang</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.datocms-assets.com/2885/1600071027-a9fd4d1b-6d62-4a23-8720-3433cd3b14a6.png?w=1200&amp;h=630&amp;fit=crop&amp;auto=format" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://developer.hashicorp.com/terraform/language/meta-arguments/count?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">The count Meta-Argument - Configuration Language | Terraform | HashiCorp Developer</div><div class="kg-bookmark-description">Count helps you efficiently manage nearly identical infrastructure resources without writing a separate block for each one.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://developer.hashicorp.com/favicon.svg" alt=""><span class="kg-bookmark-author">The count Meta-Argument - Configuration Language | Terraform | HashiCorp Developer</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://developer.hashicorp.com/og-image/terraform.jpg" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://developer.hashicorp.com/terraform/language/expressions/conditionals?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Conditional Expressions - Configuration Language | Terraform | HashiCorp Developer</div><div class="kg-bookmark-description">Conditional expressions select one of two values. You can use them to define defaults to replace invalid values.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://developer.hashicorp.com/favicon.svg" alt=""><span class="kg-bookmark-author">Conditional Expressions - Configuration Language | Terraform | HashiCorp Developer</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://developer.hashicorp.com/og-image/terraform.jpg" alt="" onerror="this.style.display = 'none'"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://developer.hashicorp.com/terraform/language/meta-arguments/for_each?ref=kloudshift.net"><div class="kg-bookmark-content"><div class="kg-bookmark-title">The for_each Meta-Argument - Configuration Language | Terraform | HashiCorp Developer</div><div class="kg-bookmark-description">The for_each meta-argument allows you to manage similar infrastructure resources without writing a separate block for each one.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://developer.hashicorp.com/favicon.svg" alt=""><span class="kg-bookmark-author">The for_each Meta-Argument - Configuration Language | Terraform | HashiCorp Developer</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://developer.hashicorp.com/og-image/terraform.jpg" alt="" onerror="this.style.display = 'none'"></div></a></figure>]]></content:encoded>
                </item>
    </channel>
</rss>