Setting Trace Flags and Configuring SQL Server in Kubernetes

In this blog post, we will walk through two examples of configuring SQL Server in Kubernetes. First, we will create a Deployment for SQL Server, override the container’s command, and specify a Database Engine Service Startup Option. Second, we will create a Deployment for SQL Server that uses a ConfigMap to inject an mssql.conf configuration file.

Creating a SQL Server Deployment and Overriding the Container’s Command and Arguments

First up, let’s create a Deployment for SQL Server and override the container’s command to specify a Database Engine Service Startup Option.

In Kubernetes, you can override the command of a container and specify arguments for that command in the Pod Spec. This enables you to start a container with a custom executable and parameters. Let’s check out an example…in the abbreviated Pod Spec below, when defining the container’s configuration for the SQL Server container, you define the command field with a path to the executable you want the container to start and the args field to specify any arguments (sometimes called parameters) for that command as a comma-separated list. So here you see command: ["/opt/mssql/bin/sqlservr"] and args: ["-T", "3226"] defined. When this Pod starts this container, it will launch SQL Server with that trace flag configured (trace flag 3226 suppresses successful-backup messages in the error log).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    ...
    spec:
      ...
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        command: ["/opt/mssql/bin/sqlservr"]
        args: ["-T", "3226"]
        ports:
        - containerPort: 1433
        ...

The full deployment file is available here: sqlserver.yaml
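The post links the full file, but if you want a sense of its shape, here’s a minimal sketch of a manifest like this one. The labels, environment variables, and Service definition are assumptions inferred from the rest of the demo (the sqlcmd call below connects to port 31433, and the SA password comes from the mssql Secret we’re about to create):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        command: ["/opt/mssql/bin/sqlservr"]
        args: ["-T", "3226"]
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  type: LoadBalancer
  selector:
    app: mssql
  ports:
  - port: 31433
    targetPort: 1433

Note the Service maps port 31433 on the LoadBalancer to 1433 inside the container, which matches how sqlcmd connects later in the post.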

So let’s do it; in the code below, I’m creating a secret for the SA password and deploying the sqlserver.yaml manifest in the Cluster.

kubectl create secret generic mssql --from-literal=SA_PASSWORD='S0methingS@Str0ng!'
kubectl apply -f sqlserver.yaml
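If you want to watch the Pod come up before moving on, you can do something like this:

kubectl get pods --watch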

Once the Pod is up and running in the Cluster, let’s get the IP address of our SQL Server’s Service. The Cluster I’m using for this demo is in AKS, so this Service is of type LoadBalancer. Once we have the IP address, let’s ask SQL Server which trace flags it has configured; in the output below, you can see trace flag 3226 is enabled.

SERVICEIP=$(kubectl get service mssql-deployment -o jsonpath='{ .status.loadBalancer.ingress[].ip }')

sqlcmd -S $SERVICEIP,31433 -U sa -Q 'DBCC TRACESTATUS;' -P 'S0methingS@Str0ng!'

TraceFlag Status Global Session
--------- ------ ------ -------
     3226      1      1       0

If you want to make a configuration change here, update the manifest sqlserver.yaml and deploy that new code to the Cluster with kubectl apply -f sqlserver.yaml. This will shut down the old Pod and create a new Pod with the new configuration. Once SQL Server starts up, it will have the new config.
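For example, to enable a second trace flag you could change the args line in sqlserver.yaml and re-apply the manifest. Trace flag 1222, which writes deadlock details to the error log, is just an illustration here; note that each trace flag gets its own -T switch.

        args: ["-T", "3226", "-T", "1222"]

kubectl apply -f sqlserver.yaml
kubectl rollout status deployment mssql-deployment

Kubernetes sees the changed Pod template and replaces the Pod; once SQL Server restarts, DBCC TRACESTATUS will report both flags.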

Let’s clean up from this demo and delete the resources we created before moving on to the next one.

kubectl delete secret mssql
kubectl delete -f sqlserver.yaml

Creating a SQL Server Deployment and Using a ConfigMap to Inject a Configuration File

Next, let’s configure SQL Server running in a Deployment with an mssql.conf configuration file saved as a ConfigMap. Using this method, you’re decoupling the configuration of the Pod from its definition in the Deployment manifest. ConfigMaps are stored in the Cluster as an object and can be referenced by one or more Deployments. Using ConfigMaps gives you the ability to use a standard configuration across Pods in your Cluster if desired.

Let’s get started…

First, we need an mssql.conf file; here is a straightforward example. You can make this as customized as needed. Check out the Docs for more details on what you can configure and how. In our example here, we’re setting a trace flag again.

[traceflag]
traceflag0 = 3226
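If you need to configure more than a trace flag, the same file can carry other documented settings. Here’s a slightly richer sketch; the memory limit and SQL Server Agent values below are examples, not recommendations:

[traceflag]
traceflag0 = 3226

[memory]
memorylimitmb = 4096

[sqlagent]
enabled = true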

Next, let’s load that mssql.conf file into our Kubernetes Cluster as a ConfigMap. I’m using an imperative method here for demonstration purposes. ConfigMaps can also be defined in YAML as part of a Deployment manifest. There are several different techniques for creating ConfigMaps; check out the docs for more details.

kubectl create configmap mssqlconf --from-file mssql.conf
configmap/mssqlconf created

Let’s get the ConfigMap as YAML to double-check its configuration. Looking good here.

kubectl get configmap mssqlconf -o yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mssqlconf
data:
  mssql.conf: |
    [traceflag]
    traceflag0 = 3226  

Now, let’s walk through the code needed to get that ConfigMap mounted as a file inside our container. In the abbreviated manifest below, we’re defining a Deployment, and in the Pod Spec, you can see the containers field. I want to call out that we are not changing the command or specifying any args in the container’s configuration. Instead, we’re going to expose the ConfigMap inside the container as a file, and we do that using Volumes and VolumeMounts.

In the Volumes field, we’re defining two volumes: the first, mssqldb, is a Persistent Volume Claim for a disk that this Pod will use for persistent storage; the second, mssqlconf, is a ConfigMap volume referencing the ConfigMap we just created, mssqlconf.

Once we have the Volumes defined, we need to expose them to the container using volumeMounts. The first volumeMount is for the Persistent Volume Claim named mssqldb, and it’s mounted at /var/opt/mssql. The second is for our ConfigMap, referencing the volume mssqlconf, and it’s mounted at /var/opt/mssql/mssql.conf as a file. You also need to define subPath so that only the mssql.conf key is mounted as a single file in /var/opt/mssql; without it, Kubernetes would mount the ConfigMap as a directory at that path.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  template:
    ...
    spec:
      ...
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        ...
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql/
        - name: mssqlconf
          mountPath: /var/opt/mssql/mssql.conf
          subPath: mssql.conf
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data
      - name: mssqlconf 
        configMap:
          name: mssqlconf 

The full deployment file is available here: sqlserver-cm.yaml
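One thing to note: the manifest references a Persistent Volume Claim named mssql-data, which needs to exist in the Cluster. If you don’t already have one, a minimal claim might look like this (the 8Gi size is an assumption, and AKS will bind the claim to its default storage class):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi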

So let’s do it; in the code below, I’m creating a secret for the SA password and deploying the sqlserver-cm.yaml manifest in the Cluster.

kubectl create secret generic mssql --from-literal=SA_PASSWORD='S0methingS@Str0ng!'
kubectl apply -f sqlserver-cm.yaml

Once the Pod is up and running in the Cluster, let’s use cat to check that our config file is in the right place with the correct configuration in it…and it looks like it’s all good.

kubectl exec mssql-deployment-64fb867bf5-p4lbw -- cat /var/opt/mssql/mssql.conf
[traceflag]
traceflag0 = 3226

And next, let’s get the IP address of our SQL Server’s Service again and ask SQL Server which trace flags it has configured. In the output below, you can see trace flag 3226 is configured. All is good here too.

SERVICEIP=$(kubectl get service mssql-deployment -o jsonpath='{ .status.loadBalancer.ingress[].ip }')

sqlcmd -S $SERVICEIP,31433 -U sa -Q 'DBCC TRACESTATUS;' -P 'S0methingS@Str0ng!'
TraceFlag Status Global Session
--------- ------ ------ -------
     3226      1      1       0

Now that we’re finished, we can clean up the resources from this demo.

kubectl delete -f sqlserver-cm.yaml
kubectl delete configmap mssqlconf
kubectl delete secret mssql

ConfigMaps offer a lot of flexibility for decoupling configuration from your deployment manifests. The configuration of your SQL Server instances running in Kubernetes can be stored as a ConfigMap in your Cluster and referenced by more than one SQL Server instance if needed.
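As a quick sketch of that idea, a second Deployment could reference the very same ConfigMap in its volumes section. The Deployment name below is hypothetical, and everything elided would match the manifest above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment-2
spec:
    ...
      volumes:
      - name: mssqlconf
        configMap:
          name: mssqlconf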