Docker, SQL Server
Install and Run SQL Server Docker Container on Mac

Like most people, I use Mac, Windows, and Linux for development and testing purposes. Primarily, I use a Mac for development. I have a few projects that use SQL Server as the data storage layer. Setting up the Docker container on the Mac and opening up the ports was pretty easy and didn't take more than 10 minutes.

 

Steps followed :
  • Install Docker
  • Pull SQL Server Docker Image
  • Run SQL Server Docker Image
  • Install mssql Client
  • Install Kitematic
  • Open the Ports to connect to SQL Server from the network
  • Set up port forwarding to enable access from outside the network

 

Install Docker :

Get the Docker dmg image and install it. Just follow the prompts; it's very straightforward.
https://docs.docker.com/docker-for-mac/install/#download-docker-for-mac
https://download.docker.com/mac/stable/Docker.dmg

 

Once you have installed Docker, you can verify the installation and version.
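A quick verification might look like this:

```shell
# Check the installed Docker version
docker --version

# Show detailed information about the Docker installation
docker info
```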

 

Pull SQL Server Docker Image (Dev Version)
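The image name below is taken from the docker run parameters later in this post. (Note: Microsoft has since moved this image to mcr.microsoft.com/mssql/server, so adjust if the older name is unavailable.)

```shell
# Pull the SQL Server for Linux image from Docker Hub
docker pull microsoft/mssql-server-linux
```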

 

Create a SQL Server container from the image and expose it on port 1433 (the default port).

-d: this launches the container in daemon mode, so it runs in the background


--name name_your_container (macsqlserver): gives your Docker container a friendly name, which is useful for stopping and starting containers from the Terminal.


-e 'ACCEPT_EULA=Y': this sets an environment variable in the container named ACCEPT_EULA to the value Y. This is required to run SQL Server for Linux.


-e 'SA_PASSWORD=Passw1rd': this sets an environment variable for the sa database password. Set this to your own strong password. Also required.


-e 'MSSQL_PID=Developer': this sets an environment variable to instruct SQL Server to run as the Developer Edition.


-p 1433:1433: this maps the local port 1433 to the container's port 1433. SQL Server, by default, listens for connections on TCP port 1433.

microsoft/mssql-server-linux: this final parameter tells Docker which image to use.
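Putting all of the flags above together, the complete command looks like this:

```shell
docker run -d \
  --name macsqlserver \
  -e 'ACCEPT_EULA=Y' \
  -e 'SA_PASSWORD=Passw1rd' \
  -e 'MSSQL_PID=Developer' \
  -p 1433:1433 \
  microsoft/mssql-server-linux
```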

 

Install SQL Client for Mac

If you don't have npm installed on your Mac, install Homebrew and Node first.
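A minimal setup might look like this (sql-cli is one common npm-based SQL Server client that provides the mssql command; substitute your preferred client):

```shell
# Install Node.js (which includes npm) via Homebrew; see https://brew.sh for Homebrew itself
brew install node

# Install the sql-cli client globally; it provides the mssql command
npm install -g sql-cli
```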

Connect to SQL Server Instance
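Assuming sql-cli from the previous step, connecting to the container looks like this:

```shell
# Connect to the local SQL Server container as sa
mssql -s localhost -u sa -p Passw1rd
```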

 

Get External Tools to Manage Docker

Kitematic

https://kitematic.com/

 

Open Up the Firewall to connect to SQL Server from outside the Host

Ensure your firewall is configured to allow connections to SQL Server. I turned off “Block all incoming connections” and enabled “Automatically allow downloaded signed software to receive incoming connections”. Without proper firewall configuration, you won't be able to connect to SQL Server from outside the host.

 

 

 

Connecting from the Internet (Port Forwarding Setup)

Let's say you want to connect to the SQL Server you set up from outside the network, or from anywhere via the internet; you can set up port forwarding.

Get your public-facing IP and set up port forwarding for port 1433 (the SQL Server port you configured for your Docker container). If it's set up correctly, you should be able to telnet into that port to verify connectivity.
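For example, with a hypothetical public IP of 203.0.113.10:

```shell
# Verify that port 1433 is reachable from outside the network
telnet 203.0.113.10 1433
```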

Unless you absolutely require it, it's a very bad idea to expose SQL Server to the internet. It should stay inside the network; only your web server should be accessible from the internet.

 

Troubleshooting :

While launching the Docker container, if you get an error saying there isn't enough memory to launch the SQL Server container, go ahead and modify the memory allocation for Docker.

  • This image requires Docker Engine 1.8+ on any of its supported platforms.
  • At least 3.25 GB of RAM. Make sure to assign enough memory to the Docker VM if you're running on Docker for Mac or Windows.

I have set it up this way.

 

If you don't provision enough memory, you will see an error like this.

 

 

Look into Docker Logs

The following commands (docker ps -a and docker logs macsqlserver) show the list of containers and the Docker logs.
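For example:

```shell
# List all containers, including stopped ones
docker ps -a

# Show the logs for the SQL Server container created earlier
docker logs macsqlserver
```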

 

Security:

I highly recommend creating least-privileged accounts and disabling the sa login. If you are exposing your SQL Server to the internet, there are tons of hacking and pentest tools that use the sa login for brute-force attacks.
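A sketch of what that might look like in T-SQL (the login name, password, and database here are placeholders; scope the permissions to what your application actually needs):

```sql
-- Create a least-privileged SQL login for the application
CREATE LOGIN app_user WITH PASSWORD = 'UseAStrongPasswordHere!1';

-- Grant it access to only the database it needs
USE MyAppDb;
CREATE USER app_user FOR LOGIN app_user;
ALTER ROLE db_datareader ADD MEMBER app_user;
ALTER ROLE db_datawriter ADD MEMBER app_user;

-- Disable the sa login
ALTER LOGIN sa DISABLE;
```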


AWS, DevOps, Docker

A microservices-based architecture introduces agility and flexibility and supports a sustainable DevOps culture, ensuring closer collaboration within businesses. The good news is that it's actually happening for those who have embraced it.

 

True, monolithic application architectures have enabled businesses to benefit from IT all along: a monolith is a single codebase that is simple to develop, test, and run. Since monoliths are also based on a logical, modular, hexagonal or layered architecture (a presentation layer responsible for handling HTTP requests and responding with HTML or JSON/XML, a business logic layer, database access, and application integration), they cover and tie together all processes, functions, and gaps to an extent.

Despite these facts, monolithic software, which was instrumental in helping businesses embrace IT in their early stages and which still exists today, is showing problems. Increasingly complex business operating conditions are largely to blame.

 

So, how do businesses today address the new pressures caused by digitization, continuous technology disruption, increased customer awareness, and sudden regulatory interventions? The answer lies in the agility, flexibility, and scalability of the underlying IT infrastructure: the pillars of rapid adaptability to change.

 

A monolithic app, even when it is based on a well-designed three-tier architecture, loses fluidity and turns rigid in the long run. Irrespective of its modularity, modules still depend on each other, and any minimal change in one module requires generating and deploying all artifacts to each server pool touched across the distributed environment.

 

Besides, whenever there is a critical problem, the blame game starts among the UI developers, business logic experts, backend developers, database programmers, etc., as they are predominantly experts in their own domains but have little knowledge of other processes. As the complexity of business operations sets in, the agility, flexibility, and scalability of your software are severely tested in a monolithic environment.

 

Here's where microservices play a huge role: the underlying architecture helps you break your software applications into independent, loosely coupled services that can be deployed and managed at that level, without having to depend on other services.

 

For example, if your project requires you to design and manage inventory, sales, shipping, billing, and UI shopping-cart modules, you can break each service down into an independently deployable module. Each has its own database, and monitoring and maintenance of application servers are done independently, since the architecture allows you to decentralize the database, reducing complexity. It also enables continuous delivery/deployment of large, complex applications, which means the technology evolves along with the business.

 

The other important aspect is that microservices promote a culture in which whoever develops a service is also responsible for managing it. This avoids the handover mentality, and the misunderstandings and conflicts that follow it whenever there is a crisis.

In line with the DevOps concept, microservices enable easy collaboration between the development and operations teams, as they embrace and work on a common toolset that establishes shared terminology, as well as processes for requirements, dependencies, and problems. There is no denying that DevOps and microservices work better when applied together.

 

Perhaps that's why companies like Netflix and Amazon are embracing microservices in their products. And for other businesses embracing it, a new environment emerges where agility, flexibility, and closer collaboration between business and technology become a reality, providing the much-needed edge in these challenging times.


Docker, SQL Server

Windows containers do not ship with Active Directory support and, due to their nature, can't (yet) act as full-fledged domain-joined objects, but a certain level of Active Directory functionality can be supported through the use of group Managed Service Accounts (GMSA).

 

Although Windows containers cannot be domain-joined, they can still take advantage of Active Directory domain identities, similar to when a device is realm-joined. Windows Server 2012 R2 domain controllers introduced a new domain account called a group Managed Service Account (GMSA), which was designed to be shared by services.

 

https://blogs.technet.microsoft.com/askpfeplat/2012/12/16/windows-server-2012-group-managed-service-accounts/

https://technet.microsoft.com/en-us/library/hh831782(v=ws.11).aspx

 

We can authenticate to Active Directory resources from a Windows container that is not part of the domain. For this to work, certain prerequisites need to be met.

 

First, your container hosts must be part of Active Directory so that you can utilize Group Managed Service Accounts.
https://technet.microsoft.com/en-us/library/hh831782%28v=ws.11%29.aspx?f=255&MSPPError=-2147217396

 

The following steps are needed for a Windows container to communicate with an on-premises SQL Server using GMSA.
The following environments are used and described in this post:

  1. Active directory Domain Controller installed on server CloudIQDC1.
    • OS – Windows Server 2012/2016.
    • The domain name is cloudiq.local
  2. Below are the Domain members (Computers) joined in DC
    • CIQ-2012R2-DEV
    • CIQSQL2012
    • CIQ-WIN2016-DKR
    • cloud-2016
  3. SQL server installed on CIQSQL2012. This will be used for GMSA testing.
    • OS – Windows 2012
  4. cloud-2016 will be used to test the GMSA connection.
    • This is the container host we are using to connect to the on-premises SQL Server using the GMSA account.
  5. The GMSA account name is “container_gmsa”. We will create and configure this.

 

Step 1: Create the KDS Root Key
  1. We can generate this only once per domain.
  2. This is used by the KDS service on DCs (along with other information) to generate passwords.
  3. Login to domain controller.
  4. Open PowerShell and execute the command below.
  5. Verify your key using the command below.
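The commands referenced above are typically (assuming the ActiveDirectory PowerShell module on the domain controller):

```powershell
# Create the KDS root key (in production this takes up to 10 hours to replicate)
Add-KdsRootKey -EffectiveImmediately

# For a test lab, backdate the key so it is usable right away
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

# Verify the key
Get-KdsRootKey
```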

 

Step 2: Create GMSA account
  1. Create the GMSA account using the command below.
  2. Use the command below to verify the created GMSA account.
  3. If everything works as expected, you'll notice a new gMSA object in your domain's Managed Service Accounts container.
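A sketch of those commands (the DNS hostname is an assumption for illustration; adjust it to your domain):

```powershell
# Create the GMSA account in the cloudiq.local domain
New-ADServiceAccount -Name container_gmsa -DNSHostName container_gmsa.cloudiq.local

# Verify the account was created
Get-ADServiceAccount -Identity container_gmsa
```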

 

Step 3: Add the GMSA account to the servers where we are going to use it
  1. Open the Active directory Admin Center.
  2. Select the container_gmsa account and click on properties.
  3. Select the security and click on add.
  4. Under object types, select only Computers.
  5. Select the computers you want to use the GMSA on. In our case, we need to add CIQSQL2012 and cloud-2016.
  6. Reboot the domain controller first for these changes to take effect.
  7. Reboot the computers that will be using the GMSA. In our case, we need to reboot CIQSQL2012 and cloud-2016.
  8. After the reboots, log in to the domain controller and execute the command below.
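The verification on the domain controller is likely something like the following (an assumption; it shows which principals may retrieve the managed password after the changes above):

```powershell
Get-ADServiceAccount -Identity container_gmsa -Properties PrincipalsAllowedToRetrieveManagedPassword
```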

 

Step 4: Install GMSA Account on Servers
  1. Log in to the system where the GMSA account will be used. In our case, log in to cloud-2016. This is the container host we are using to connect to the on-premises SQL Server using the GMSA account.
  2. Execute the command below if the AD features are not available.
  3. Execute the commands below.
  4. If everything is working as expected, you then need to create a credential spec file, which is passed to Docker during container creation to utilize this service account. Run the commands below to download the module that creates this file from Microsoft's GitHub account; it will create a JSON file containing the required data.
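A sketch of steps 2-4 on the container host (the CredentialSpec.psm1 URL and the New-CredentialSpec parameters are assumptions based on Microsoft's virtualization-documentation GitHub repository):

```powershell
# Step 2: install the AD PowerShell tooling if it is missing
Add-WindowsFeature RSAT-AD-PowerShell

# Step 3: install and test the GMSA on this host
Install-ADServiceAccount -Identity container_gmsa
Test-ADServiceAccount -Identity container_gmsa   # should return True

# Step 4: download the CredentialSpec module and generate the JSON file
Invoke-WebRequest "https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/live/windows-server-container-tools/ServiceAccounts/CredentialSpec.psm1" -OutFile CredentialSpec.psm1
Import-Module .\CredentialSpec.psm1
New-CredentialSpec -Name container_gmsa -AccountName container_gmsa
```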

 

Step 5: SQL Server Configuration to allow GMSA
  1. On the SQL Server, create a login for the GMSA account and add it to the “sysadmin” role. Based on your on-premises DB access requirements, you can assign suitable roles.
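A sketch of the T-SQL (the GMSA login uses the domain account with a trailing $; sysadmin is used here only because the step above does — prefer narrower roles where possible):

```sql
-- Create a login for the GMSA account (note the trailing $)
CREATE LOGIN [cloudiq\container_gmsa$] FROM WINDOWS;

-- Add it to the sysadmin role, per the step above
ALTER SERVER ROLE [sysadmin] ADD MEMBER [cloudiq\container_gmsa$];
```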

Docker

This is a continuation of the previous blog post on GMSA setup.


Step 1: Create Docker Image
  1. I have created an ASP.NET MVC app, and it accesses SQL Server using Windows authentication.
  2. My connection string looks like the one below.
  3. I created the Dockerfile and the necessary build folders using Image2Docker. Refer to Image2Docker.

  4. The Dockerfile looks like the one below.
  5. Move the necessary files to cloud-2016.
  6. Log in to the cloud-2016 server.
  7. Create the image using the commands below. Refer to Docker commands.
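As a sketch, the pieces referenced above might look like this (the server name CIQSQL2012 is from the environment list in the previous post; the database and image names are placeholders):

```xml
<!-- Web.config connection string using Windows (integrated) authentication -->
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=CIQSQL2012;Database=MyAppDb;Integrated Security=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

```shell
# On cloud-2016: build the image from the Image2Docker-generated Dockerfile
docker build -t mvcapp .
```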
Step 2: Create Container
  1. When you are creating the Docker container, you need to specify additional configuration to utilize the GMSA. Please execute the commands below.
  2. Or execute the commands below.
  3. Browse to the appropriate page; you can see the DB records.
  4. You can test the Active Directory communication as below.
    1. Log in to the running Docker container using the docker exec command and check whether you can, in fact, communicate with Active Directory. Execute nltest /parentdomain to verify.
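The container creation typically looks like this (mvcapp and the JSON filename are placeholders; the credential spec file is the one generated in the previous post, and setting the container hostname to the gMSA name is commonly required for Windows authentication to work):

```shell
docker run -d -p 80:80 --security-opt "credentialspec=file://container_gmsa.json" --hostname container_gmsa mvcapp
```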

Docker

This is a continuation of the previous posts that covered how to set up and run Image2Docker.

Docker Installation Status
  • Open a PowerShell prompt and execute the following command.
  • docker info
  • Docker is already installed on the system if the command returns something like the output below.


  • Docker is not installed on the machine if you see an error like the one below.


Install Docker if it is not installed
  • Please follow the instructions below if Docker is not installed on your machine.
  • Install the Docker-Microsoft PackageManagement Provider from the PowerShell Gallery.
    Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
  • Next, use the PackageManagement PowerShell module to install the latest version of Docker.
    Install-Package -Name docker -ProviderName DockerMsftProvider
  • When PowerShell asks you whether to trust the package source ‘DockerDefault’, type A to continue the installation. When the installation is complete, reboot the computer.
    Restart-Computer -Force
        Tip: If you want to update Docker later:
        Check the installed version with

    Find the current version with    

    When you’re ready, upgrade with
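Those three commands are typically the following (based on the DockerMsftProvider documentation):

```powershell
# Check the installed version
Get-Package -Name Docker -ProviderName DockerMsftProvider

# Find the current version available in the gallery
Find-Package -Name Docker -ProviderName DockerMsftProvider

# Upgrade, then start the Docker service again
Install-Package -Name Docker -ProviderName DockerMsftProvider -Update -Force
Start-Service Docker
```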

  • Ensure your Windows Server system is up to date. Run the following command.
    • This shows a text-based configuration menu, where you can choose option 6 to Download and Install Updates.

       

    •  When prompted, choose option A to download all updates.
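The text-based configuration menu described above is the Server Configuration tool, launched as:

```powershell
sconfig
```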

 

Create Containers from the Image2Docker Dockerfile
  • Make sure Docker is installed on your Windows Server 2016 or Windows 10 (with the Anniversary Update) machine.
  • To build that Dockerfile into an image, run:
  • Here img2docker/aspnetwebsites is the name of the image. You can use your own name based on your needs.
  • When the build completes, you can run a container to start the ASP.NET sites.

  • This command runs a container in the background, exposes the app port, and stores the ID of the container.

    Here 81 is the host port number and 80 is the container port number.
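Based on the names and ports in this post, the build and run steps might look like this:

```shell
# Build the Dockerfile generated by Image2Docker into an image
docker build -t img2docker/aspnetwebsites .

# Run a container in the background, publishing host port 81 to container port 80
docker run -d -p 81:80 img2docker/aspnetwebsites
```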

  • When the site starts, you will see in the container logs that the IIS service (W3SVC) is running:

    The Service ‘W3SVC’ is in the ‘Running’ state.

  • Now you can browse to the site running in IIS in the container. But because published ports on Windows containers don't support loopback yet, if you're on the machine running the Docker container, you need to use the container's IP address:
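One way to look up the container's IP address (assuming the default nat network on Windows):

```shell
# Print the container's IP on the nat network; replace <container-id> with your container's ID
docker inspect --format "{{ .NetworkSettings.Networks.nat.IPAddress }}" <container-id>
```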

     

That will launch your browser, and you'll see your ASP.NET web application running in IIS, in Windows Server Core, in a Docker container.


Docker

This is a continuation of the blog post that covers how to set up and run Image2Docker on local machines.

Local Machines
  • This mode looks for IIS installed on the local machine and converts the IIS sites, virtual directories, and applications to Dockerfiles and associated artifacts.
  • Run the following command.
  • The Local parameter is used for IIS discovery on the local machine.
  • The OutputPath parameter specifies the location to store the generated Dockerfile and associated artifacts.
  • The Artifact parameter specifies which artifact to inspect. In our case, this is IIS.
  • The Verbose parameter is optional; it prints all the verbose logs.
  • Following is a sample command.
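A sample invocation matching the parameters above (the output path is a placeholder):

```powershell
ConvertTo-Dockerfile -Local -Artifact IIS -OutputPath C:\i2d\local -Verbose
```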

When it completes, the cmdlet generates a Dockerfile which turns that web server into a Docker image. The Dockerfile has instructions to install IIS and ASP.NET, copy in the website content, and create the sites in IIS.

Disk Images
  • After installing the Image2Docker PowerShell module, you will need one or more valid .vhdx or .wim files (the “source image”). To perform a scan of a valid VHDX or WIM image file, simply call the ConvertTo-Dockerfile command and specify the -ImagePath parameter, passing in the fully-qualified filesystem path to the source image.
  • Run the following command
  • The ImagePath parameter specifies the location of the disk image. {{ImagePath}} -> Provide the path to a valid .vhdx or .wim image stored on the local machine. The disk image must be available locally.
  • OutputPath parameter specifies the location to store the generated Dockerfile and associated artifacts.
  • Artifact parameter specifies what artifact to inspect. In our case this is IIS.
  • Verbose parameter is optional and it will give all the verbose logs.
  • Following is a sample command.
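A sample invocation using the disk image named later in this post (the paths are placeholders):

```powershell
ConvertTo-Dockerfile -ImagePath C:\vms\qa-webserver-01.vhd -Artifact IIS -OutputPath C:\i2d\qa-webserver-01 -Verbose
```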

The qa-webserver-01.vhd image contains two websites: one is an ASP.NET MVC app and the other is a Web API.


When the cmdlet completes, it generates a Dockerfile which turns that web server into a Docker image. The Dockerfile has instructions to install IIS and ASP.NET, copy in the website content, and create the sites in IIS.

Image2Docker extracts the website contents for the ASP.NET MVC app and the Web API and generates a Dockerfile containing the websites configured on the image file.

 


Docker

Image2Docker is a PowerShell module which ports existing Windows application workloads to Docker. It supports multiple application types, but the initial focus is on IIS and ASP.NET apps. You can use Image2Docker to extract ASP.NET websites from a VM, from the local machine, or from a remote machine, so you can run your existing apps in Docker containers on Windows with no application changes.


Image2Docker also supports Windows Server 2012, with support for 2008 and 2003 on the way. The websites on this VM are a mixture of technologies: ASP.NET WebForms, ASP.NET MVC, ASP.NET WebApi, together with a static HTML website.


To learn more about Image2Docker, please visit the following link

https://github.com/docker/communitytools-image2docker-win


Microsoft Windows 10 and Windows Server 2016 introduced new capabilities for containerizing applications. There are two types of container formats supported on the Microsoft Windows platform:

  • Hyper-V Containers – Containers with a dedicated kernel and stronger isolation from other containers
  • Windows Server Containers – application isolation using process and namespace isolation, and a shared kernel with the container host

Prerequisite
  • PowerShell 5.0 needs to be installed to use Image2Docker.

      Download URL: https://www.microsoft.com/en-us/download/details.aspx?id=50395

  • Image2Docker generates a Dockerfile which you can build into a Docker image. The system running the ConvertTo-Dockerfile command does not need Docker installed, but you will need Docker set up on Windows to build images and run containers.

Installation
    • Open PowerShell with administrative privileges and run the following commands.
    • You can validate the presence of the Install-Module command by running: Get-Command -Module PowerShellGet -Name Install-Module. If the PowerShellGet module or the Install-Module command is not accessible, you may not be running a supported version of PowerShell. Make sure you are running PowerShell 5.0 or later on the machine.
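The installation commands are typically:

```powershell
Install-Module -Name Image2Docker
Import-Module -Name Image2Docker
```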

Usage
  • Image2Docker can inspect web servers and extract a Dockerfile containing some or all of the websites configured on the server. ASP.NET is supported, and the generated Dockerfile will be correctly set up to run .NET 2.0, 3.5 or 4.x sites.
  • Image2Docker supports the following source types.
    • Local Machines
    • Remote Path
    • Disk Images

The following commands show how to set up and run Image2Docker on local machines. Instructions for running it against a remote path and disk images will be covered in future blog posts.


  • This mode looks for IIS installed on the local machine and converts the IIS sites, virtual directories, and applications to Dockerfiles and associated artifacts.
  • Run the following command.
  • The Local parameter is used for IIS discovery on the local machine.
  • The OutputPath parameter specifies the location to store the generated Dockerfile and associated artifacts.
  • The Artifact parameter specifies which artifact to inspect. In our case, this is IIS.
  • The Verbose parameter is optional; it prints all the verbose logs.
  • Following is a sample command.
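A sample invocation matching the parameters above (the output path is a placeholder):

```powershell
ConvertTo-Dockerfile -Local -Artifact IIS -OutputPath C:\i2d\local -Verbose
```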