Windows Containers do not ship with Active Directory support and, due to their nature, can’t (yet) act as full-fledged domain-joined objects. However, a certain level of Active Directory functionality can be supported through the use of group Managed Service Accounts (GMSA).

While Windows containers cannot be domain-joined, they can still take advantage of Active Directory domain identities, similar to when a device is realm-joined. With Windows Server 2012 R2 domain controllers, Microsoft introduced a new type of domain account called a group Managed Service Account (GMSA), which was designed to be shared by services.

https://blogs.technet.microsoft.com/askpfeplat/2012/12/16/windows-server-2012-group-managed-service-accounts/

https://technet.microsoft.com/en-us/library/hh831782(v=ws.11).aspx

We can authenticate to Active Directory resources from a Windows container that is not itself part of the domain. For this to work, certain prerequisites need to be met.

First, your container hosts must be part of Active Directory, and you must be able to utilize group Managed Service Accounts.
https://technet.microsoft.com/en-us/library/hh831782%28v=ws.11%29.aspx?f=255&MSPPError=-2147217396
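As a quick sanity check before you start (a minimal sketch; both cmdlets are standard Windows PowerShell), you can confirm the container host is domain-joined:

        # Returns True when the host is joined to a domain
        (Get-CimInstance Win32_ComputerSystem).PartOfDomain

        # Verifies the secure channel between the host and the domain (run elevated)
        Test-ComputerSecureChannel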

The following steps are needed for a Windows container to communicate with an on-premises SQL Server using a GMSA.
The environment used for this post is described below.

  1. Active Directory Domain Controller installed on server CloudIQDC1.
    • OS – Windows Server 2012/2016.
    • The domain name is cloudiq.local
  2. Below are the domain members (computers) joined to the DC:
    • CIQ-2012R2-DEV
    • CIQSQL2012
    • CIQ-WIN2016-DKR
    • cloud-2016
  3. SQL Server is installed on CIQSQL2012. This will be used for GMSA testing.
    • OS – Windows 2012
  4. cloud-2016 will be used to test the GMSA connection.
    • This is the container host we will use to connect to the on-premises SQL Server using the GMSA account.

  5. The GMSA account name is “container_gmsa”. We will create this and configure it.
Step 1: Create the KDS Root Key
  1. We can generate this only once per domain.
  2. This is used by the KDS service on DCs (along with other information) to generate passwords.
  3. Login to domain controller.
  4. Open PowerShell and execute the commands below.
        Import-Module ActiveDirectory
        # Backdating the effective time makes the key usable immediately (test labs only);
        # in production, let the default 10-hour replication window pass instead.
        Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

  5. Verify your key using the below command.
            Get-KdsRootKey
         
Step 2: Create the GMSA account
  1. Create the GMSA account using the command below.
            
        New-ADServiceAccount -Name container_gmsa -DNSHostName cloudiq.local `
            -PrincipalsAllowedToRetrieveManagedPassword "Domain Controllers", "domain admins", `
            "CN=Container Hosts,CN=Builtin,DC=cloudiq,DC=local" `
            -KerberosEncryptionType RC4, AES128, AES256
         

  2. Use the command below to verify the created GMSA account.
            Get-ADServiceAccount -Identity container_gmsa 
  3. If everything works as expected, you’ll notice a new GMSA object in your domain’s Managed Service Accounts container.
Step 3: Add the GMSA account to the servers where it will be used
  1. Open the Active Directory Administrative Center.
  2. Select the container_gmsa account and click Properties.
  3. Select the Security tab and click Add.
  4. Select only Computers as the object type.
  5. Select the computers that will use the GMSA. In our case we need to add CIQSQL2012 and cloud-2016.
  6. Reboot the domain controller first for these changes to take effect.
  7. Reboot the computers that will be using the GMSA. In our case we need to reboot CIQSQL2012 and cloud-2016.
  8. After the reboots, log in to the domain controller and execute the command below.
            
        Set-ADServiceAccount -Identity container_gmsa `
            -PrincipalsAllowedToRetrieveManagedPassword 'CloudIQDC1$', 'cloud-2016$', 'CIQSQL2012$'
         

Step 4: Install GMSA Account on Servers
  1. Log in to the system where the GMSA account will be used. In our case, log in to cloud-2016; this is the container host we will use to connect to the on-premises SQL Server using the GMSA account.
  2. Execute the command below if the AD features are not available.
            
        Enable-WindowsOptionalFeature -Online -FeatureName ActiveDirectory-Powershell -All
         
  3. Execute the commands below.
        Get-ADServiceAccount -Identity container_gmsa
        Install-ADServiceAccount -Identity container_gmsa
        Test-ADServiceAccount -Identity container_gmsa

  4. If everything is working as expected, you need to create a credential spec file, which is passed to Docker during container creation so the container can utilize this service account. Run the commands below to download a module from Microsoft’s GitHub account; it will create a JSON file containing the required data.
            
        Invoke-WebRequest "https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/live/windows-server-container-tools/ServiceAccounts/CredentialSpec.psm1" `
            -UseBasicParsing -OutFile $env:TEMP\cred.psm1
        Import-Module $env:TEMP\cred.psm1
        New-CredentialSpec -Name Gmsa -AccountName container_gmsa
        #This will return location and name of JSON file
        Get-CredentialSpec 

Step 5: SQL Server Configuration to allow GMSA
  1. On the SQL Server, create a login for the GMSA account and add it to the “sysadmin” role. Based on your on-premises DB access requirements, you can assign suitable roles.
        CREATE LOGIN [cloudiq\container_gmsa$] FROM WINDOWS
        EXEC sp_addsrvrolemember 'cloudiq\container_gmsa$', 'sysadmin'
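To confirm the login was created (an optional check using a standard SQL Server catalog view):

        SELECT name, type_desc
        FROM sys.server_principals
        WHERE name = 'cloudiq\container_gmsa$';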
Bulk Load Data Files in S3 Bucket into Aurora RDS

We typically get data feeds from our clients (usually about 5–20 GB of data). We used to download these data files to our lab environment and use shell scripts to load the data into Aurora RDS. We wanted to avoid unnecessary data transfers, so we decided to set up a data pipeline to automate the process and use S3 buckets for file uploads from the clients.

In theory, it’s a very simple process: set up a data pipeline to load data from an S3 bucket into an Aurora instance. In practice, the setup is a convoluted, multi-step process; it’s not as simple as it sounds. Welcome to the managed-services world.

STEPS INVOLVED:
  • Create a role and attach the S3 bucket policy
  • Create a cluster parameter group
  • Modify the custom parameter group to use the role
  • Reboot the Aurora instance
GRANT AURORA INSTANCE ACCESS TO S3 BUCKET

By default, Aurora cannot access S3 buckets; that’s a common-sense default that reduces the attack surface for better security.

For EC2 machines, you can attach a role, and the machine can then access other AWS services on behalf of the role assigned to the instance. The same method applies to Aurora RDS: you can associate a role that has the required S3 bucket permissions with the Aurora cluster.

There is a ton of documentation on how to create a role and attach policies; it’s a widely adopted best practice in the AWS world. Per the AWS documentation, AWS rotates the access keys attached to these roles automatically. From a security standpoint, this is a lot better than using hard-coded access keys.
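As a rough sketch of those CLI steps (the role name, cluster identifier, and managed policy below are illustrative; rds-trust.json is assumed to contain a trust policy that lets rds.amazonaws.com assume the role, and you should scope the S3 policy to your specific bucket):

        # Create the role and attach an S3 policy (example uses a broad managed policy)
        aws iam create-role --role-name aurora-s3-access --assume-role-policy-document file://rds-trust.json
        aws iam attach-role-policy --role-name aurora-s3-access --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

        # Associate the role with the Aurora cluster
        aws rds add-role-to-db-cluster --db-cluster-identifier my-aurora-cluster --role-arn arn:aws:iam::123456789012:role/aurora-s3-access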

In the traditional datacenter world, you would typically run a few configuration commands to change configuration options (think of sp_configure in SQL Server).

In the AWS RDS world, it’s trickier. Default configurations are attached to your Aurora cluster; if you need to override any of them, you have to create your own DB cluster parameter group and modify your RDS instance to use it. Then you can edit your configuration values.

The way you attach a role to Aurora RDS is through the cluster parameter group.

These three configuration options are related to interaction with S3 Buckets.

  • aws_default_s3_role
  • aurora_load_from_s3_role
  • aurora_select_into_s3_role

Get the ARN for your role and change the above configuration values from the default empty string to the role’s ARN.
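For example, from the CLI (the parameter group name and role ARN are placeholders):

        aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name my-aurora-params --parameters "ParameterName=aurora_load_from_s3_role,ParameterValue=arn:aws:iam::123456789012:role/aurora-s3-access,ApplyMethod=pending-reboot"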

Then modify your Aurora instance and select the role; it should show up in the drop-down menu on the modify-role tab.

GRANT AURORA LOGIN LOAD FILE PERMISSION
 
        
        GRANT LOAD FROM S3 ON *.* TO 'user'@'domain-or-ip-address';
        GRANT LOAD FROM S3 ON *.* TO 'aurora-load-svc'@'%';
REBOOT AURORA INSTANCE

Without a reboot, you will spend a lot of time troubleshooting. You need to reboot the Aurora instance for the new cluster parameter values to take effect.
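The reboot can also be issued from the CLI (the instance identifier is a placeholder):

        aws rds reboot-db-instance --db-instance-identifier my-aurora-instance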

After this, you will be able to execute LOAD DATA FROM S3 into Aurora.

Screenshots (not shown):
  • Create ROLE and attach policy
  • Attach S3 bucket policy
  • Create parameter group
  • Modify custom parameter groups
  • Modify AURORA RDS instance to use ROLE

Troubleshooting :
Errors :

Error Code: 1871. S3 API returned error: Missing Credentials: Cannot instantiate S3 Client 0.078 sec

This usually means the Aurora instance can’t reach the S3 bucket. Make sure you have applied the role and rebooted the instance.
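To check which roles are actually associated with the cluster (the cluster identifier is a placeholder; AssociatedRoles is part of the standard describe-db-clusters output):

        aws rds describe-db-clusters --db-cluster-identifier my-aurora-cluster --query "DBClusters[0].AssociatedRoles"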

Sample BULK LOAD Command:

You can use the following sample scripts to test your setup.

 
        
        LOAD DATA FROM S3 's3://yourbucket/allusers_pipe.txt'
        INTO TABLE ETLStage.users
        FIELDS TERMINATED BY '|'
        LINES TERMINATED BY '\n'
        (@var1, @var2, @var3, @var4, @var5, @var6, @var7, @var8, @var9, @var10, @var11, @var12, @var13, @var14, @var15, @var16, @var17, @var18)
        SET
        userid = @var1,
        username = @var2,
        firstname = @var3,
        lastname = @var4,
        city=@var5,
        state=@var6,
        email=@var7,
        phone=@var8,
        likesports=@var9,
        liketheatre=@var10,
        likeconcerts=@var11,
        likejazz=@var12,
        likeclassical=@var13,
        likeopera=@var14,
        likerock=@var15,
        likevegas=@var16,
        likebroadway=@var17,
        likemusicals=@var18 

Sample File in S3 Public Bucket : s3://awssampledbuswest2/tickit/allusers_pipe.txt

 
        
SELECT * FROM ETLStage.users INTO OUTFILE S3 's3-us-west-2://s3samplebucketname/outputestdata'
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'
        MANIFEST ON
        OVERWRITE ON; 
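Note that SELECT ... INTO OUTFILE S3 requires its own privilege, separate from LOAD FROM S3 (the user name below is illustrative):

        GRANT SELECT INTO S3 ON *.* TO 'aurora-export-svc'@'%';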
 
        
        create table users_01(
        userid integer not null primary key,
        username char(8),
        firstname varchar(30),
        lastname varchar(30),
        city varchar(30),
        state char(2),
        email varchar(100),
        phone char(14),
        likesports varchar(100),
        liketheatre varchar(100),
        likeconcerts varchar(100),
        likejazz varchar(100),
        likeclassical varchar(100),
        likeopera varchar(100),
        likerock varchar(100),
        likevegas varchar(100),
        likebroadway varchar(100),
        likemusicals varchar(100)) 
Getting started with AWS Data Pipeline

AWS Data Pipeline is a web service that you can use to automate the movement and transformation of data. With AWS Data Pipeline, you can define data-driven workflows, so that tasks can be dependent on the successful completion of previous tasks.

AWS Data Pipeline Sample Workflow (diagram not shown)
Default IAM Roles

AWS Data Pipeline requires IAM roles to determine what actions your pipelines can perform and who can access your pipeline’s resources.

The AWS Data Pipeline console creates the following roles for you:

DataPipelineDefaultRole

DataPipelineDefaultResourceRole

DataPipelineDefaultRole:
 
                {
                "Version": "2012-10-17",
                "Statement": [
                 {
                 "Effect": "Allow",
                 "Action": [
                 "s3:List*",
                 "s3:Put*",
                 "s3:Get*",
                 "s3:DeleteObject",
                 "dynamodb:DescribeTable",
                 "dynamodb:Scan",
                 "dynamodb:Query",
                 "dynamodb:GetItem",
                 "dynamodb:BatchGetItem",
                 "dynamodb:UpdateTable",
                 "ec2:DescribeInstances",
                 "ec2:DescribeSecurityGroups",
                 "ec2:RunInstances",
                 "ec2:CreateTags",
                 "ec2:StartInstances",
                 "ec2:StopInstances",
                 "ec2:TerminateInstances",
                 "elasticmapreduce:*",
                 "rds:DescribeDBInstances",
                 "rds:DescribeDBSecurityGroups",
                 "redshift:DescribeClusters",
                 "redshift:DescribeClusterSecurityGroups",
                "sns:GetTopicAttributes",
                 "sns:ListTopics",
                 "sns:Publish",
                 "sns:Subscribe",
                 "sns:Unsubscribe",
                 "iam:PassRole",
                 "iam:ListRolePolicies",
                 "iam:GetRole",
                 "iam:GetRolePolicy",
                 "iam:ListInstanceProfiles",
                 "cloudwatch:*",
                 "datapipeline:DescribeObjects",
                 "datapipeline:EvaluateExpression"
                 ],
                 "Resource": [
                 "*"
                 ]
                 }
                ]
                } 
DataPipelineDefaultResourceRole:
 
                {
                "Version": "2012-10-17",
                "Statement": [
                {
                 "Effect": "Allow",
                 "Action": [
                 "s3:List*",
                 "s3:Put*",
                 "s3:Get*",
                 "s3:DeleteObject",
                 "dynamodb:DescribeTable",
                 "dynamodb:Scan",
                 "dynamodb:Query",
                 "dynamodb:GetItem",
                 "dynamodb:BatchGetItem",
                 "dynamodb:UpdateTable",
                 "rds:DescribeDBInstances",
                 "rds:DescribeDBSecurityGroups",
                 "redshift:DescribeClusters",
                 "redshift:DescribeClusterSecurityGroups",
                 "cloudwatch:PutMetricData",
                 "datapipeline:*"
                 ],
                 "Resource": [
                 "*"
                 ]
                }
                ]
                } 
Error Message:

Unable to create resource for @EC2ResourceObj_2017-05-05T04:25:32 due to: No default VPC for this user (Service: AmazonEC2; Status Code: 400; Error Code: VPCIdNotSpecified; Request ID: bd2f3abb-d1c9-4c60-977f-6a83426a947d)

Resolution:

If you look at your VPCs, you will notice that no default VPC is configured. When launching an EC2 instance, Data Pipeline can’t figure out which VPC to use by default, so the subnet needs to be explicitly specified in the resource configuration.
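Alternatively, newer accounts can recreate a default VPC from the CLI (verify this is supported for your account and region):

        aws ec2 create-default-vpc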

Screenshots (not shown): Subnet ID for the EC2 resource; default VPC.
Build Sample Data Pipeline to Load S3 File into MySQL Table:

  • Use cases for AWS Data Pipeline
  • Set up a sample pipeline in our development environment
  • Import a text file from an AWS S3 bucket into an Aurora instance
  • Send out notifications through SNS to i90runner@gmail.com
  • Export/import the Data Pipeline definition

Prerequisites:

  • A MySQL instance
  • Access to invoke Data Pipeline with appropriate permissions
  • Target database and target table
  • SNS notification set up with the right configuration

Steps to Follow:

  • Create a data pipeline with a name
  • Create the MySQL schema and table
  • Configure your EC2 resource (make sure the EC2 instance has access to the MySQL instance)
  • If the MySQL instance allows only certain IPs and VPCs, then configure your EC2 resource in the same VPC or subnet
  • Configure the data source and the appropriate data format (notice this is a pipe-delimited file, not a CSV file)
  • Configure your SQL insert statement
  • Configure SNS notifications for PASS/FAIL activity
  • Run your pipeline and troubleshoot if errors occur


Data Pipeline JSON Definition:

AWS_Data_PipeLine_S3_MySQL_Defintion.json

Create Table SQL :
 
                create table users_01(
                userid integer not null primary key,
                username char(8),
                firstname varchar(30),
                lastname varchar(30),
                city varchar(30),
                state char(2),
                email varchar(100),
                phone char(14),
                likesports varchar(100),
                liketheatre varchar(100),
                likeconcerts varchar(100),
                likejazz varchar(100),
                likeclassical varchar(100),
                likeopera varchar(100),
                likerock varchar(100),
                likevegas varchar(100),
                likebroadway varchar(100),
                likemusicals varchar(100)) 
 
                INSERT INTO `ETLStage`.`users_01`
                (`userid`,
                `username`,
                `firstname`,
                `lastname`,
                `city`,
                `state`,
                `email`,
                `phone`,
                `likesports`,
                `liketheatre`,
                `likeconcerts`,
                `likejazz`,
                `likeclassical`,
                `likeopera`,
                `likerock`,
                `likevegas`,
                `likebroadway`,
                `likemusicals`)
                VALUES
                (
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?,
                ?
                ); 
Errors Encountered:

errorMessage : Quote character must be defined in record format

https://stackoverflow.com/questions/26111111/data-pipeline-error-on-a-template-from-rds-to-s3-copy

You can use “TSV” as your custom format type and provide:

  • “Column separator” as pipe (|),
  • “Record separator” as new line (\n),
  • “Escape Char” as backslash (\) or any other character you want.

errorId : ActivityFailed:SQLException
errorMessage : No value specified for parameter
errorMessage : Parameter index out of range (1 > number of parameters, which is 0).
errorMessage : Incorrect integer value: ‘FALSE’ for column ‘likesports’ at row 1

Ensure the table column data types are set correctly. By default, MySQL doesn’t convert TRUE/FALSE into a boolean data type.
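One way to handle this (a sketch using the table and column names from the sample schema; adjust for your data) is to convert the text flag during the load, or change the column type:

        -- Convert the 'TRUE'/'FALSE' strings to 1/0 inside the LOAD ... SET clause
        SET likesports = (@var9 = 'TRUE')

        -- Or store the flag as a boolean (TINYINT(1)) column instead of text
        ALTER TABLE ETLStage.users_01 MODIFY likesports BOOLEAN;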

errorMessage for Load script: ERROR 1227 (42000) at line 1: Access denied; you need (at least one of) the LOAD FROM S3 privilege(s) for this operation
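This access-denied error ties back to the grant step covered earlier; the load user needs the LOAD FROM S3 privilege (the user name below is illustrative):

        GRANT LOAD FROM S3 ON *.* TO 'aurora-load-svc'@'%';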

AWS region code | AWS region name | Number of AZs | AZ names
us-west-2 | Oregon | 3 | us-west-2a, us-west-2b, us-west-2c
ap-southeast-2 | Sydney | 3 | ap-southeast-2a, ap-southeast-2b, ap-southeast-2c
us-east-1 | Virginia | 4 | us-east-1a, us-east-1b, us-east-1c, us-east-1e
us-west-1 | N. California | 2 | us-west-1a, us-west-1b
eu-west-1 | Ireland | 3 | eu-west-1a, eu-west-1b, eu-west-1c
eu-central-1 | Frankfurt | 2 | eu-central-1a, eu-central-1b
ap-southeast-1 | Singapore | 2 | ap-southeast-1a, ap-southeast-1b
ap-northeast-1 | Tokyo | 2 | ap-northeast-1a, ap-northeast-1c
sa-east-1 | Sao Paulo | 3 | sa-east-1a, sa-east-1b, sa-east-1c
 
                
                C:\Users\Raju> aws ec2 describe-regions
                C:\Users\Raju> aws ec2 describe-availability-zones
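For example, to list only the AZ names for one region (the --query filter is a standard JMESPath expression):

        aws ec2 describe-availability-zones --region us-west-2 --query "AvailabilityZones[].ZoneName"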
Screenshots (not shown): AWS Regions for EC2; Availability Zones.

This is a continuation of the previous blog post on GMSA setup.

Step 1: Create Docker Image
  1. I have created an ASP.NET MVC app, and it accesses the SQL Server using Windows authentication.
  2. My connection string looks like the one below.
     
                    
                    <connectionStrings>
                    <add name="AdventureWorks2012Entities"
                    connectionString="metadata=res://*/ManagerEmployeeModel.csdl|res://*/ManagerEmployee
                    Model.ssdl|res://*/ManagerEmployeeModel.msl;provider=System.Data.SqlClient;provider 
                    connection string=&quot;data source=CIQSQL2012;initial
                    catalog=AdventureWorks2012;integrated
                    security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;"
                    providerName="System.Data.EntityClient" />
                    </connectionStrings>
                     
  3. I created the Dockerfile and the necessary build folders using Image2Docker. Refer to Image2Docker.
  4. The Dockerfile looks like the one below.
     
                    
                    # escape=`
                    FROM microsoft/aspnet:3.5-windowsservercore-10.0.14393.1066
                    SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

                    # disable DNS cache so container addresses are always fetched from Docker
                    RUN Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' -Name ServerPriorityTimeLimit -Value 0 -Type DWord

                    RUN Remove-Website 'Default Web Site';

                    RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment, `
                        IIS-ASPNET,IIS-ASPNET45,IIS-CommonHttpFeatures,IIS-DefaultDocument, `
                        IIS-DirectoryBrowsing,IIS-HealthAndDiagnostics,IIS-HttpCompressionStatic, `
                        IIS-HttpErrors,IIS-HttpLogging,IIS-ISAPIExtensions,IIS-ISAPIFilter, `
                        IIS-NetFxExtensibility,IIS-NetFxExtensibility45,IIS-Performance,IIS-RequestFiltering, `
                        IIS-Security,IIS-StaticContent,IIS-WebServer,IIS-WebServerRole,NetFx4Extended-ASPNET45

                    # Set up website: MyGSMAMvc
                    RUN New-Item -Path 'C:\inetpub\wwwroot\MyAspNetMVC_GSMA' -Type Directory -Force;

                    RUN New-Website -Name 'MyGSMAMvc' -PhysicalPath 'C:\inetpub\wwwroot\MyAspNetMVC_GSMA' -Port 80 -Force;

                    EXPOSE 80

                    COPY ["MyAspNetMVC_GSMA", "/inetpub/wwwroot/MyAspNetMVC_GSMA"]

                    RUN $path='C:\inetpub\wwwroot\MyAspNetMVC_GSMA'; `
                        $acl = Get-Acl $path; `
                        $newOwner = [System.Security.Principal.NTAccount]('BUILTIN\IIS_IUSRS'); `
                        $acl.SetOwner($newOwner); `
                        dir -r $path | Set-Acl -AclObject $acl
                     
  5. Move the necessary files to cloud-2016.
  6. Log in to the cloud-2016 server.
  7. Create the image using the command below (note the trailing dot: the build context). Refer to Docker commands.
     
                    docker build -t myaspnetmvc/gmsa .
Step 2: Create Container
  1. When you create the Docker container, you need to specify additional configuration to utilize the GMSA. Execute the command below.
     
                    
                    docker run -d --security-opt "credentialspec=file://Gmsa.json" myaspnetmvc/gmsa 
  2. Or execute the commands below.
     
                     $id = docker run -d --security-opt "credentialspec=file://Gmsa.json" myaspnetmvc/gmsa
                     docker logs $id
                     $ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
                     start http://$($ip)
  3. Browse to the appropriate page and you should see the DB records.
  4. You can test Active Directory communication as shown below.
    1. Log in to the running Docker container using the docker exec command and check whether you can, in fact, communicate with Active Directory. Execute nltest /parentdomain to verify.
       
                      docker exec -it 0974d72624eb powershell 
                      nltest /parentdomain 
                      cloudiq.local. (1) 
                      The command completed successfully
                        

This is a continuation of the previous posts that covered how to setup and run Image2Docker.

Docker Installation Status
  • Open a PowerShell prompt and execute the following command.
  • docker info
  • If the command returns output like the below, Docker is already installed on the system.

  • If you see an error like the below, Docker is not installed on the machine.


Install Docker if it is not installed
  • Please follow the instructions below if Docker is not installed on your machine.
  • Install the Docker-Microsoft PackageManagement Provider from the PowerShell Gallery.
    Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
  • Next, use the PackageManagement PowerShell module to install the latest version of Docker.
    Install-Package -Name docker -ProviderName DockerMsftProvider
  • When PowerShell asks you whether to trust the package source ‘DockerDefault’, type A to continue the installation. When the installation is complete, reboot the computer.
    Restart-Computer -Force
        Tip: If you want to update Docker later:
        Check the installed version with
     
                    Get-Package -Name Docker -ProviderName DockerMsftProvider 

    Find the current version with    

     
                    Find-Package -Name Docker -ProviderName DockerMsftProvider 

    When you’re ready, upgrade with

     
                    Install-Package -Name Docker -ProviderName DockerMsftProvider -Update -Force 
     
                    Start-Service Docker 
  • Ensure your Windows Server system is up to date by running the following command.
     
                    sconfig
    • This shows a text-based configuration menu, where you can choose option 6 to Download and Install Updates.
       
                      
                      ===============================================================================
                                               Server Configuration
                      ===============================================================================
                      
                      1) Domain/Workgroup:                    Workgroup:  WORKGROUP
                      2) Computer Name:                       WIN-HEFDK4V68M5
                      3) Add Local Administrator
                      4) Configure Remote Management          Enabled
                      
                      5) Windows Update Settings:             DownloadOnly
                      6) Download and Install Updates
                      7) Remote Desktop:                      Disabled
                      ...
                       
    •  When prompted, choose option A to download all updates.
Create Containers from the Image2Docker Dockerfile
  • Make sure Docker is installed on your Windows Server 2016 or Windows 10 (Anniversary Update) machine.
  • To build the Dockerfile into an image, run:
     
                    docker build -t img2docker/aspnetwebsites .
  • Here img2docker/aspnetwebsites is the name of the image. You can use your own name based on your needs.
  • When the build completes, you can run a container to start the ASP.NET sites.
  • This command runs a container in the background, exposes the app port, and stores the ID of the container.
     
                    $id = docker run -d -p 81:80 img2docker/aspnetwebsites 

    Here 81 is the host port number and 80 is the container port number.

  • When the site starts, you will see in the container logs that the IIS Service (W3SVC) is running:
     
                    docker logs $id 

    The Service ‘W3SVC’ is in the ‘Running’ state.

  • Now you can browse to the site running in IIS in the container. Because published ports on Windows containers don’t do loopback yet, if you’re on the machine running the Docker container, you need to use the container’s IP address:
     
                    $ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
                    start http://$($ip)

That will launch your browser and you’ll see your ASP.NET Web application running in IIS, in Windows Server Core, in a Docker container.

