Automation Testing helps complete the software testing life cycle (STLC) in less time and improves the efficiency of the testing process.

Test Automation enables teams to verify functionality, test for regressions, and run simultaneous tests efficiently. In this article we take a detailed look at the available automation testing tools, and the standards and best practices to follow during test automation.

Following the best practices of the software testing life cycle (unit testing, integration testing & system testing) ensures that the client gets the software as intended, without bugs. End-to-end testing is the methodology used to test whether the flow of an application performs as designed from start to finish. Carrying out end-to-end tests helps identify system dependencies and ensures that the right information flows across the various system components.

Ultimately, Automation Testing increases both the speed of test execution and test coverage.

When to Choose Automation Testing
  • There is a lot of regression work
  • The GUI is stable, but there are frequent functional changes
  • Requirements do not change frequently
  • Load and performance testing with many virtual users
  • Repetitive test cases that lend themselves well to automation and save time
  • Large projects
  • Projects that need to test the same areas repeatedly

Steps to Implement Automation Testing
  • Identify areas within software to automate
  • Choose the appropriate tool for test automation
  • Write test scripts
  • Develop test suites
  • Execute test scripts
  • Build result reports
  • Find possible bugs or performance issues
Choosing your Automation Testing Tool

The test automation strategy should clearly define when to opt for automation, its scope, and the selection of the right tools for execution. When it comes to tools, the top ones to consider are:

  • Cypress
  • Selenium
  • Protractor
  • Appium (mobile)
Why Cypress?

Cypress is a JavaScript-based testing framework built for the modern web. Cypress helps you create end-to-end tests, integration tests, and unit tests. It takes a different approach from other testing frameworks: it executes in the same run loop as the application, and it leverages a Node.js server to handle any task that needs to happen outside the browser. Because it understands everything happening inside and outside of the browser, it produces more consistent results.
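As a quick illustration, here is a minimal spec sketch (the URL, selectors, and file name are hypothetical placeholders for your own application):

    // cypress/integration/login.spec.ts — hypothetical example spec
    describe('Login page', () => {
      it('logs in and shows the dashboard', () => {
        cy.visit('http://localhost:4200/login');   // app under test (assumed URL)
        cy.get('#username').type('demo-user');     // Cypress retries the query until the element appears
        cy.get('#password').type('demo-pass');
        cy.get('button[type=submit]').click();
        cy.contains('Dashboard');                  // automatic waiting: no explicit sleeps needed
      });
    });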

Key Features of Cypress
  • Automatic Waiting – No need to add explicit waits or sleeps; Cypress waits for commands and assertions automatically.
  • Spies, Stubs, and Clocks – Verify and control the behaviour of functions, server responses, or timers.
  • Network traffic control and monitoring – Easily control, stub, and test edge cases without involving your server. You can stub network traffic however you like (see the sketch after this list).
  • Consistent Results – The Cypress architecture doesn’t use Selenium or WebDriver. It is fast and consistent and produces reliable, flake-free tests.
  • Screenshots and Videos – View screenshots taken automatically on failure, or videos of your entire test suite when run from the CLI.
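For example, network stubbing might look like the following sketch (cy.intercept is the API in Cypress 6 and later; earlier versions used cy.server/cy.route; the route and fixture name are hypothetical):

    // Answer /api/users from a local fixture instead of the real server
    cy.intercept('GET', '/api/users', { fixture: 'users.json' }).as('getUsers');
    cy.visit('/users');
    cy.wait('@getUsers');   // continue once the stubbed call has resolved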
Azure CI/CD Setup with Cypress

Cypress runs on most popular CI providers, including the following:

Azure DevOps / VSTS CI / TeamFoundation
BitBucket
CircleCI
Docker
GitLab
Jenkins
TravisCI

Azure DevOps – Steps to Integrate Cypress Automation Tests
  • Pre-Build Testing
    • Install the Node modules and run the application in test mode
    • Run the tests
    • Publish the test results
  • Cypress Containerization
    • Build the Cypress docker container
    • Push the image to the container registry
    • Publish the build

Before we get started, here are the basic Cypress commands.

Clean up the old results:
$ rm -rf cypress/reports/

Run Cypress with the required spec file:
$ cypress run --spec "cypress/integration/**/*.spec.ts" // mention your spec file

Configure the mocha reporter path for publishing test results:
--reporter junit --reporter-options 'mochaFile=cypress/reports/test-output-[hash].xml,toConsole=true'

Uninstall the packages:
$ npm uninstall cypress-multi-reporters; npm uninstall cypress-promise; npm uninstall cypress

Pre-Build Testing

It is critical to test the application before a build, deployment, or release. Essentially, the process involves regression and smoke testing. And don’t forget the sanity checks before the build is deployed to the staging environment.

Cypress comes in handy for testing Angular / JavaScript applications before they are deployed to a staging or production environment.

Install the Node modules and run the application in test mode

Install the required node modules of the application, then run the application in test mode. start-server-and-test takes the server command, the URL to wait for, and the test command to run once the server is up (the cy:run script name here is an illustrative assumption):

$ npm install --save-dev start-server-and-test

$ start-server-and-test start http://localhost:4200 cy:run
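Wiring this into package.json might look like the following sketch (the start and cy:run script names, and the Angular ng serve command, are assumptions):

    "scripts": {
      "start": "ng serve",
      "cy:run": "cypress run",
      "ci:test": "start-server-and-test start http://localhost:4200 cy:run"
    }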

Publish the test results

The results of the Cypress test execution are stored in the specified path and are added to the Azure DevOps test results. Cypress supports the JUnit, Mocha, and Mochawesome test reporter formats, and provides options to create customised test results and to merge all the test results together.
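In the pipeline, publishing those JUnit XML files might look like this sketch (PublishTestResults@2 and its inputs are standard Azure Pipelines; the file pattern is an assumption matching the reporter options shown earlier):

    # Publish the JUnit XML files produced by the Cypress mocha reporter
    - task: PublishTestResults@2
      displayName: 'Publish Cypress test results'
      inputs:
        testResultsFormat: 'JUnit'
        testResultsFiles: 'cypress/reports/test-output-*.xml'
        mergeTestResults: true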

Cypress Containerization

Cypress supports docker containerization, which makes it easy to set up in a cluster environment like AKS. The Cypress base images are available at the link below.

https://github.com/cypress-io/cypress-docker-images

Copy the package.json and the UI source code to the app folder and run the Cypress tests. The following commands run docker and execute the tests.
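A minimal Dockerfile sketch for this, assuming the cypress/included base image from the repository above (the tag and file layout are illustrative):

    # cypress/included ships with Cypress and browsers pre-installed; its entrypoint is "cypress run"
    FROM cypress/included:12.17.4
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci
    COPY . .
    # a plain `docker run <image>` now executes the specs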

  - script: |
      docker run -d -it --name cypressName cypressImage:cypressImageTag bash
      docker commit -p cypressName cypressImage:cypressImageTag
      docker stop cypressName
      docker rm -f cypressName
    displayName: Commit the Cypress container as an image

  - script: docker tag cypressImage:cypressImageTag $(azureContainerRegistry)/acrImageName:$(Build.BuildId)
    displayName: Tag Cypress image

  - task: Docker@1
    displayName: Push image to registry
    inputs:
      command: push
      azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
      azureContainerRegistry: $(azureContainerRegistry)
      imageName: acrImageName:$(Build.BuildId)

  - script: sudo rm -rf /test-results/*
    displayName: Remove previous results

  - task: ShellScript@2
    displayName: 'Bash Script - cypress base image post-deployment'
    inputs:
      scriptPath: ./cypress-deployment.sh
      args: $(azureRegistry) $(cypressImageName) $(azureContainerValue) $(CYPRESS_OPTIONS)
    continueOnError: true

  - task: PublishTestResults@1
    displayName: 'Publish Test Results ./test-results-*.xml'
    inputs:
      testResultsFiles: ./test-results-*.xml

cypress-deployment.sh, referenced above:

docker run -v $systemSourceDirectory:/app/cypress/reports --name vca-arp-ui \
  $cypress_Latestimage npx cypress run $cypressOptions

Now the container is set up on your local machine and ready to run your specs.

Cypress is simple and integrates easily with your CI environment. Beyond its browser support, Cypress reduces manual testing effort and is relatively fast compared to other automation testing tools.



Image2Docker is a PowerShell module which ports existing Windows application workloads to Docker. It supports multiple application types, but the initial focus is on IIS and ASP.NET apps. You can use Image2Docker to extract ASP.NET websites from a VM, from the local machine, or from a remote machine, so you can run your existing apps in Docker containers on Windows with no application changes.

Image2Docker also supports Windows Server 2012, with support for 2008 and 2003 on the way. The websites extracted from a VM can be a mixture of technologies – ASP.NET WebForms, ASP.NET MVC, ASP.NET WebApi, together with static HTML websites.

To learn more about Image2Docker, please visit the following link

https://github.com/docker/communitytools-image2docker-win

Microsoft Windows 10 and Windows Server 2016 introduced new capabilities for containerizing applications. There are two types of container formats supported on the Microsoft Windows platform:

  • Hyper-V Containers – Containers with a dedicated kernel and stronger isolation from other containers
  • Windows Server Containers – application isolation using process and namespace isolation, and a shared kernel with the container host

Prerequisite
  • PowerShell 5.0 needs to be installed to use Image2Docker.

      Download URL: https://www.microsoft.com/en-us/download/details.aspx?id=50395

  • Image2Docker generates a Dockerfile which you can build into a Docker image. The system running the ConvertTo-Dockerfile command does not need Docker installed, but you will need Docker set up on Windows to build images and run containers.

Installation
  • Open PowerShell with administrative privileges and run the following commands:
     
                    
                    Install-Module Image2Docker
                    Import-Module Image2Docker
                     
  • You can validate the presence of the Install-Module command by running: Get-Command -Module PowerShellGet -Name Install-Module. If the PowerShellGet module or the Install-Module command is not accessible, you may not be running a supported version of PowerShell. Make sure that you are running PowerShell 5.0 or later; a quick version check follows below.
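To check the version quickly, you can inspect the built-in $PSVersionTable variable:

                    # Prints the PowerShell engine version (Major should be 5 or higher)
                    $PSVersionTable.PSVersion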

Usage
  • Image2Docker can inspect web servers and extract a Dockerfile containing some or all of the websites configured on the server. ASP.NET is supported, and the generated Dockerfile will be correctly set up to run .NET 2.0, 3.5 or 4.x sites.
  • Image2Docker supports the following source types:
    • Local Machines
    • Remote Path
    • Disk Images

The following commands show how to set up and run Image2Docker on local machines. Instructions on how to run it against remote paths and disk images will be covered in future blog posts.

Local Machines
  • This mode looks for IIS installed on the local machine and converts the IIS sites / virtual directories / applications into a Dockerfile and associated artifacts.
  • Run the following command
     
                    
                     ConvertTo-Dockerfile `
                     -Local `
                     -OutputPath {{OutputPath}} `
                     -Artifact IIS  `	
                     -Verbose
                     
  • The Local parameter enables IIS discovery on the local machine.
  • The OutputPath parameter specifies the location to store the generated Dockerfile and associated artifacts.
  • The Artifact parameter specifies which artifact to inspect; in our case this is IIS.
  • The Verbose parameter is optional and prints detailed logs.
  • The following is a sample command:
     
                    
                    ConvertTo-Dockerfile -Local -OutputPath c:\docker_repo\iis -Artifact IIS -Verbose
                     

This is a continuation of the previous posts that covered how to set up and run Image2Docker.

Docker Installation Status
  • Open a PowerShell prompt and execute the following command:
  • docker info
  • If the command returns engine and server details, Docker is already installed on the system.
  • If the command returns an error such as “docker: command not found”, Docker is not installed on the machine.


Install Docker if not present
  • Follow the instructions below if Docker is not installed on your machine.
  • Install the Docker-Microsoft PackageManagement Provider from the PowerShell Gallery.
    Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
  • Next, use the PackageManagement PowerShell module to install the latest version of Docker.
    Install-Package -Name docker -ProviderName DockerMsftProvider
  • When PowerShell asks whether to trust the package source ‘DockerDefault’, type A to continue the installation. When the installation is complete, reboot the computer.
    Restart-Computer -Force
        Tip: If you want to update Docker later:
        Check the installed version with
     
                    Get-Package -Name Docker -ProviderName DockerMsftProvider 

    Find the current version with    

     
                    Find-Package -Name Docker -ProviderName DockerMsftProvider 

    When you’re ready, upgrade with

     
                    Install-Package -Name Docker -ProviderName DockerMsftProvider -Update -Force 
     
                    Start-Service Docker 
  • Ensure your Windows Server system is up to date by running the following command:
     
                    Sconfig 
    • This shows a text-based configuration menu, where you can choose option 6 to Download and Install Updates.
       
                      
                      ===============================================================================
                                               Server Configuration
                      ===============================================================================
                      
                      1) Domain/Workgroup:                    Workgroup:  WORKGROUP
                      2) Computer Name:                       WIN-HEFDK4V68M5
                      3) Add Local Administrator
                      4) Configure Remote Management          Enabled
                      
                      5) Windows Update Settings:             DownloadOnly
                      6) Download and Install Updates
                      7) Remote Desktop:                      Disabled
                      ...
                       
    •  When prompted, choose option A to download all updates.
Create Containers from the Image2Docker Dockerfile
  • Make sure Docker is installed on your Windows Server 2016 or Windows 10 (Anniversary Update) machine.
  • To build that Dockerfile into an image, run:
     
                    docker build -t img2docker/aspnetwebsites .
  • Here img2docker/aspnetwebsites is the name of the image. You can give your own name based on your needs.
  • When the build completes, we can run a container to start the ASP.NET sites.
  • This command runs a container in the background, exposes the app port, and stores the ID of the container.
     
                    $id = docker run -d -p 81:80 img2docker/aspnetwebsites 

    Here 81 is the host port number and 80 is the container port number.

  • When the site starts, we will see in the container logs that the IIS Service (W3SVC) is running:
     
                    docker logs $id 

    The Service ‘W3SVC’ is in the ‘Running’ state.

  • Now you can browse to the site running in IIS in the container, but because published ports on Windows containers don’t do loopback yet, if you’re on the machine running the Docker container, you need to use the container’s IP address:
     
                    $ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id

                    start http://$($ip)

That will launch your browser and you’ll see your ASP.NET Web application running in IIS, in Windows Server Core, in a Docker container.

This is a continuation of the previous blog post on GMSA setup.

Step 1: Create Docker Image
  1. I have created an ASP.NET MVC app that accesses SQL Server using Windows authentication.
  2. My connection string looks like the one below.
     
                    
                    <connectionStrings>
                      <add name="AdventureWorks2012Entities"
                        connectionString="metadata=res://*/ManagerEmployeeModel.csdl|res://*/ManagerEmployeeModel.ssdl|res://*/ManagerEmployeeModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=CIQSQL2012;initial catalog=AdventureWorks2012;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;"
                        providerName="System.Data.EntityClient" />
                    </connectionStrings>
                     
  3. I have created the Dockerfile and the necessary build folders using Image2Docker. Refer to Image2Docker.
  4. The Dockerfile looks like the one below.
     
                    
                    # escape=`
                    FROM microsoft/aspnet:3.5-windowsservercore-10.0.14393.1066
                    SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

                    # disable DNS cache so container addresses are always fetched from Docker
                    RUN Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' `
                        -Name ServerPriorityTimeLimit -Value 0 -Type DWord

                    RUN Remove-Website 'Default Web Site';

                    RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment, `
                        IIS-ASPNET,IIS-ASPNET45,IIS-CommonHttpFeatures,IIS-DefaultDocument, `
                        IIS-DirectoryBrowsing,IIS-HealthAndDiagnostics,IIS-HttpCompressionStatic, `
                        IIS-HttpErrors,IIS-HttpLogging,IIS-ISAPIExtensions,IIS-ISAPIFilter, `
                        IIS-NetFxExtensibility,IIS-NetFxExtensibility45,IIS-Performance,IIS-RequestFiltering, `
                        IIS-Security,IIS-StaticContent,IIS-WebServer,IIS-WebServerRole,NetFx4Extended-ASPNET45

                    # Set up website: MyGSMAMvc
                    RUN New-Item -Path 'C:\inetpub\wwwroot\MyAspNetMVC_GSMA' -Type Directory -Force;

                    RUN New-Website -Name 'MyGSMAMvc' -PhysicalPath 'C:\inetpub\wwwroot\MyAspNetMVC_GSMA' -Port 80 -Force;

                    EXPOSE 80

                    COPY ["MyAspNetMVC_GSMA", "/inetpub/wwwroot/MyAspNetMVC_GSMA"]

                    RUN $path='C:\inetpub\wwwroot\MyAspNetMVC_GSMA'; `
                        $acl = Get-Acl $path; `
                        $newOwner = [System.Security.Principal.NTAccount]('BUILTIN\IIS_IUSRS'); `
                        $acl.SetOwner($newOwner); `
                        dir -r $path | Set-Acl -AclObject $acl
                     
  5. Move the necessary files to cloud-2016.
  6. Log in to the cloud-2016 server.
  7. Create the image using the command below. Refer to Docker commands.
     
                    docker build -t myaspnetmvc/gmsa .
Step 2: Create Container
  1. When creating the docker container, you need to specify additional configuration to utilize GMSA. Execute the command below:
     
                    
                    docker run -d --security-opt "credentialspec=file://Gmsa.json" myaspnetmvc/gmsa 
  2. Or execute the commands below:
     
                     $id = docker run -d --security-opt "credentialspec=file://Gmsa.json" myaspnetmvc/gmsa
                     docker logs $id
                     $ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
                     start http://$($ip)
                      
  3. Browse to the appropriate page and you should see the DB records.
  4. You can test Active Directory communication as shown below. 
    1. Log in to the running docker container using the docker exec command and check whether you can in fact communicate with Active Directory. Execute nltest /parentdomain to verify:
       
                      docker exec -it 0974d72624eb powershell 
                      nltest /parentdomain 
                      cloudiq.local. (1) 
                      The command completed successfully
                        

CloudIQ Tech, a growing cloud company helping businesses, big or small, make the right cloud move to realize the true economies of cloud, has announced that it has achieved Gold status for the Microsoft Cloud Platform Competency. The gold level is the highest Microsoft partner level, putting CloudIQ in an exclusive category with the other top partners.

The milestone achievement demonstrates CloudIQ Tech’s deep commitment, vast expertise in Microsoft cloud solutions and its team’s willingness to acquire in-depth knowledge and proficiency in Cloud tools and solutions while uniquely aligning them to evolving Microsoft’s Cloud Strategy and Competency goals. It is to be noted that to earn a Microsoft Gold Competency Certification, partner’s team members must successfully demonstrate their level of technology expertise in general, and deep knowledge of Microsoft and its products in particular. It is a valuable recognition by Microsoft for its partner’s holistic expertise in designing, migrating, integrating and delivering Windows-based applications and infrastructure solutions in the cloud using the Microsoft platform.

Commenting on the occasion, Mr. Prem Kumar Kandalu, CEO of CloudIQ Tech, said “By achieving a Gold Competency, our dream to be part of the distinguished top 1 percent of Microsoft’s partner ecosystem has come true. This is a major step towards our objective of becoming a well-known strategic player in Microsoft Cloud Solutions. Already within a short span of time we had become an Azure Gold Partner and now this Gold status for Microsoft Cloud Platform Competency will help us deliver cloud solutions with more confidence so that our customers drive innovative solutions on the latest Microsoft technology and move ahead successfully.”

About CloudIQ Tech:

CloudIQ Tech is a technology company helping businesses get the best out of emerging technologies, innovation and creative ideas. Our firm conviction that cloud is the way to go has enabled us to invest considerable time and efforts in R&D, focusing in designing, building, and managing cloud infrastructures and solutions that are uncomplicated, easily deployable, scalable while delivering the much needed edge from day one to our customers. The efforts are continual, ably supported by our team of cloud technical experts holding the highest possible certificate levels in designing, developing and implementing AWS and Azure cloud-based solutions.

Today our portfolio includes a range of Solutions & Services that comprise Cloud Consulting, Cloud Migration, Cloud Infrastructure Management services and Managed Cloud services besides DevOps Orchestration and home-grown cloud apps and products. These cloud solutions empower people and organizations to innovate, increase operational efficiency, find opportunities to reduce cost and increase profits, and stay ahead of the competition.


 

This blog post explains how to set up and configure a SQL Server docker container on a Linux machine. Microsoft recently started supporting SQL Server on Linux, and the entire setup takes only a few steps.

Install SQL Server Docker Image
 

//Pull the SQL Server Image from the docker registry
$docker pull microsoft/mssql-server-linux
$docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password' -p 1433:1433 -d microsoft/mssql-server-linux
$docker exec -it 40dd "bash"
$/opt/mssql-tools/bin/sqlcmd -S localhost -U SA
Password:
 


Command History
[root@ip-10-0-0-110 ec2-user]# docker pull microsoft/mssql-server-linux
Using default tag: latest
latest: Pulling from microsoft/mssql-server-linux
4c0c60131530: Pull complete
Digest: sha256:604d27fe5d3d9b4434fb1657e9bf4f2c2bf55ea9bd29dc0cb3660d84bc6f56a8
Status: Downloaded newer image for microsoft/mssql-server-linux:latest
[root@ip-10-0-0-110 ec2-user]# docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password' -p 1433:1433 -d microsoft/mssql-server-linux
40dde973f4a0cc2af469f9d1c2182403d1e22e28c2a8821e29ce832529965513
[root@ip-10-0-0-120 ec2-user]# docker -it 40dd "bash"
flag provided but not defined: -it
See 'docker --help'.
[root@ip-10-0-0-120 ec2-user]# docker exec -it 40dd "bash"
root@40dde973f4a0:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA
Password:
1> select @@servername;
2> go

--------------------------------------------------------------------------------------------------------------------------------
40dde973f4a0

(1 rows affected)
1> select db_name();
2> go

--------------------------------------------------------------------------------------------------------------------------------
master

(1 rows affected)
Version Info:
 

select @@version

/*
Microsoft SQL Server 2017 (CTP2.1) - 14.0.600.250 (X64) 
May 10 2017 12:21:23 
Copyright (C) 2017 Microsoft Corporation. All rights reserved.
Developer Edition (64-bit) on Linux (Ubuntu 16.04.2 LTS)
*/
Backup Database on Docker Container and Copy to Host:

Connect to SQL Server Management Studio or SQLCMD and issue the backup command

Backup Database on Docker Container
 

BACKUP DATABASE H1BData_V2
TO DISK ='/var/opt/mssql/data/SalaryDatabase_V2_06132017.bak'
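Optionally, you can verify the backup before copying it off the container; RESTORE VERIFYONLY is standard T-SQL, and the path matches the backup command above:

-- Checks that the backup set is complete and readable, without restoring it
RESTORE VERIFYONLY
FROM DISK = '/var/opt/mssql/data/SalaryDatabase_V2_06132017.bak'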
Copy the file from Container to Host and Sync with S3 Bucket:
 
 $ docker cp <containerId>:/file/path/within/container /host/path/target

$ docker cp aabb19ca439f:/var/opt/mssql/data/SalaryDatabase_06132017.bak /Docker/

$ aws s3 sync ./ s3://docker-backups

upload: ./SalaryDatabase_06132017.bak  to s3://docker-backups/SalaryDatabase_06132017.bak

Completed 18.4 GiB/25.8 GiB (46.4 MiB/s) with 1 file(s) remaining
Troubleshooting:
 

root@e83b4048db28:/var/opt/mssql/log# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA
Password:
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login failed for user ‘SA’..
root@e83b4048db28:/var/opt/mssql/log# exit
exit
[root@ip-10-0-0-120 ec2-user]# docker rm $(docker ps -a -q)
Error response from daemon: You cannot remove a running container e83b4048db28505951f20fff4aff9f5132695fd1e1c7251c8daeb79d15ac403d. Stop the container before attempting removal or use -f
[root@ip-10-0-0-120 ec2-user]# docker rm -f $(docker ps -a -q)
e83b4048db28
Unable to telnet without allowing access to port 1433
MacBook:.ssh Raju$ telnet 54.44.40.26 1433
Trying 54.44.40.26…
telnet: connect to address 54.44.40.26: Operation timed out
telnet: Unable to connect to remote host
MacBook:.ssh Raju$ telnet 54.44.40.26 1433
Trying 54.44.40.26…
Connected to ec2-54-44-40-26.us-west-2.compute.amazonaws.com.
Escape character is ‘^]’.
^C

Error while connecting through SQL Server Management Studio without allowing access to port 1433:
TITLE: Connect to Server
——————————
Cannot connect to 54.44.40.26.
——————————
ADDITIONAL INFORMATION:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=53&LinkId=20476

———-

The network path was not found

———

BUTTONS:

OK

———

 

Error while connecting through SQLCMD without allowing access to port 1433

C:\Users\>sqlcmd -S 54.44.40.26 -U SA
Password: HResult 0x35, Level 16, State 1
Named Pipes Provider: Could not open a connection to SQL Server [53].
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online..
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Login timeout expired.

Open port 1433 in Security Groups

Allow inbound traffic on port 1433 from the IPs or security groups that need SQL Server access.
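With the AWS CLI, opening the port might look like this sketch (the security group ID and CIDR range are placeholders):

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 1433 \
    --cidr 203.0.113.0/24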



Restart Docker Container and See Docker Logs

 

docker ps -a
docker restart bb1b1
docker logs bb
SQL Server Docker Container Errors:
 

2017-06-09 00:08:39.35 spid9s Starting up database ‘tempdb’.
2017-06-09 00:08:39.45 spid26s Recovery of database ‘UserDBName’ (7) is 0% complete (approximately 1717 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:08:40.15 spid9s The tempdb database has 1 data file(s).
2017-06-09 00:08:40.16 spid36s The Service Broker endpoint is in disabled or stopped state.
2017-06-09 00:08:40.17 spid36s The Database Mirroring endpoint is in disabled or stopped state.
2017-06-09 00:08:40.35 spid36s Service Broker manager has started.
2017-06-09 00:08:41.05 spid33s [INFO] HkRecoverFromLogOpenRange(): Database ID: [5]. Log recovery scan from 00000495:00005D20:006B to 000004C7:00010F48:0002.
2017-06-09 00:08:59.47 spid26s Recovery of database ‘UserDBName’ (7) is 16% complete (approximately 110 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:09:11.16 Logon Error: 18456, Severity: 14, State: 38.
2017-06-09 00:09:11.16 Logon Login failed for user ‘UserName’. Reason: Failed to open the explicitly specified database ‘UserDBName’. [CLIENT: 00.000.00.00]
2017-06-09 00:09:19.51 spid26s Recovery of database ‘UserDBName’ (7) is 30% complete (approximately 94 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:09:39.59 spid26s Recovery of database ‘UserDBName’ (7) is 44% complete (approximately 76 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:09:51.38 spid26s Recovery of database ‘UserDBName’ (7) is 54% complete (approximately 62 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:09:51.40 spid26s Recovery of database ‘UserDBName’ (7) is 54% complete (approximately 62 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
DBSTARTUP (UserDBName, 7): FCBOpenTime took 164 ms
DBSTARTUP (UserDBName, 7): FCBHeaderReadTime took 135 ms
DBSTARTUP (UserDBName, 7): FileMgrPreRecoveryTime took 277 ms
DBSTARTUP (UserDBName, 7): MasterFilesScanTime took 144 ms
DBSTARTUP (UserDBName, 7): AnalysisRecTime took 1470 ms
DBSTARTUP (UserDBName, 7): RedoRecTime took 71938 ms
DBSTARTUP (UserDBName, 7): UndoRecTime took 4903 ms
DBSTARTUP (UserDBName, 7): PhysicalRecoveryTime took 73408 ms
DBSTARTUP (UserDBName, 7): PhysicalCompletionTime took 4913 ms
DBSTARTUP (UserDBName, 7): RecoveryCompletionTime took 102 ms
DBSTARTUP (UserDBName, 7): StartupInDatabaseTime took 136 ms
DBSTARTUP (UserDBName, 7): RemapSysfiles1Time took 125 ms
2017-06-09 00:10:11.44 spid6s Recovery of database ‘UserDBName’ (7) is 63% complete (approximately 55 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
2017-06-09 00:10:31.48 spid6s Recovery of database ‘UserDBName’ (7) is 63% complete (approximately 65 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
2017-06-09 00:10:34.61 spid33s [INFO] HkRedoCloseLastOpenRangeSegment(): Database ID: [5]. Log recovery open segment scan from 00000495:00005D20:006B to 000004C7:00010E78:002F.
2017-06-09 00:10:34.67 spid25s [INFO] redoOpenRangeSegment(): Database ID: [5]. Log recovery open segment scan completed at 000004C7:00010E78:002F.
2017-06-09 00:10:34.67 spid25s [INFO] HkPrintUndoRowStats(): Database ID: [5]. Undo Rows Stats. [UndoRowsSeen] = 0, [UndoRowsMatched] = 0, [InsertRowsMatched] = 0, [InsertRowsSeen] = 0, [UndoRowsAborted] = 0
DBSTARTUP (UserDBName, 5): FCBOpenTime took 202 ms
DBSTARTUP (UserDBName, 5): FCBHeaderReadTime took 133 ms
DBSTARTUP (UserDBName, 5): FileMgrPreRecoveryTime took 308 ms
DBSTARTUP (UserDBName, 5): MasterFilesScanTime took 158 ms
DBSTARTUP (UserDBName, 5): StreamFileMgrPreRecoveryTime took 141 ms
DBSTARTUP (UserDBName, 5): LogMgrPreRecoveryTime took 478 ms
DBSTARTUP (UserDBName, 5): PhysicalCompletionTime took 116181 ms
DBSTARTUP (UserDBName, 5): HekatonRecoveryTime took 116167 ms
2017-06-09 00:10:34.84 spid24s Recovery completed for database UserDBName (database ID 5) in 117 second(s) (analysis 12 ms, redo 0 ms, undo 50 ms.) This is an informational message only. No user action is required.
2017-06-09 00:10:51.48 spid6s Recovery of database ‘UserDBName’ (7) is 71% complete (approximately 53 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
2017-06-09 00:11:11.54 spid6s Recovery of database ‘UserDBName’ (7) is 99% complete (approximately 1 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
2017-06-09 00:11:11.54 spid6s 23 transactions rolled back in database ‘UserDBName’ (7:0). This is an informational message only. No user action is required.
2017-06-09 00:11:11.55 spid6s Recovery is writing a checkpoint in database ‘UserDBName’ (7). This is an informational message only. No user action is required.
2017-06-09 00:11:11.55 spid6s Recovery completed for database UserDBName (database ID 7) in 154 second(s) (analysis 1405 ms, redo 71933 ms, undo 80147 ms.) This is an informational message only. No user action is required.
2017-06-09 00:11:11.56 spid6s Parallel redo is shutdown for database ‘UserDBName’ with worker pool size [2].
2017-06-09 00:11:11.57 spid6s Recovery is complete. This is an informational message only. No user action is required.

Errors while reading log file
EXEC sp_readerrorlog

Msg 22004, Level 16, State 1, Line 0
The log file is not using Unicode format.
===================================
The log file is not using Unicode format. (.Net SqlClient Data Provider)
——————————
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=14.00.0600&EvtSrc=MSSQLServer&EvtID=22004&LinkId=20476
——————————
Server Name: 52.42.36.22
Error Number: 22004
Severity: 16
State: 1
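
As a workaround, you can read the error log directly from the container instead; /var/opt/mssql/log/errorlog is the default log location for SQL Server on Linux (the container ID is from the session above):

$ docker exec -it e83b4048db28 cat /var/opt/mssql/log/errorlog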

Running out of Space

Msg 3202, Level 16, State 1, Line 5 Write on “/var/opt/mssql/data/HWageInfo_06132017.bak” failed: Insufficient bytes transferred. Common causes are backup configuration, insufficient disk space, or other problems with the storage subsystem such as corruption or hardware failure. Check errorlogs/application-logs for detailed messages and correct error conditions. Msg 3013, Level 16, State 1, Line 5 BACKUP DATABASE is terminating abnormally. Ensure you have enough disk space.

References:

https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-docker
https://github.com/Microsoft/mssql-docker/issues/55
https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-troubleshooting-guide

Business Needs:

Our goal is to identify whether the Amazon SQL Server RDS service provides an elastic, highly available, scalable, and operationally efficient solution for our use case. We are evaluating options to migrate our read/write-heavy production SQL Server database to Amazon SQL Server RDS. We have fairly high throughput needs for a few hours a day, for a few months a year, which is mission critical for our business success. Any downtime during peak usage would be catastrophic for our business. We are evaluating the pros and cons of moving to Amazon RDS with provisioned IOPS.

Caveats of AWS RDS SQL Server:

We gathered this information while working with SQL Server RDS.

Feature Name | Supported | Description
SQL Server 2016 Support | No | Microsoft says SQL Server 2016 comes with a very rich feature set and a ton of OLTP enhancements.
Native Backup/Restore | Yes | AWS RDS released this feature a week ago, which makes moving databases across environments a lot easier.
Elastic IOPS | No | Storage and IOPS need to be incremented linearly for higher performance; you can’t get higher IOPS without increasing storage.
Elastic Storage | No | Scaling storage is not an option after launching an instance.
RAID Support | No | We usually have RAID 10 for production workloads, and RDS has no option to configure RAID.
Point-in-Time Restore on Same Instance | No | You can’t do a point-in-time restore onto the existing database; you have to spin up a new instance.
AlwaysOn Availability Groups | No | This provides the ability to fail over a group of databases to a secondary instance.
Mirroring | Yes | Mirroring is a deprecated feature; it is replaced by AlwaysOn Availability Groups.
Linked Servers from RDS | No | But linked servers to RDS are allowed.
Service Broker | No | Comes in handy for services.

There are also no admin privileges: you can’t execute normal SQL Server system stored procedures, and you need to work with option groups and parameter groups to modify configuration. For example, you can’t execute sp_configure to change configurations.
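Configuration changes instead go through DB parameter groups. A hedged AWS CLI sketch (the parameter group name is a placeholder; “max degree of parallelism” is one of the parameters RDS for SQL Server exposes):

$ aws rds modify-db-parameter-group \
    --db-parameter-group-name my-sqlserver-params \
    --parameters "ParameterName=max degree of parallelism,ParameterValue=4,ApplyMethod=immediate"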

