Windows Containers do not ship with Active Directory support and, due to their nature, can't (yet) act as full-fledged domain-joined objects, but a certain level of Active Directory functionality can be supported through the use of group Managed Service Accounts (gMSA).
Although Windows containers cannot be domain-joined, they can still take advantage of Active Directory domain identities, similar to when a device is realm-joined. With Windows Server 2012 R2 domain controllers, Microsoft introduced a new type of domain account called a group Managed Service Account (gMSA), which was designed to be shared by services.
If everything is working as expected, you then need to create a credential spec file, which is passed to Docker during container creation so the container can use this service account. Run the commands below to download the module from Microsoft's GitHub account; it will create a JSON file containing the required data.
# Download the CredentialSpec module (URL assumed from Microsoft's Virtualization-Documentation repo; verify before use)
Invoke-WebRequest "https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/live/windows-server-container-tools/ServiceAccounts/CredentialSpec.psm1" -UseBasicParsing -OutFile $env:TEMP\cred.psm1
Import-Module $env:TEMP\cred.psm1   # make New-CredentialSpec available
New-CredentialSpec -Name Gmsa -AccountName container_gmsa
#This will return location and name of JSON file
Step 5: SQL Server Configuration to allow GMSA
On the SQL Server, create a login for the gMSA account and add it to the sysadmin role. Based on your on-premises DB access needs, you can assign more suitable (restrictive) roles instead.
CREATE LOGIN [cloudiq\container_gmsa$] FROM WINDOWS;
EXEC sp_addsrvrolemember 'cloudiq\container_gmsa$', 'sysadmin';
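To tie the pieces together, the container is then started with the credential spec file so that services running as Local System or Network Service authenticate as the gMSA. A minimal sketch, where my-app-image is a hypothetical application image and Gmsa.json is the file generated in the earlier step:
# Run the container with the credential spec generated earlier.
# Note: on older Windows builds the container hostname must match the gMSA name.
docker run -d --security-opt "credentialspec=file://Gmsa.json" --hostname container_gmsa my-app-image
Inside the container, connection strings can then use Integrated Security instead of hard-coded credentials, e.g. Server=sqlserver01;Database=AppDb;Integrated Security=True (server and database names illustrative).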
We typically get data feeds from our clients (usually about 5-20 GB worth of data). We download these data files to our lab environment and use shell scripts to load the data into Aurora RDS. We wanted to avoid unnecessary data transfers, so we decided to set up a data pipeline to automate the process, using S3 buckets for file uploads from the clients.
In theory, setting up a data pipeline to load data from an S3 bucket into an Aurora instance is a very simple process. Even though it sounds trivial, it is a convoluted, multi-step process. Welcome to the managed-services world.
STEPS INVOLVED:
Create ROLE and Attach S3 Bucket Policy
Create Cluster Parameter Group
Modify Custom Parameter Group to use ROLE
REBOOT AURORA INSTANCE
GRANT AURORA INSTANCE ACCESS TO S3 BUCKET
By default, Aurora cannot access S3 buckets; this is a common-sense default that reduces the attack surface for better security.
For EC2 machines, you can attach a role, and the machine can then access other AWS services on behalf of the role assigned to it. The same method applies to Aurora RDS: you can associate a role that has the required permissions on the S3 bucket.
There is a ton of documentation on how to create a role and attach policies; it is a widely adopted best practice in the AWS world. Per the AWS documentation, the temporary credentials behind these roles are rotated automatically. From a security standpoint, this is a lot better than hard-coded access keys.
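For reference, the same role setup can be scripted with the AWS CLI. This is a hedged sketch in which the role name, policy ARN, and trust-policy file are illustrative; the trust policy must allow rds.amazonaws.com to assume the role:
# Create the role; trust.json holds a trust policy allowing rds.amazonaws.com to assume it
aws iam create-role --role-name aurora-s3-access --assume-role-policy-document file://trust.json
# Attach a policy granting read access to the S3 bucket that holds the client data feeds
aws iam attach-role-policy --role-name aurora-s3-access --policy-arn arn:aws:iam::123456789012:policy/client-feeds-s3-read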
In the traditional datacenter world, you would typically run a few configuration commands to change configuration options (think of sp_configure in SQL Server).
In the AWS RDS world, it is trickier. By default, a default cluster parameter group is attached to your Aurora cluster. If you need to override any default configuration, you have to create your own DB cluster parameter group and modify your RDS instance to use it; then you can edit your configuration values.
The way you attach a role to Aurora RDS is through the cluster parameter group.
Three configuration options relate to interaction with S3 buckets: aurora_load_from_s3_role, aurora_select_into_s3_role, and aws_default_s3_role.
Get the ARN for your role and change these configuration values from the default empty string to the role's ARN.
Then modify your Aurora instance to use the role. It should show up in the drop-down menu on the instance's modify page.
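The same console steps can also be scripted. A rough sketch with the AWS CLI, where the parameter group, cluster, and role names are illustrative and the parameter-group family must match your engine version:
# Create a custom cluster parameter group
aws rds create-db-cluster-parameter-group --db-cluster-parameter-group-name aurora-custom --db-parameter-group-family aurora5.6 --description "Custom cluster parameters for S3 access"
# Point the S3 load parameter at the role's ARN
aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name aurora-custom --parameters "ParameterName=aurora_load_from_s3_role,ParameterValue=arn:aws:iam::123456789012:role/aurora-s3-access,ApplyMethod=pending-reboot"
# Associate the role with the Aurora cluster
aws rds add-role-to-db-cluster --db-cluster-identifier my-aurora-cluster --role-arn arn:aws:iam::123456789012:role/aurora-s3-access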
GRANT AURORA LOGIN LOAD FILE PERMISSION
GRANT LOAD FROM S3 ON *.* TO 'user'@'domain-or-ip-address'
GRANT LOAD FROM S3 ON *.* TO 'aurora-load-svc'@'%'
REBOOT AURORA INSTANCE
Without a reboot, you will spend a lot of time troubleshooting. You need to reboot the Aurora instance for the new cluster parameter values to take effect.
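If you prefer the CLI, the reboot can be issued as follows (instance identifier illustrative):
aws rds reboot-db-instance --db-instance-identifier my-aurora-instance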
After this, you will be able to execute LOAD DATA FROM S3 into Aurora.
Screenshots:
Create ROLE and Attach Policy:
Attach S3 Bucket Policy:
Create Parameter Group:
Modify Custom Parameter Group:
Modify AURORA RDS Instance to use ROLE:
Error Code: 1871. S3 API returned error: Missing Credentials: Cannot instantiate S3 Client 0.078 sec
This usually means the Aurora instance cannot reach the S3 bucket. Make sure you have applied the role and rebooted the instance.
Sample BULK LOAD Command:
You can use the following sample script to test your setup.
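A minimal hedged sketch of Aurora MySQL's LOAD DATA FROM S3 syntax, with the bucket, table, and delimiter choices illustrative:
-- Load a pipe-delimited feed file from S3 into a staging table
LOAD DATA FROM S3 's3://my-client-feeds/feed.txt'
INTO TABLE staging.client_feed
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n';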
AWS Data Pipeline is a web service that you can use to automate the movement and transformation of data. With AWS Data Pipeline, you can define data-driven workflows, so that tasks can be dependent on the successful completion of previous tasks.
AWS Data Pipeline Sample Workflow
Default IAM Roles
AWS Data Pipeline requires IAM roles to determine what actions your pipelines can perform and who can access your pipeline’s resources.
The AWS Data Pipeline console creates the following roles for you: DataPipelineDefaultRole and DataPipelineDefaultResourceRole.
Error Message: Unable to create resource for @EC2ResourceObj_2017-05-05T04:25:32 due to: No default VPC for this user (Service: AmazonEC2; Status Code: 400; Error Code: VPCIdNotSpecified; Request ID: bd2f3abb-d1c9-4c60-977f-6a83426a947d)
If you look at your VPCs, you will notice that no default VPC is configured. When launching an EC2 instance, Data Pipeline cannot figure out which VPC to use on its own, so the subnet must be specified explicitly in the resource configuration.
Subnet ID for the EC2 Resource
Build Sample Data Pipeline to Load S3 File into MySQL Table:
Use cases for the AWS Data Pipeline setup:
Set up a sample pipeline in our development environment
Import a text file from an AWS S3 bucket into an Aurora instance
Send out notifications through SNS to email@example.com
Export / import the Data Pipeline definition (see the CLI sketch after the steps below)
Prerequisites:
Access from Data Pipeline to the MySQL instance, with appropriate permissions
Target database and target table
SNS notification set up with the right configuration
Steps to Follow:
Create the Data Pipeline with a name
Create the MySQL schema and table
Configure your EC2 resource (make sure the EC2 instance has access to the MySQL instance; if the MySQL instance allows only certain IPs or a specific VPC, configure your EC2 resource in the same VPC or subnet)
Configure the data source and the appropriate data format (notice this is a pipe-delimited file, not a CSV file)
Configure your SQL insert statement
Configure SNS notifications for the pass/fail of the activity
Run your pipeline and troubleshoot any errors that occur
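As referenced in the use cases above, the pipeline definition can be exported from one pipeline and imported into another with the AWS CLI (pipeline IDs illustrative):
# Export an existing pipeline's definition to a JSON file
aws datapipeline get-pipeline-definition --pipeline-id df-EXAMPLE1 > pipeline-definition.json
# Import the definition into another pipeline and activate it
aws datapipeline put-pipeline-definition --pipeline-id df-EXAMPLE2 --pipeline-definition file://pipeline-definition.json
aws datapipeline activate-pipeline --pipeline-id df-EXAMPLE2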
You can use “TSV” type as your custom format type and provide:
“Column separator” as pipe(|),
“Record separator” as new line(\n),
“Escape Char” as backslash (\) or any other character you want.
errorId: ActivityFailed:SQLException
errorMessage: No value specified for parameter
errorMessage: Parameter index out of range (1 > number of parameters, which is 0).
errorMessage: Incorrect integer value: 'FALSE' for column 'likesports' at row 1
Ensure the table column data types are set correctly. By default, MySQL does not convert the strings TRUE/FALSE into a boolean data type.
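One hedged way to handle this, assuming a hypothetical users table whose likesports column is TINYINT(1), is to load the raw string into a user variable and map it explicitly:
-- Map the literal strings 'TRUE'/'FALSE' to 1/0 during the load
LOAD DATA FROM S3 's3://my-client-feeds/users.txt'
INTO TABLE users
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
(userid, @likesports)
SET likesports = (@likesports = 'TRUE');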
errorMessage: Parameter index out of range (1 > number of parameters, which is 0).
errorMessage for load script: ERROR 1227 (42000) at line 1: Access denied; you need (at least one of) the LOAD FROM S3 privilege(s) for this operation
The second error means the MySQL user running the load is missing the LOAD FROM S3 privilege; grant it as shown in the Aurora section above.
This is a continuation of the previous posts that covered how to set up and run Image2Docker.
Docker Installation Status
Open a PowerShell prompt and execute the following command.
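A simple way to check, assuming the Docker CLI is on the PATH:
docker version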
Docker is already installed on the system if the command returns version details like the output below.
Docker is not installed on the machine if you see an error like the one below.
Install Docker if Not Present
Please follow the instructions below if Docker is not installed on your machine.
Install the Docker-Microsoft PackageManagement Provider from the PowerShell Gallery:
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Next, use the PackageManagement PowerShell module to install the latest version of Docker:
Install-Package -Name docker -ProviderName DockerMsftProvider
When PowerShell asks you whether to trust the package source 'DockerDefault', type A to continue the installation. When the installation is complete, reboot the computer:
Restart-Computer -Force
Tip: if you want to update Docker later, check the installed version and update with the commands below.
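The following commands follow Microsoft's documented DockerMsftProvider workflow for checking and updating the installed version (a sketch; verify against current documentation):
# Check the installed version
Get-Package -Name Docker -ProviderName DockerMsftProvider
# Check the latest available version
Find-Package -Name Docker -ProviderName DockerMsftProvider
# Update Docker and restart the service
Install-Package -Name Docker -ProviderName DockerMsftProvider -Update -Force
Start-Service Docker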
Ensure your Windows Server system is up to date by running the following command:
sconfig
This shows a text-based configuration menu, where you can choose option 6 to Download and Install Updates.
1) Domain/Workgroup: Workgroup: WORKGROUP
2) Computer Name: WIN-HEFDK4V68M5
3) Add Local Administrator
4) Configure Remote Management Enabled
5) Windows Update Settings: DownloadOnly
6) Download and Install Updates
7) Remote Desktop: Disabled
When prompted, choose option A to download all updates.
Create Containers from the Image2Docker Dockerfile.
Make sure Docker is installed on your Windows Server 2016 or Windows 10 (Anniversary Update) machine.
To build that Dockerfile into an image, run:
docker build -t img2docker/aspnetwebsites .
Here img2docker/aspnetwebsites is the name of the image. You can choose your own name based on your needs.
When the build completes, we can run a container to start the ASP.NET sites.
This command runs a container in the background, exposes the app port, and stores the ID of the container.
$id = docker run -d -p 81:80 img2docker/aspnetwebsites
Here 81 is the host port number and 80 is the container port number.
When the site starts, we will see in the container logs that the IIS Service (W3SVC) is running:
docker logs $id
The Service ‘W3SVC’ is in the ‘Running’ state.
Now you can browse to the site running in IIS in the container, but because published ports on Windows containers don’t do loopback yet, if you’re on the machine running the Docker container, you need to use the container’s IP address:
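One way to get the container's IP address, assuming the default nat network for Windows containers, is:
docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
Then browse to http://<that address> on port 80 (the container port) from the host.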