Many of you are running your mission-critical applications on containers, and if you haven’t already deployed Kubernetes to manage your container ecosystem, then chances are you soon will.

If you are considering a Kubernetes implementation, then there are several ways to go about it –

  • In-house Kubernetes deployment – if you have a large enough IT team with the requisite expertise in Kubernetes architecture and deployment, then getting your Kubernetes cluster up and running in-house is certainly a possibility. Kubernetes deployment is a complex process that requires a mix of specific skill sets. Also, running and monitoring a Kubernetes platform requires the full-time services of a dedicated team, and your requirements must justify this additional cost.
  • SaaS solutions for Kubernetes – if your business needs are specific and straightforward, then you can explore the market for pre-designed Kubernetes offerings on a SaaS payment model.
  • Fully outsourced (managed) Kubernetes services – if budget permits and your business demands, then bringing in the professionals is a safe and hassle-free solution. From infrastructure assessments to building a Kubernetes strategy to engineering, deploying, and managing enterprise-wide Kubernetes solutions – you can outsource your entire project to experts.
  • Many service providers like CloudIQ also offer day-to-day management and support as well as Kubernetes training to your IT staff to set up internal management expertise.

If the last decade of cloud has taught us anything, it is that when it comes to technology, bringing in professionals to do the job always turns out to be the best option in the long run. Kubernetes is a sophisticated platform that requires specialized competencies. Here is a look at one of our tutorials on Kubernetes Networking – how it all works under the hood.

KUBERNETES NETWORKING – DATA PLANE

In Kubernetes, applications run as a set of pods, each with its own IP address and port. Kubernetes provides an abstract way to expose these applications/pods as a network service. The various forms of service abstraction include ClusterIP, NodePort, LoadBalancer, and Ingress. When service requests enter the Kubernetes cluster, they have to be directed to the individual service endpoints (pods). This data-plane function is implemented using a Linux kernel feature – iptables.
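As a quick, hedged illustration of these abstractions (the deployment name and ports below are made up for the example and are not part of the cluster discussed later), each Service type can be created from the same workload with kubectl expose:

# Minimal sketch: exposing a hypothetical deployment "demo-app" as each Service type
kubectl expose deployment demo-app --port=80 --target-port=8080 --type=ClusterIP --name=demo-app-clusterip
kubectl expose deployment demo-app --port=80 --target-port=8080 --type=NodePort --name=demo-app-nodeport
kubectl expose deployment demo-app --port=80 --target-port=8080 --type=LoadBalancer --name=demo-app-lb
# Ingress is defined separately as an Ingress resource that routes HTTP traffic to one of these Services.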

Iptables is used to set up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel. Several different tables may be defined. Each table contains a number of built-in chains and may also contain user-defined chains. Each chain is a list of rules which can match a set of packets. Each rule specifies what to do with a packet that matches. This is called a ‘target’, which may be a jump (-j) to a user-defined chain in the same table.
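As a minimal sketch of these concepts (the chain name below is hypothetical and has nothing to do with Kubernetes; root access on the node is assumed), a user-defined chain and a jump target can be created like this:

# Create a user-defined chain in the NAT table (hypothetical name)
sudo iptables -t nat -N MY-DEMO-CHAIN
# Add a rule to it; RETURN sends matching packets back to the calling chain
sudo iptables -t nat -A MY-DEMO-CHAIN -p tcp --dport 8080 -j RETURN
# Jump (-j) to the user-defined chain from a built-in chain
sudo iptables -t nat -A OUTPUT -j MY-DEMO-CHAIN
# Inspect the chain
sudo iptables -t nat -L MY-DEMO-CHAIN -n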

The service (SVC) to service endpoint (SEP) mappings are programmed using KUBE-SERVICES user-defined chains in the NAT (Network Address Translation) table. The contents of these rules can be extracted using the "iptables-save" command:

# Generated by iptables-save v1.6.0 on Mon Sep 16 08:00:17 2019
*nat
:PREROUTING ACCEPT [1:52]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [23:1438]
:POSTROUTING ACCEPT [10:592]
:DOCKER - [0:0]
:IP-MASQ-AGENT - [0:0]
:KUBE-SERVICES - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES

Let’s consider the Services in the following example.

cloudiq@hubandspoke:~$ kubectl get svc --namespace=workshop-development

NAME                                                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ciq-ingress-workshop-development-nginx-ingress-controller    LoadBalancer   192.168.5.65   10.82.0.97    80:30512/TCP,443:31512/TCP   19h

Here, the following service abstractions are defined:

LoadBalancer IP = 10.82.0.97
NodePort = 30512/31512
ClusterIP = 192.168.5.65
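A quick way to cross-check these abstractions against the actual pod endpoints (using the namespace and service name from the example above; the output shape will vary by cluster) is:

# List the pod IP:port pairs backing the service; these are the targets of the KUBE-SEP-* chains shown later
kubectl get endpoints ciq-ingress-workshop-development-nginx-ingress-controller --namespace=workshop-development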

The above services have to be translated to individual service endpoints. The rules performing the matching and translation are programmed using custom chains in the NAT table of iptables, as shown below.

Let's look at the LoadBalancer IP 10.82.0.97 service:

cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep 10.82.0.97
-A KUBE-SERVICES -d 10.82.0.97/32 -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:http loadbalancer IP" -m tcp --dport 80 -j KUBE-FW-SXB4UOYSLPHVISJM
-A KUBE-SERVICES -d 10.82.0.97/32 -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-JLRSZDR3OXJ4SUA2

Let’s look at the HTTPS service available on port 443.

cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep KUBE-FW-JLRSZDR3OXJ4SUA2

:KUBE-FW-JLRSZDR3OXJ4SUA2 - [0:0]
-A KUBE-FW-JLRSZDR3OXJ4SUA2 -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -j KUBE-MARK-MASQ
-A KUBE-FW-JLRSZDR3OXJ4SUA2 -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -j KUBE-SVC-JLRSZDR3OXJ4SUA2
-A KUBE-FW-JLRSZDR3OXJ4SUA2 -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -j KUBE-MARK-DROP
-A KUBE-SERVICES -d 10.82.0.97/32 -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-JLRSZDR3OXJ4SUA2

Below we see the NodePort and ClusterIP translations. The service chain (KUBE-SVC) points to two different service endpoints. To select between the two, a random probability is evaluated against each rule and the matching SEP (service endpoint) chain is chosen.

cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep KUBE-SVC-JLRSZDR3OXJ4SUA2

:KUBE-SVC-JLRSZDR3OXJ4SUA2 - [0:0]
-A KUBE-FW-JLRSZDR3OXJ4SUA2 -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -j KUBE-SVC-JLRSZDR3OXJ4SUA2
-A KUBE-NODEPORTS -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https" -m tcp --dport 31512 -j KUBE-SVC-JLRSZDR3OXJ4SUA2
-A KUBE-SERVICES -d 192.168.5.65/32 -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-JLRSZDR3OXJ4SUA2
-A KUBE-SVC-JLRSZDR3OXJ4SUA2 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-4R3FOXQSM5T2ZADC
-A KUBE-SVC-JLRSZDR3OXJ4SUA2 -j KUBE-SEP-PI7R3ONIYH4XJLMW
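As a generic, hedged sketch of this selection (the chain names below are made up, not taken from the cluster above), this is how iptables spreads traffic across N endpoints: the first rule matches with probability 1/N, the next with 1/(N-1) of the remaining traffic, and the last rule catches whatever falls through, so each endpoint receives an equal share overall.

# Create hypothetical chains (illustration only, requires root)
for c in DEMO-SVC DEMO-SEP-1 DEMO-SEP-2 DEMO-SEP-3; do sudo iptables -t nat -N "$c"; done
# First endpoint: 1/3 of all traffic
sudo iptables -t nat -A DEMO-SVC -m statistic --mode random --probability 0.33333 -j DEMO-SEP-1
# Second endpoint: 1/2 of the remaining 2/3 = 1/3 overall
sudo iptables -t nat -A DEMO-SVC -m statistic --mode random --probability 0.50000 -j DEMO-SEP-2
# Last endpoint: everything that falls through = 1/3 overall
sudo iptables -t nat -A DEMO-SVC -j DEMO-SEP-3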

The actual DNAT (destination NAT) to the pod IP and port is performed in the SEP (service endpoint) chains.

cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep KUBE-SEP-4R3FOXQSM5T2ZADC
:KUBE-SEP-4R3FOXQSM5T2ZADC - [0:0]
-A KUBE-SEP-4R3FOXQSM5T2ZADC -s 10.82.0.10/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-4R3FOXQSM5T2ZADC -p tcp -m tcp -j DNAT --to-destination 10.82.0.10:443
-A KUBE-SVC-JLRSZDR3OXJ4SUA2 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-4R3FOXQSM5T2ZADC
cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep KUBE-SEP-PI7R3ONIYH4XJLMW
:KUBE-SEP-PI7R3ONIYH4XJLMW - [0:0]
-A KUBE-SEP-PI7R3ONIYH4XJLMW -s 10.82.0.82/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-PI7R3ONIYH4XJLMW -p tcp -m tcp -j DNAT --to-destination 10.82.0.82:443
-A KUBE-SVC-JLRSZDR3OXJ4SUA2 -j KUBE-SEP-PI7R3ONIYH4XJLMW
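One simple way to confirm which endpoint is actually receiving traffic is to watch the per-rule packet counters on a cluster node; a minimal sketch using the chain names from the output above (root access on the node is assumed):

# -v shows packet/byte counters per rule; -n skips reverse DNS lookups
sudo iptables -t nat -L KUBE-SEP-4R3FOXQSM5T2ZADC -n -v
sudo iptables -t nat -L KUBE-SEP-PI7R3ONIYH4XJLMW -n -v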

SQL SERVER IN A DOCKER CONTAINER ON LINUX

This blog post explains how to set up and configure a SQL Server Docker container on a Linux machine. Microsoft recently started supporting SQL Server on Linux, and the entire process takes only a few steps.

Install SQL Server Docker Image

# Pull the SQL Server image from the Docker registry
$ docker pull microsoft/mssql-server-linux
$ docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password' -p 1433:1433 -d microsoft/mssql-server-linux
$ docker exec -it 40dd "bash"
$ /opt/mssql-tools/bin/sqlcmd -S localhost -U SA
Password:
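If the connection fails, it can help to first confirm the container is up and that SQL Server has finished starting; a minimal sketch (the container ID prefix 40dd comes from the run above, and the last command assumes sqlcmd is also installed on the host):

# List running containers started from the SQL Server image
docker ps --filter "ancestor=microsoft/mssql-server-linux"
# Tail the container logs to confirm SQL Server completed startup/recovery
docker logs 40dd | tail -20
# Optional: connect from the host through the published port (requires sqlcmd on the host)
sqlcmd -S localhost,1433 -U SA -Q "SELECT @@VERSION"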

Command History
[root@ip-10-0-0-110 ec2-user]# docker pull microsoft/mssql-server-linux
Using default tag: latest
latest: Pulling from microsoft/mssql-server-linux
4c0c60131530: Pull complete
Digest: sha256:604d27fe5d3d9b4434fb1657e9bf4f2c2bf55ea9bd29dc0cb3660d84bc6f56a8
Status: Downloaded newer image for microsoft/mssql-server-linux:latest
[root@ip-10-0-0-110 ec2-user]# docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password' -p 1433:1433 -d microsoft/mssql-server-linux
40dde973f4a0cc2af469f9d1c2182403d1e22e28c2a8821e29ce832529965513
[root@ip-10-0-0-120 ec2-user]# docker -it 40dd "bash"
flag provided but not defined: -it
See 'docker --help'.
[root@ip-10-0-0-120 ec2-user]# docker exec -it 40dd "bash"
root@40dde973f4a0:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA
Password:
1> select @@servername;
2> go

--------------------------------------------------------------------------------------------------------------------------------
40dde973f4a0

(1 rows affected)
1> select db_name();
2> go

--------------------------------------------------------------------------------------------------------------------------------
master

(1 rows affected)
Version Info:

select @@version

/*
Microsoft SQL Server 2017 (CTP2.1) - 14.0.600.250 (X64) 
May 10 2017 12:21:23 
Copyright (C) 2017 Microsoft Corporation. All rights reserved.
Developer Edition (64-bit) on Linux (Ubuntu 16.04.2 LTS)
*/
Backup Database on Docker Container and Copy to Host:

Connect to SQL Server Management Studio or SQLCMD and issue the backup command

Backup Database on Docker Container

BACKUP DATABASE H1BData_V2
TO DISK ='/var/opt/mssql/data/SalaryDatabase_V2_06132017.bak'
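Before copying the file off the container, it is worth verifying that the backup is readable; a hedged sketch run through sqlcmd inside the container (the file path comes from the backup command above, <containerId> is a placeholder as elsewhere in this post):

# RESTORE VERIFYONLY checks the backup file without restoring it
docker exec -it <containerId> /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -Q "RESTORE VERIFYONLY FROM DISK = '/var/opt/mssql/data/SalaryDatabase_V2_06132017.bak'"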
Copy the file from Container to Host and Sync with S3 Bucket:

$ docker cp <containerId>:/file/path/within/container /host/path/target

$ docker cp aabb19ca439f:/var/opt/mssql/data/SalaryDatabase_06132017.bak /Docker/

$ aws s3 sync ./ s3://docker-backups

upload: ./SalaryDatabase_06132017.bak  to s3://docker-backups/SalaryDatabase_06132017.bak

Completed 18.4 GiB/25.8 GiB (46.4 MiB/s) with 1 file(s) remaining
Troubleshooting:

root@e83b4048db28:/var/opt/mssql/log# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA
Password:
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login failed for user ‘SA’..
root@e83b4048db28:/var/opt/mssql/log# exit
exit
[root@ip-10-0-0-120 ec2-user]# docker rm $(docker ps -a -q)
Error response from daemon: You cannot remove a running container e83b4048db28505951f20fff4aff9f5132695fd1e1c7251c8daeb79d15ac403d. Stop the container before attempting removal or use -f
[root@ip-10-0-0-120 ec2-user]# docker rm -f $(docker ps -a -q)
e83b4048db28
Unable to telnet without allowing access to port 1433

MacBook:.ssh Raju$ telnet 54.44.40.26 1433
Trying 54.44.40.26...
telnet: connect to address 54.44.40.26: Operation timed out
telnet: Unable to connect to remote host

Once port 1433 is opened, the same telnet connects:

MacBook:.ssh Raju$ telnet 54.44.40.26 1433
Trying 54.44.40.26...
Connected to ec2-54-44-40-26.us-west-2.compute.amazonaws.com.
Escape character is '^]'.
^C

Error while connecting through SQL Server Management Studio without allowing access to port 1433
TITLE: Connect to Server
——————————
Cannot connect to 54.44.40.26.
——————————
ADDITIONAL INFORMATION:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=53&LinkId=20476

———-

The network path was not found

———

BUTTONS:

OK

———

Error while connecting through SQLCMD without allowing access to port 1433

C:\Users\>sqlcmd -S 54.44.40.26 -U SA
Password: HResult 0x35, Level 16, State 1
Named Pipes Provider: Could not open a connection to SQL Server [53].
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online..
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Login timeout expired.

Open port 1433 in Security Groups

Allow inbound traffic on port 1433 from the IPs or Security Groups that need SQL Server access.
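If you prefer the CLI over the console, here is a hedged sketch of opening the port (the security group ID and CIDR range below are placeholders for your own values; an already-configured AWS CLI is assumed):

# Allow inbound TCP 1433 from a specific CIDR range (placeholders: sg-xxxxxxxx, 203.0.113.0/24)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 1433 --cidr 203.0.113.0/24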



Restart Docker Container and See Docker Logs

docker ps -a
docker restart bb1b1
docker logs bb
SQL Server Docker Container Errors:

2017-06-09 00:08:39.35 spid9s Starting up database ‘tempdb’.
2017-06-09 00:08:39.45 spid26s Recovery of database ‘UserDBName’ (7) is 0% complete (approximately 1717 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:08:40.15 spid9s The tempdb database has 1 data file(s).
2017-06-09 00:08:40.16 spid36s The Service Broker endpoint is in disabled or stopped state.
2017-06-09 00:08:40.17 spid36s The Database Mirroring endpoint is in disabled or stopped state.
2017-06-09 00:08:40.35 spid36s Service Broker manager has started.
2017-06-09 00:08:41.05 spid33s [INFO] HkRecoverFromLogOpenRange(): Database ID: [5]. Log recovery scan from 00000495:00005D20:006B to 000004C7:00010F48:0002.
2017-06-09 00:08:59.47 spid26s Recovery of database ‘UserDBName’ (7) is 16% complete (approximately 110 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:09:11.16 Logon Error: 18456, Severity: 14, State: 38.
2017-06-09 00:09:11.16 Logon Login failed for user ‘UserName’. Reason: Failed to open the explicitly specified database ‘UserDBName’. [CLIENT: 00.000.00.00]
2017-06-09 00:09:19.51 spid26s Recovery of database ‘UserDBName’ (7) is 30% complete (approximately 94 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:09:39.59 spid26s Recovery of database ‘UserDBName’ (7) is 44% complete (approximately 76 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:09:51.38 spid26s Recovery of database ‘UserDBName’ (7) is 54% complete (approximately 62 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2017-06-09 00:09:51.40 spid26s Recovery of database ‘UserDBName’ (7) is 54% complete (approximately 62 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
DBSTARTUP (UserDBName, 7): FCBOpenTime took 164 ms
DBSTARTUP (UserDBName, 7): FCBHeaderReadTime took 135 ms
DBSTARTUP (UserDBName, 7): FileMgrPreRecoveryTime took 277 ms
DBSTARTUP (UserDBName, 7): MasterFilesScanTime took 144 ms
DBSTARTUP (UserDBName, 7): AnalysisRecTime took 1470 ms
DBSTARTUP (UserDBName, 7): RedoRecTime took 71938 ms
DBSTARTUP (UserDBName, 7): UndoRecTime took 4903 ms
DBSTARTUP (UserDBName, 7): PhysicalRecoveryTime took 73408 ms
DBSTARTUP (UserDBName, 7): PhysicalCompletionTime took 4913 ms
DBSTARTUP (UserDBName, 7): RecoveryCompletionTime took 102 ms
DBSTARTUP (UserDBName, 7): StartupInDatabaseTime took 136 ms
DBSTARTUP (UserDBName, 7): RemapSysfiles1Time took 125 ms
2017-06-09 00:10:11.44 spid6s Recovery of database ‘UserDBName’ (7) is 63% complete (approximately 55 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
2017-06-09 00:10:31.48 spid6s Recovery of database ‘UserDBName’ (7) is 63% complete (approximately 65 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
2017-06-09 00:10:34.61 spid33s [INFO] HkRedoCloseLastOpenRangeSegment(): Database ID: [5]. Log recovery open segment scan from 00000495:00005D20:006B to 000004C7:00010E78:002F.
2017-06-09 00:10:34.67 spid25s [INFO] redoOpenRangeSegment(): Database ID: [5]. Log recovery open segment scan completed at 000004C7:00010E78:002F.
2017-06-09 00:10:34.67 spid25s [INFO] HkPrintUndoRowStats(): Database ID: [5]. Undo Rows Stats. [UndoRowsSeen] = 0, [UndoRowsMatched] = 0, [InsertRowsMatched] = 0, [InsertRowsSeen] = 0, [UndoRowsAborted] = 0
DBSTARTUP (UserDBName, 5): FCBOpenTime took 202 ms
DBSTARTUP (UserDBName, 5): FCBHeaderReadTime took 133 ms
DBSTARTUP (UserDBName, 5): FileMgrPreRecoveryTime took 308 ms
DBSTARTUP (UserDBName, 5): MasterFilesScanTime took 158 ms
DBSTARTUP (UserDBName, 5): StreamFileMgrPreRecoveryTime took 141 ms
DBSTARTUP (UserDBName, 5): LogMgrPreRecoveryTime took 478 ms
DBSTARTUP (UserDBName, 5): PhysicalCompletionTime took 116181 ms
DBSTARTUP (UserDBName, 5): HekatonRecoveryTime took 116167 ms
2017-06-09 00:10:34.84 spid24s Recovery completed for database UserDBName (database ID 5) in 117 second(s) (analysis 12 ms, redo 0 ms, undo 50 ms.) This is an informational message only. No user action is required.
2017-06-09 00:10:51.48 spid6s Recovery of database ‘UserDBName’ (7) is 71% complete (approximately 53 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
2017-06-09 00:11:11.54 spid6s Recovery of database ‘UserDBName’ (7) is 99% complete (approximately 1 seconds remain). Phase 3 of 3. This is an informational message only. No user action is required.
2017-06-09 00:11:11.54 spid6s 23 transactions rolled back in database ‘UserDBName’ (7:0). This is an informational message only. No user action is required.
2017-06-09 00:11:11.55 spid6s Recovery is writing a checkpoint in database ‘UserDBName’ (7). This is an informational message only. No user action is required.
2017-06-09 00:11:11.55 spid6s Recovery completed for database UserDBName (database ID 7) in 154 second(s) (analysis 1405 ms, redo 71933 ms, undo 80147 ms.) This is an informational message only. No user action is required.
2017-06-09 00:11:11.56 spid6s Parallel redo is shutdown for database ‘UserDBName’ with worker pool size [2].
2017-06-09 00:11:11.57 spid6s Recovery is complete. This is an informational message only. No user action is required.

Errors while reading log file
EXEC sp_readerrorlog

Msg 22004, Level 16, State 1, Line 0
The log file is not using Unicode format.
===================================
The log file is not using Unicode format. (.Net SqlClient Data Provider)
——————————
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=14.00.0600&EvtSrc=MSSQLServer&EvtID=22004&LinkId=20476
——————————
Server Name: 52.42.36.22
Error Number: 22004
Severity: 16
State: 1
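A hedged workaround is to read the errorlog file directly from inside the container instead of using sp_readerrorlog (the /var/opt/mssql/log path appears in the troubleshooting session above; <containerId> is a placeholder):

# Print the last lines of the SQL Server errorlog from inside the container
docker exec -it <containerId> tail -50 /var/opt/mssql/log/errorlog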

Running out of Space

Msg 3202, Level 16, State 1, Line 5
Write on "/var/opt/mssql/data/HWageInfo_06132017.bak" failed: Insufficient bytes transferred. Common causes are backup configuration, insufficient disk space, or other problems with the storage subsystem such as corruption or hardware failure. Check errorlogs/application-logs for detailed messages and correct error conditions.
Msg 3013, Level 16, State 1, Line 5
BACKUP DATABASE is terminating abnormally.

Ensure you have enough disk space.
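A quick check before retrying the backup is to look at the free space on the data volume inside the container (a minimal sketch; <containerId> is a placeholder as above):

# Show free space where the backup file is being written
docker exec -it <containerId> df -h /var/opt/mssql/data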

References:

https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-docker
https://github.com/Microsoft/mssql-docker/issues/55
https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-troubleshooting-guide

Business Needs:

Our goal is to identify whether the Amazon SQL Server RDS service provides an elastic, highly available, scalable, and operationally efficient solution for our use case. We are evaluating options to migrate our read/write-heavy production SQL Server database to Amazon SQL Server RDS. We have pretty high throughput needs for a few hours a day, for a few months of the year, which is mission-critical for our business success. Any downtime during peak usage would be catastrophic for our business. We are evaluating the pros and cons of moving to Amazon RDS with Provisioned IOPS.

Caveats of AWS RDS SQL Server:

This is information we gathered while working with SQL Server RDS.

  • SQL Server 2016 Support – No. Microsoft says SQL Server 2016 comes with a very rich feature set and a ton of OLTP enhancements.
  • Native Backup/Restore – Yes. AWS RDS released this feature a week ago, which makes moving databases across environments a lot easier.
  • Elastic IOPS – No. Storage and IOPS need to be increased linearly for higher performance; you can't get higher IOPS without increasing storage.
  • Elastic Storage – No. Scaling storage is not an option after launching an instance.
  • RAID Support – No. We usually have RAID 10 for production workloads, and RDS doesn't offer options to configure RAID.
  • Point-in-Time Restore on the Same Instance – No. You can't do a point-in-time restore onto the existing database; you have to spin up a new instance.
  • AlwaysOn Availability Groups – No. This provides the ability to fail over a group of databases to your secondary instance.
  • Mirroring – Yes. Mirroring is a deprecated feature and is replaced by AlwaysOn Availability Groups.
  • Linked Servers from RDS – No. But Linked Servers to RDS are allowed.
  • Service Broker – No. Comes in handy for services.

  • No admin privileges – you can't execute normal SQL Server system stored procedures, and you need to work with option groups and parameter groups to modify configuration. For example, you can't execute sp_configure to change configurations.
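As a hedged sketch of what replaces sp_configure on RDS (the parameter group name and parameter below are illustrative, and a configured AWS CLI is assumed), instance-level settings are changed through a DB parameter group:

# Change an instance-level setting via the parameter group instead of sp_configure
aws rds modify-db-parameter-group --db-parameter-group-name my-sqlserver-params --parameters '[{"ParameterName":"max degree of parallelism","ParameterValue":"4","ApplyMethod":"immediate"}]'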

Remote Host Identification SSH Error

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:oFgx2U4Q0KUxtFnouxHYJdLyH6qDbX/rKtsg
Please contact your system administrator.
Add correct host key in /Users/username/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/username/.ssh/known_hosts:1
RSA host key for 54.214.1.26 has changed and you have requested strict checking.
Host key verification failed.
Edit known_hosts, remove the offending host entry, and SSH into the host again

username-MBP:.ssh username$ vi known_hosts
username-MBP:.ssh username$ ssh -i Key.pem ec2-user@54.214.1.26
The authenticity of host '54.214.1.26 (54.214.1.26)' can't be established.
ECDSA key fingerprint is SHA256:2VcR+DiKNRwyQwZ2LqtZ6EHxQYv5MWMAsrrI.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '54.214.1.26' (ECDSA) to the list of known hosts.
Last login: Fri Jul 29 23:08:35 2016 from

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2016.03-release-notes/
4 package(s) needed for security, out of 4 available
Run "sudo yum update" to apply all updates.
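As an alternative to editing known_hosts by hand, the stale entry can be removed by host; a minimal sketch using the IP from the session above:

# Remove the old key for this host from ~/.ssh/known_hosts, then reconnect
ssh-keygen -R 54.214.1.26
ssh -i Key.pem ec2-user@54.214.1.26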
