Principles of Cloud-native Architecture

Cloud-native architecture is the design or plan for applications and services built specifically to run in the cloud. Most resources emphasize the role of microservices in cloud-native architecture. The major advantage of cloud-native architecture over legacy systems is its flexibility.

Cloud-native architectures aren’t built on on-premise physical servers but are instead deployed on a cloud platform and leverage the cloud philosophy of distributed systems. This enables cloud-native architectures to take full advantage of the latest and best technologies around distributed systems. They are specifically designed to utilize the versatility and scalability benefits of the cloud.

The best way to understand cloud-native architecture is to take a closer look at cloud-native applications. Cloud-native apps are built on a fundamentally different approach than monolithic applications. Rather than developing and deploying the application as a whole, cloud-native apps are based on microservices that are self-contained and independently deployable.

Microservices are the core of cloud-native application architecture. They are essentially small, self-sufficient mini-programs, each with its own data store and application logic, built to execute a single business function. A cloud-native architecture therefore consists of many small pieces that work together, and you can change, add, or replace any one of them without risking the entire system.

Typical components of a cloud-native architecture include:

  • Containers
  • Immutable infrastructure
  • Microservices
  • Service meshes

These pieces work together, but you can tinker with them independently without taking down the entire system. Your final build is scalable, resilient, and available to all consumers.

Traditional vs Cloud Computing Environments:

    In a traditional computing environment, a company needs to provision capacity based on its best guess of maximum peak traffic (for instance, Black Friday). This means that for extended periods of time, the vast majority of that capacity sits idle.

    This is more or less why cloud computing was born: you get to use a provider's shared capacity for your own purposes. Servers, databases, storage, etc. can be started and shut down within hours or even minutes based on requirements.

5 Principles of Cloud-native Architecture:

Principle 1: Design for automation
    1. Continuous Integration/Continuous Delivery

    2. Scale up and scale down automatically with demand (see the autoscaling sketch after this list)
    
    3. Monitoring and automated recovery - black-box monitoring and white-box monitoring
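As one concrete illustration of automated scaling, the sketch below is a Kubernetes HorizontalPodAutoscaler; the target Deployment name "web" and the CPU threshold are assumptions for the example, not part of the original notes.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                  # assumed Deployment name
      minReplicas: 2               # scale down to this floor when idle
      maxReplicas: 10              # scale up to this ceiling under load
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70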

Principle 2: Be smart with state
    1. Stateless components (containers) - Stateless means that any state (persistent data of any kind) is stored outside of a container

    2. Immutable components (containers) - Immutable means that a container won't be modified during its life: no updates, no patches, no configuration changes.
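A minimal sketch of what these two properties look like in a container spec fragment; the names, registry, and secret are illustrative assumptions:

    containers:
      - name: api
        # Immutable: the image is pinned by digest and never patched in place - it is replaced
        image: registry.example.com/api@sha256:<digest>
        env:
          # Stateless: persistent data lives in an external database, not in the container
          - name: DATABASE_URL
            valueFrom:
              secretKeyRef:
                name: api-db-credentials
                key: url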

Principle 3: Favor managed services - Rather than operating your own databases, message queues, or monitoring stacks, prefer the cloud provider's managed equivalents, so operational effort shifts to the provider and you can focus on your application.

Principle 4: Practice defense in depth - Adopt an approach of defense-in-depth by applying authentication between each component, and by minimizing the trust between those components (even if they are 'internal'). As a result, there is no 'inside' and 'outside'.
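One concrete way to minimize implicit trust inside a Kubernetes cluster is a default-deny NetworkPolicy, so every allowed path between components has to be declared explicitly; the namespace below is an assumption for the sketch.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: prod              # assumed namespace
    spec:
      podSelector: {}              # applies to every pod in the namespace
      policyTypes:
        - Ingress                  # all ingress is denied unless another policy allows it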

Principle 5: Always be architecting - Always seek to refine, simplify and improve the architecture of the system, as the needs of the organization change, the landscape of your IT systems change, and the capabilities of your cloud provider itself change.

ref:

Architecting for the Cloud (AWS Best Practices) -

Cloud Design Patterns

A software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. Architectural patterns address various issues in software engineering, such as computer hardware performance limitations, high availability, and minimization of business risk.

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Cloud computing architecture refers to the components and sub-components required for cloud computing. These components typically consist of a front-end platform (fat client, thin client, mobile device), back-end platforms (servers, storage), a cloud-based delivery model, and a network (Internet, intranet, intercloud). Combined, these components make up cloud computing architecture.

Cloud Development Challenges:

    1. Availability - Availability is the proportion of time that the system is running, functional and working, usually measured as a percentage of uptime. It can be affected by system errors, infrastructure problems, malicious attacks, and system load.

    2. Performance & Scalability - Performance is an indication of the responsiveness of a system to execute any action within a given time interval, while scalability is the ability of a system either to handle increases in load without impact on performance or for the available resources to be readily increased.
    Cloud applications typically encounter variable workloads and peaks in activity that are hard to predict and wasteful to provision for statically. Instead, applications should be able to scale out within limits to meet peaks in demand, and scale in when demand decreases. Scalability concerns not just compute instances, but other elements such as data storage and messaging infrastructure.

    3. Management and Monitoring - Cloud applications run in a remote data center (hybrid/public/private) where you do not have full control of the infrastructure or, in some cases, the operating system. This can make management and monitoring more difficult than an on-premises deployment. Applications must expose runtime information that administrators and operators can use to manage and monitor the system, as well as supporting changing business requirements and customization without requiring the application to be stopped or redeployed.

    4. Security - Security is the capability of a system to prevent malicious or accidental actions outside of the designed usage, and to prevent disclosure or loss of information. Cloud applications are exposed on the Internet outside trusted on-premises boundaries, are often open to the public, and may serve untrusted users. Applications must be designed and deployed in a way that protects them from malicious attacks, restricts access to only approved users, and protects sensitive data.

Popular Cloud Design Patterns:

    1. Asynchronous Request-Reply 

    2. Ambassador   

    3. Sidecar (see the pod sketch after this list)

    4. Publisher-Subscriber
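As a quick illustration of the Sidecar pattern from the list above, the sketch below runs a log-forwarding helper next to the main application container in a single Kubernetes pod; the image names and the shared volume are assumptions, and the forwarder's configuration is omitted.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar
    spec:
      volumes:
        - name: logs
          emptyDir: {}                              # shared between the two containers
      containers:
        - name: app                                 # main application container
          image: registry.example.com/my-app:1.0    # illustrative image
          volumeMounts:
            - name: logs
              mountPath: /var/log/app
        - name: log-forwarder                       # sidecar: ships logs without changing the app
          image: fluent/fluent-bit:latest
          volumeMounts:
            - name: logs
              mountPath: /var/log/app
              readOnly: true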

ref:

Wiki -

AWS Cloud Design Patterns -

Books -

    1. Cloud Design Patterns Book from Microsoft -

Youtube Videos -

    1. Distributed Architecture Patterns - https://www.youtube.com/watch?v=tpspO9K28PM

    2. Cloud Architecture - https://www.youtube.com/watch?v=TuZZIGSbFfQ

    3. Architectural patterns for the cloud (Mahesh Krishnan) - https://www.youtube.com/watch?v=TuZZIGSbFfQ

    4. Cloud Security - https://www.youtube.com/watch?v=4TxvqZFMaoA

The Lightweight Kubernetes Distribution Built for the Edge - k3s

K3s is a lightweight, easy-to-install Kubernetes distribution geared towards resource-constrained environments and low-touch operations. Some use cases in which K3s really shines are edge, ARM, IoT, and CI.

K3s from Rancher Labs is packaged as a single binary of about 40 megabytes. Bundled in that single binary is everything needed to run Kubernetes, including the container runtime and important host utilities such as iptables, socat, and du. The only OS dependencies are the Linux kernel itself and proper dev, proc, and sysfs mounts (these are set up automatically on all modern distros). The Cloud Native Computing Foundation (CNCF) accepted K3s as a Sandbox project in August 2020.

What is K3s?:
K3s is a fully compliant Kubernetes distribution with the following enhancements:

  1. Packaged as a single binary.
  2. Lightweight storage backend based on sqlite3 as the default storage mechanism; etcd3, MySQL, and Postgres are also still available.
  3. Wrapped in a simple launcher that handles a lot of the complexity of TLS and options.
  4. Secure by default with reasonable defaults for lightweight environments.
  5. Simple but powerful “batteries-included” features have been added, such as: a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller.
  6. Operation of all Kubernetes control plane components is encapsulated in a single binary and process. This allows K3s to automate and manage complex cluster operations like distributing certificates.
  7. External dependencies have been minimized (just a modern kernel and cgroup mounts needed). K3s packages the required dependencies, including:
       • Containerd
       • Flannel
       • CoreDNS
       • CNI
       • Host utilities (iptables, socat, etc.)
       • Ingress controller (Traefik)
       • Embedded service load balancer
       • Embedded network policy controller
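For reference, the documented quick-start install is a single script (see https://get.k3s.io); the sketch below assumes a single-node server and should be adapted for production.

    # Install K3s as a single-node server
    curl -sfL https://get.k3s.io | sh -

    # Verify the node with the bundled kubectl
    sudo k3s kubectl get nodes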

ref: 

K3s, Lightweight Kubernetes - https://rancher.com/docs/k3s/latest/en/, https://www.infoworld.com/article/3342125/rancher-k3s-brings-kubernetes-to-iot-devices.html

K3s Architecture - https://rancher.com/docs/k3s/latest/en/architecture/

K3s github source code - https://github.com/rancher/k3s

K3s overview - https://rancher.com/blog/2019/2019-02-26-introducing-k3s-the-lightweight-kubernetes-distribution-built-for-the-edge/ 

Build a Kubernetes cluster using k3s via Ansible - https://github.com/rancher/k3s-ansible

Develop your cloud native use cases at the edge with K3s - https://www.cncf.io/webinars/develop-your-cloud-native-use-cases-at-the-edge-with-k3s/

Rancher Labs’ K3s Joins Cloud Native Computing Foundation as Sandbox Project - https://www.businesswire.com/news/home/20200826005093/en/Rancher-Labs%E2%80%99-K3s-Joins-Cloud-Native-Computing

 

Open Network Automation Platform (ONAP)

The Open Network Automation Platform (ONAP) project addresses the rising need for a common automation platform for telecommunication, cable, and cloud service providers and their solution providers, enabling the automation of different lifecycle processes to deliver differentiated network services on demand, profitably and competitively, while leveraging existing investments. It is an open source software platform that delivers robust capabilities for the design, creation, orchestration, monitoring, and life cycle management of Network Function Virtualization (NFV) environments, as well as Software-Defined Networks (SDN).

Network Functions Virtualization (NFV) allows network operators to reduce their dependence on single-purpose appliances by taking functions that were previously built into hardware and implementing them in software that runs on industry-standard servers, network, and storage platforms. Beyond reducing network operators’ dependency on dedicated hardware, leveraging NFV enables more programmability in the network and greatly reduces the complexity and time-to-market associated with introducing new services.

Network Functions Virtualization (NFV) is a way to reduce cost and accelerate service deployment for network operators by decoupling functions like a firewall or encryption from dedicated hardware and moving them to virtual servers. Instead of installing expensive proprietary hardware, service providers can purchase inexpensive switches, storage, and servers to run virtual machines that perform network functions. This collapses multiple functions into a single physical server, reducing costs and minimizing truck rolls. If a customer wants to add a new network function, the service provider can simply spin up a new virtual machine to perform that function. For example, instead of deploying a new hardware appliance across the network to enable network encryption, encryption software can be deployed on a standardized server or switch already in the network.

Software-Defined Networking (SDN) technology is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring, making it more like cloud computing than traditional network management.

SDN vs NFV:

Network Functions Virtualization is highly complementary to Software-Defined Networking (SDN) but not dependent on it (or vice-versa). Network Functions Virtualization can be implemented without an SDN being required, although the two concepts and solutions can be combined and potentially greater value accrued.

Network Functions Virtualization goals can be achieved using non-SDN mechanisms, relying on the techniques currently in use in many data centers. But approaches relying on the separation of the control and data forwarding planes as proposed by SDN can enhance performance, simplify compatibility with existing deployments, and facilitate operation and maintenance procedures. NFV is able to support SDN by providing the infrastructure upon which the SDN software can be run. Furthermore, Network Functions Virtualization aligns closely with the SDN objectives to use commodity servers and switches.



 

ref:

ONAP Glossary (NFV, SDN resources) - https://wiki.onap.org/display/DW/Glossary

The Edge Multi Cloud Orchestrator (EMCO) Architecture & Design - https://wiki.onap.org/pages/viewpage.action?pageId=84668166

ONAP github source code - https://github.com/onap

ONAP multicloud-k8s github source code - https://github.com/onap/multicloud-k8s

ETSI NFV -
 
Misc =>

Open source data collector for Unified Logging - Fluentd

Fluentd is an open source data collector for unified logging layer. It allows you to unify data collection and consumption for a better use and understanding of data.

Fluentd decouples data sources from backend systems by providing a unified logging layer in between. It is Apache 2.0 licensed, fully open source software. Fluentd treats logs as JSON, a popular machine-readable format. It is written in a combination of C and Ruby; the Ruby layer gives users flexibility while performance-sensitive parts are implemented in C.

Fluentd is an open source log management tool supported by the CNCF that unifies your data collection in a language- and platform-agnostic manner. It brings together data from your databases, system logs, and application events, filters out the noise, and then structures that data so it can be easily fed out to multiple destinations. Through its flexible plugin architecture, Fluentd works with hundreds of different services, from commercial products like Splunk to open source tools like ElasticSearch or MongoDB. Prized for microservices architecture, Fluentd is also an excellent choice for legacy and monolithic applications. Its reduced footprint sibling Fluent Bit is even applicable for the Internet of Things.

 

ref:

Fluentd - https://www.fluentd.org/

Fluentd overview - https://docs.fluentd.org/quickstart 

Fluentd github - https://github.com/fluent/fluentd

Fluentd community - https://www.fluentd.org/community

Fluentd wiki - https://en.wikipedia.org/wiki/Fluentd

Fluentd as part of CNCF - 

    https://landscape.cncf.io/selected=fluentd

    https://epsagon.com/tools/cncf-tools-overview-fluentd-unified-logging-layer/

Aggregating Application Logs from Kubernetes Clusters using Fluentd to Log Intelligence - 

    https://medium.com/@bahubalishetti/aggregating-application-logs-from-kubernetes-clusters-using-fluentd-to-log-intelligence-91da5f536692

    https://medium.com/kubernetes-tutorials/cluster-level-logging-in-kubernetes-with-fluentd-e59aa2b6093a

Analyzing logs in real time using Fluentd and BigQuery - https://cloud.google.com/solutions/real-time/fluentd-bigquery

Open source Identity and Access Management(IAM) - Keycloak

Single sign-on (SSO) is a property of Identity and Access Management (IAM) that enables users to securely authenticate with multiple applications and websites by logging in only once with just one set of credentials (username and password). With SSO, the application or website that the user is trying to access relies on a trusted third party to verify that users are who they say they are. It is often accomplished by using the Lightweight Directory Access Protocol (LDAP) and stored LDAP databases on (directory) servers.

Keycloak is an open source software product that provides single sign-on (SSO) with Identity and Access Management (IAM), aimed at modern applications and services. Keycloak supports both the SAML and OAuth 2.0 protocols and is released under the Apache License 2.0.

Keycloak supports the OpenID Connect and SAML (Security Assertion Markup Language) protocols. OpenID Connect extends the OAuth 2.0 protocol, which is itself a framework for building authorization protocols.
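As a sketch of the OAuth 2.0 flow Keycloak exposes, the request below obtains an access token with the client-credentials grant; the realm name, client ID, and secret are assumptions, and on older Keycloak versions the path is prefixed with /auth.

    curl -X POST "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token" \
      -H "Content-Type: application/x-www-form-urlencoded" \
      -d "grant_type=client_credentials" \
      -d "client_id=my-client" \
      -d "client_secret=<secret>"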

====

Authentication => The process of verifying who a user is

Authorization => The process of verifying what they have access to

SAML (Security Assertion Mark-up Language) => An umbrella standard that covers federation, identity management and single sign-on (SSO)

OAuth (Open Authorization) => A standard for authorization of resources. OAuth 2.0 is a framework that controls authorization to a protected resource such as an application or a set of files

OpenID Connect => A standard for federated authentication. OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol, which allows computing clients to verify the identity of an end-user based on the authentication performed by an authorization server, as well as to obtain basic profile information about the end-user in an interoperable and REST-like manner

====

OpenSource Single Sign-on(SSO) products: 

    1. Keycloak - https://www.keycloak.org/, https://www.keycloak.org/getting-started/getting-started-kube

    2. Shibboleth -  https://www.shibboleth.net/, https://www.internet2.edu/products-services/trust-identity/shibboleth/

    3. Univention Corporate Server - https://www.univention.com/

    4. WSO2 Identity Server - https://wso2.com/identity-and-access-management/


ref:

wiki - https://en.wikipedia.org/wiki/Keycloak

OpenSource Single Sign-On(SSO) - https://medium.com/faun/opensource-single-sign-on-sso-e52d39e1927

Difference Between OAuth, OpenID Connect, and SAML - https://www.okta.com/identity-101/whats-the-difference-between-oauth-openid-connect-and-saml/

Choosing an SSO Strategy: SAML vs OAuth2 - https://www.mutuallyhuman.com/blog/choosing-an-sso-strategy-saml-vs-oauth2/

Adding authentication to your Kubernetes Web applications with Keycloak =>

    1. https://www.openshift.com/blog/adding-authentication-to-your-kubernetes-web-applications-with-keycloak   

    2. https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d

    3. https://thenewstack.io/kubernetes-single-sign-one-less-identity/

    4. https://www.keycloak.org/getting-started/getting-started-kube
 
    5. https://blog.codecentric.de/en/2019/05/configuring-kubernetes-login-keycloak/

Kubernetes package manager "helm" commands

Helm is a package manager for Kubernetes. Helm 2 described a workflow for creating, installing, and managing charts. Helm 3 builds upon that workflow, changing the underlying infrastructure to meet the needs of the evolving ecosystem.

Overview of Helm 3 Changes:
    1. Removal of Tiller
    2. Replaces client/server with client/library architecture (helm binary only)
    3. Security is now on per user basis (delegated to Kubernetes user cluster security)
    4. Releases are now stored as in-cluster secrets and the release object metadata has changed
    5. Releases are persisted on a release namespace basis and not in the Tiller namespace anymore


helm v3 Commands:

Display the Helm version => helm version

Display help for Helm commands => helm help

 

Search for charts in the Helm Hub or an instance of Monocular => helm search hub

Search repositories for a keyword in charts => helm search repo <keyword>
    Search for stable release versions matching the keyword "nginx"
    $ helm search repo nginx
 

Download all information for a named release => helm get all
    This command prints a human readable collection of information about the notes, hooks, supplied values, and generated manifest file of the given release.
    $ helm get all RELEASE_NAME [flags]

Download the values file for a named release => helm get values
    This command downloads a values file for a given release.
    $ helm get values RELEASE_NAME [flags]

 

Install a chart => helm install
    This command installs a chart archive. The install argument must be a chart reference, a path to a packaged chart, a path to an unpacked chart directory or a URL.

    To override values in a chart, use either the '--values' flag and pass in a file, or use the '--set' flag and pass configuration from the command line; to force a string value use '--set-string'. If a value is large and you therefore don't want to use '--values' or '--set', use '--set-file' to read a single large value from a file.

    $ helm install -f myvalues.yaml myredis ./redis

    $ helm install mynginx ./nginx-1.2.3.tgz

 

List releases => helm list
    This command lists all of the releases for a specified namespace (uses current namespace context if namespace not specified). By default, it lists only releases that are deployed or failed. Flags like '--uninstalled' and '--all' will alter this behavior. Such flags can be combined: '--uninstalled --failed'.

    By default, items are sorted alphabetically. Use the '-d' flag to sort by release date.

    If the --filter flag is provided, it will be treated as a filter. Filters are regular expressions (Perl compatible) that are applied to the list of releases. Only items that match the filter will be returned.

    $ helm list --filter 'ara[a-z]+'

 

Uninstall a release => helm uninstall
    This command takes a release name and uninstalls the release. It removes all of the resources associated with the last release of the chart as well as the release history, freeing it up for future use.

    Use the '--dry-run' flag to see which releases will be uninstalled without actually uninstalling them.

    $ helm uninstall RELEASE_NAME [...] [flags]

 

Package a chart directory into a chart archive => helm package
    This command packages a chart into a versioned chart archive file. If a path is given, this will look at that path for a chart (which must contain a Chart.yaml file) and then package that directory.

    Versioned chart archives are used by Helm package repositories.

    To sign a chart, use the '--sign' flag. In most cases, you should also provide '--keyring path/to/secret/keys' and '--key keyname'.

    $ helm package --sign ./mychart --key mykey --keyring ~/.gnupg/secring.gpg


ref:

helm - https://helm.sh/

helm releases - https://github.com/helm/helm/releases

helm github source code - https://github.com/helm/helm

Helm Best Practices - https://helm.sh/docs/chart_best_practices/

Helm command cheat sheet - 

    https://helm.sh/docs/helm/

    https://v3.helm.sh/docs/helm/

    https://linuxroutes.com/helm-commands-cheat-sheet/

    https://devopsqa.wordpress.com/2020/01/29/helm-cli-cheatsheet/

    https://gist.github.com/tuannvm/4e1bcc993f683ee275ed36e67c30ac49

    https://github.com/RehanSaeed/Helm-Cheat-Sheet

Kubernetes command-line tool "kubectl" commands

kubectl is Kubernetes command-line tool that allows you to run commands against Kubernetes clusters. 

The kubectl command line tool lets you control Kubernetes clusters. For configuration, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag.

example:

    Pass a kubeconfig file as a command-line argument and fetch the client and server versions => kubectl --kubeconfig=<kubeconfig_file_path> version


Cluster Management:
     

    Display the Kubernetes version running on both the client and server => kubectl version

    Display endpoint information about the master and services in the cluster => kubectl cluster-info 

    Get the configuration of the cluster => kubectl config view 

    List the API resources that are available => kubectl api-resources 

    List everything (all running resources in all namespaces) => kubectl get all -A 


Nodes(no):
     

    Update the taints on one or more nodes => kubectl taint node <node_name> <key>=<value>:<effect> 

    List one or more nodes => kubectl get node 

    Describe one or more nodes => kubectl describe node <node_name>

    Show node labels =>  kubectl get nodes --show-labels

    Add or update the labels of one or more nodes => kubectl label node <node-name> <key>=<value>

    Display Resource usage (CPU/Memory/Storage) for nodes => kubectl top node

    Delete a node or multiple nodes => kubectl delete node <node_name>

    Resource allocation per node => kubectl describe nodes | grep Allocated -A 5 

    GPU Resource available/used per node =>
kubectl describe nodes  |  tr -d '\000' | sed -n -e '/^Name/,/Roles/p' -e '/^Capacity/,/Allocatable/p' -e '/^Allocated resources/,/Events/p'  | grep -e Name  -e  nvidia.com  | perl -pe 's/\n//'  |  perl -pe 's/Name:/\n/g' | sed 's/nvidia.com\/gpu:\?//g'  | sed '1s/^/Node Available(GPUs)  Used(GPUs)/' | sed 's/$/ 0 0 0/'  | awk '{print $1, $2, $3}'  | column -t
 

    Pods running on a node => kubectl get pods -o wide | grep <node_name> 

    Annotate a node => kubectl annotate node <node_name> <key>=<value> 

    Mark a node as unschedulable => kubectl cordon <node_name> 

    Mark a node as schedulable again => kubectl uncordon <node_name>


Pods(po):


    List one or more pods => kubectl get pod

    List one or more pods in all namespaces => kubectl get pod -A

    List one or more pods in wide format => kubectl get pod -o wide

    List one or more pods yaml spec => kubectl get pod -o yaml

    List one or more pods of a specific namespace => kubectl get pod -n <namespace_name> 

    Delete a pod => kubectl delete pod <pod_name> 

    Display the detailed state of a pods => kubectl describe pod <pod_name> 

    Create a pod and run an image in it => kubectl run <pod_name> --image=<image> 

    Execute a command against a container in a pod => kubectl exec <pod_name> -c <container_name> -- <command> 

    Get an interactive shell on a single-container pod => kubectl exec -it <pod_name> -- /bin/sh 

    Display Resource usage (CPU/Memory/Storage) for pods => kubectl top pod 

    Add or update the annotations of a pod => kubectl annotate pod <pod_name> <annotation> 

    Add or update the label of a pod => kubectl label pod <pod_name> <key>=<value>


Services(svc): 

    List one or more services => kubectl get services

    List one or more services in all namespaces => kubectl get svc -A

    Display the detailed state of a service => kubectl describe services 

    Expose a replication controller, service, deployment or pod as a new Kubernetes service => kubectl expose deployment <deployment_name> 

    Edit and update the definition of one or more services => kubectl edit services
 
 

Watch:

    To monitor progress, use the kubectl get service command with the --watch argument.
    example:
        kubectl get service azure-vote-front --watch

 

Secrets: 

    Create a secret => kubectl create secret generic <secret_name> --from-literal=<key>=<value>

    List secrets => kubectl get secrets 

    List details about secrets => kubectl describe secrets 

    Delete a secret => kubectl delete secret <secret_name>
 

Deployments(deploy):
 

    List one or more deployments in default namespace => kubectl get deployment

    List one or more deployments in all namespaces => kubectl get deployment -A 

    Display the detailed state of one or more deployments => kubectl describe deployment <deployment_name> 

    Edit and update the definition of one or more deployment on the server => kubectl edit deployment <deployment_name> 

    Create a new deployment => kubectl create deployment <deployment_name> --image=<image> 

    Delete deployments => kubectl delete deployment <deployment_name> 


Logs:

    Print the logs for a pod => kubectl logs <pod_name>

    Print the logs for a pod and follow new logs => kubectl logs -f <pod_name>

    Print the logs for a container in a pod => kubectl logs -c <container_name> <pod_name>

    Output the logs for a pod into a file named 'pod.log' => kubectl logs <pod_name> > pod.log 

    View the logs for a previously failed pod => kubectl logs --previous <pod_name>

    Print the logs for the last hour for a pod => kubectl logs --since=1h <pod_name> 

    Get the most recent 20 lines of logs => kubectl logs --tail=20 <pod_name> 

    Get logs from a service and optionally select which container => kubectl logs -f service/<service_name> [-c <container_name>]

 

Events(ev):

    List recent events for all resources in the system => kubectl get events 

    List Warnings only => kubectl get events --field-selector type=Warning 

    List events but exclude Pod events => kubectl get events --field-selector involvedObject.kind!=Pod
 

Manifest Files:
 

    Apply a configuration to an object by filename or stdin. Overrides the existing configuration => kubectl apply -f manifest_file.yaml 

    Create objects => kubectl create -f manifest_file.yaml 

    Create objects in all manifest files in a directory => kubectl create -f ./dir 

    Create objects from a URL => kubectl create -f <url>

    Delete an object => kubectl delete -f manifest_file.yaml
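For reference, a minimal manifest file that the commands above could apply might look like the following; the name and image are illustrative assumptions.

    # manifest_file.yaml - a minimal Deployment sketch
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
            - name: web
              image: nginx:1.25            # illustrative image
              ports:
                - containerPort: 80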

 


ref:

Kubectl overview - https://kubernetes.io/docs/reference/kubectl/overview/

Install kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl/

kubectl cheat sheet - 

    https://kubernetes.io/docs/reference/kubectl/cheatsheet/

    https://www.bluematador.com/learn/kubectl-cheatsheet

    https://unofficial-kubernetes.readthedocs.io/en/latest/user-guide/kubectl-cheatsheet/

    https://opensource.com/article/20/5/kubectl-cheat-sheet

Ansible Automation for Kubernetes Cluster Deployment

Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows. It includes its own declarative language to describe system configuration. Ansible was written by Michael DeHaan and acquired by Red Hat in 2015. Ansible is agentless, temporarily connecting remotely via SSH or Windows Remote Management (allowing remote PowerShell execution) to do its tasks.

In contrast with other popular configuration-management software such as Chef, Puppet, and CFEngine, Ansible uses an agentless architecture, with Ansible software not normally running or even installed on the controlled node. Instead, Ansible orchestrates a node by installing and running modules on the node temporarily via SSH. For the duration of an orchestration task, a process running the module communicates with the controlling machine using a JSON-based protocol over its standard input and output.

Ansible Playbooks are YAML files that express configurations, deployment, and orchestration in Ansible and allow Ansible to perform operations on managed nodes. Each playbook maps a group of hosts to a set of roles, and each role is represented by calls to Ansible tasks. Playbooks are like a to-do list for Ansible: they contain the list of tasks the user wants to execute on a particular machine.
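A minimal playbook sketch, assuming an inventory group named "web" and Debian/Ubuntu managed nodes (the group, package, and file names are illustrative):

    # site.yml - install and start nginx on all hosts in the "web" group
    - hosts: web
      become: true
      tasks:
        - name: Install nginx
          ansible.builtin.apt:
            name: nginx
            state: present
            update_cache: true

        - name: Ensure nginx is running and enabled at boot
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true

It could be run with "ansible-playbook -i inventory site.yml", assuming an inventory file that lists the web hosts.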

Ansible Kubernetes Cluster Management:
    Kubernetes Clusters don't appear out of thin air. Depending on the type of clusters you're using, they require management for upgrades and integrations. Cluster management can become crippling, especially if, like most organizations, you’re managing multiple clusters (multiple production clusters, staging and QA clusters, etc.).

    If you're running inside a private cloud, or on bare metal servers, you will need a way to install Kubernetes and manage individual servers in the cluster. Ansible has a proven track record of being able to orchestrate multi-server applications, and Kubernetes itself is a multi-server application which happens to manage one or thousands of other multi-server applications through containerization.

    Projects like Kubespray have used Ansible for custom Kubernetes cluster builds and are compatible with dozens of different infrastructure arrangements.

    Even if you use a managed Kubernetes offering, like AKS, EKS, or GKE, Ansible has modules like azure_rm_aks, aws_eks_cluster, and gcp_container_cluster, which manage clusters, along with thousands of other modules which simplify and somewhat standardize cluster management among different cloud providers.
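    As a sketch of the Kubespray approach mentioned above (paths and file names follow the Kubespray README and may differ between versions):

        # Clone Kubespray and copy the sample inventory
        git clone https://github.com/kubernetes-sigs/kubespray.git
        cd kubespray
        cp -rfp inventory/sample inventory/mycluster

        # Edit inventory/mycluster/hosts.yaml to list your nodes, then run the cluster playbook
        ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml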

ref:

wiki - https://en.wikipedia.org/wiki/Ansible_(software)

ansible website - https://www.ansible.com/

ansible github - https://github.com/ansible

ansible overview - https://www.ansible.com/overview/how-ansible-works

ansible playbook - 

    1. https://docs.ansible.com/ansible/latest/user_guide/playbooks.html

    2. https://docs.ansible.com/ansible/latest/network/getting_started/first_playbook.html

    3. https://github.com/ansible/ansible-examples

ansible and kubernetes - 

    1. https://www.ansible.com/blog/how-useful-is-ansible-in-a-cloud-native-kubernetes-environment

    2. https://github.com/geerlingguy/ansible-for-kubernetes

    3. https://victorops.com/blog/ansibles-role-in-a-docker-and-kubernetes-world

How to create your own Kubernetes Cluster using Ansible - https://medium.com/faun/how-to-create-your-own-kubernetes-cluster-using-ansible-7c6b5c031a5d

How useful is Ansible in a Cloud-Native Kubernetes Environment? - https://www.ansible.com/blog/how-useful-is-ansible-in-a-cloud-native-kubernetes-environment

Kubernetes Setup Using Ansible and Vagrant - https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/

Deploy a Production Ready Kubernetes Cluster using Kubespray with Ansible Playbook - 

    1. https://github.com/kubernetes-sigs/kubespray

    2. https://computingforgeeks.com/deploy-production-kubernetes-cluster-with-ansible/

Kubeadm Ansible Playbook - https://github.com/kairen/kubeadm-ansible