Larvata / Group items tagged: automator

張 旭

Using Infrastructure as Code to Automate VMware Deployments - 1 views

  • Infrastructure as code is at the heart of provisioning for cloud infrastructure marking a significant shift away from monolithic point-and-click management tools.
  • infrastructure as code enables operators to take a programmatic approach to provisioning.
  • provides a single workflow to provision and maintain infrastructure and services from all of your vendors, making it not only easier to switch providers
  • A Terraform Provider is responsible for understanding API interactions with, and exposing the resources of, a given Infrastructure, Platform, or SaaS offering to Terraform.
  • write a Terraform file that describes the Virtual Machine that you want, apply that file with Terraform and create that VM as you described without ever needing to log into the vSphere dashboard.
  • HashiCorp Configuration Language (HCL)
  • the provider credentials are passed in at the top of the script to connect to the vSphere account.
  • modules— a way to encapsulate infrastructure resources into a reusable format.
  • "revolutionizing"
crazylion lee

fastlane - iOS and Android Automation for Continuous Delivery - 1 views

  • "fastlane is the tool to release your iOS and Android app"
張 旭

Ansible Tower vs Ansible AWX for Automation - 4sysops - 0 views

  • you can run Ansible freely by downloading the module and running configurations and playbooks from the command line.
  • AWX Project from Red Hat. It provides an open-source version of Ansible Tower that may satisfy the need for Tower functionality in many environments.
  • Ansible Tower may be the more familiar option for Ansible users as it is the commercial GUI Ansible tool that provides the officially supported GUI interface, API access, role-based access, scheduling, notifications, and other nice features that allow businesses to manage environments easily with Ansible.
  • Ansible AWX is the open-sourced project that was the foundation on which Ansible Tower was created. That said, Ansible AWX is a development branch of code that undergoes only minimal quality-engineering testing.
  • Ansible AWX is a powerful, freely available open-source project, well suited to testing or running Ansible in a lab, development, or other POC environment.
  • to use an external PostgreSQL database, please note that the minimum version is 9.6+
  • Full enterprise features and functionality of Tower
  • Not limited to 10 nodes
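
As a sketch of the external-database note above: the docker-compose-era AWX installer read its database settings from an Ansible inventory file. The variable names below follow that older installer as best remembered and should be checked against your AWX version's inventory; all values are placeholders.

    [all:vars]
    # External PostgreSQL for AWX; the minimum supported version is 9.6+.
    pg_hostname=db.example.internal
    pg_port=5432
    pg_database=awx
    pg_username=awx
    pg_password=changeme
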
張 旭

Automated Nginx Reverse Proxy for Docker - 0 views

  • Docker containers are assigned random IPs and ports, which makes addressing them much more complicated from a client perspective
  • Binding the container to the host's port can prevent multiple containers from running on the same host. For example, only one container can bind to port 80 at a time.
  • Docker provides a remote API to inspect containers and access their IP, Ports and other configuration meta-data.
  • An nginx template can be used to generate a reverse proxy configuration for Docker containers, using virtual hosts for routing.
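
A shell sketch of how the pieces fit together with docker-gen, the tool the post builds on: it watches the Docker API, re-renders an nginx template whenever containers start or stop, then reloads nginx. The template and destination paths are illustrative.

    # Watch Docker events, regenerate the nginx vhost config from a
    # template, and reload nginx on every container start/stop.
    docker-gen -watch -notify "nginx -s reload" \
      nginx.tmpl /etc/nginx/conf.d/default.conf
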
張 旭

Introducing Infrastructure as Code | Linode - 0 views

  • Infrastructure as Code (IaC) is a technique for deploying and managing infrastructure using software, configuration files, and automated tools.
  • With the older methods, technicians must configure a device manually, perhaps with the aid of an interactive tool. Information is added to configuration files by hand or through the use of ad-hoc scripts. Configuration wizards and similar utilities are helpful, but they still require hands-on management. A small group of experts owns the expertise, the process is typically poorly defined, and errors are common.
  • The development of the continuous integration and continuous delivery (CI/CD) pipeline made the idea of treating infrastructure as software much more attractive.
  • Infrastructure as Code takes advantage of the software development process, making use of quality assurance and test automation techniques.
  • Consistency/Standardization
  • Each node in the network becomes what is known as a snowflake, with its own unique settings. This leads to a system state that cannot easily be reproduced and is difficult to debug.
  • With standard configuration files and software-based configuration, there is greater consistency between all equipment of the same type. A key IaC concept is idempotence.
  • Idempotence makes it easy to troubleshoot, test, stabilize, and upgrade all the equipment.
  • Infrastructure as Code is central to the culture of DevOps, which is a mix of development and operations
  • edits are always made to the source configuration files, never on the target.
  • A declarative approach describes the final state of a device, but does not mandate how it should get there. The specific IaC tool makes all the procedural decisions. The end state is typically defined through a configuration file, a JSON specification, or a similar encoding.
  • An imperative approach defines specific functions or procedures that must be used to configure the device. It focuses on what must happen, but does not necessarily describe the final state. Imperative techniques typically use scripts for the implementation.
  • With a push configuration, the central server pushes the configuration to the destination device.
  • If a device is mutable, its configuration can be changed while it is active
  • Immutable devices cannot be changed. They must be decommissioned or rebooted and then completely rebuilt.
  • an immutable approach ensures consistency and avoids drift. However, it usually takes more time to remove or rebuild a configuration than it does to change it.
  • System administrators should consider security issues as part of the development process.
  • Ansible is a very popular open source IaC application from Red Hat
  • Ansible is often used in conjunction with Kubernetes and Docker.
  • Linode offers a collection of several Ansible guides for a more comprehensive overview.
  • Pulumi permits the use of a variety of programming languages to deploy and manage infrastructure within a cloud environment.
  • Terraform allows users to provision data center infrastructure using either JSON or Terraform’s own declarative language.
  • Terraform manages resources through the use of providers, which are similar to APIs.
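
To make the idempotence and declarative-approach points above concrete, here is a minimal Ansible playbook sketch: it declares an end state ("nginx installed, enabled, and running"), so re-running it against an already-configured host changes nothing. The host group and package name are placeholders.

    # idempotent-nginx.yml: a declarative end state, safe to re-run.
    - hosts: webservers
      become: true
      tasks:
        - name: Ensure nginx is installed
          ansible.builtin.package:
            name: nginx
            state: present

        - name: Ensure nginx is enabled and running
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true
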
張 旭

Cluster Networking - Kubernetes - 0 views

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • this is the primary focus of this document
    • 張 旭
       
      What Cluster Networking focuses on is pod-to-pod connectivity.
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost
  • containers within a Pod must coordinate port usage
  • “IP-per-pod” model.
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Network Interfaces (ENIs) to each Kubernetes node and uses the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
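
A quick shell sketch of the "IP-per-pod, no NAT" model those annotations describe: on any conformant cluster, one pod can reach another directly on its pod IP, regardless of which nodes they landed on. The IP shown is a placeholder.

    # Start a server pod and find its pod IP.
    kubectl run web --image=nginx
    kubectl get pod web -o jsonpath='{.status.podIP}'   # e.g. 10.244.1.7

    # From a throwaway client pod, reach the server by pod IP directly,
    # with no NAT and no host-port mapping involved.
    kubectl run client --rm -it --image=busybox -- \
      wget -qO- http://10.244.1.7
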
crazylion lee

HUBOT - 0 views

  • "Hubot is your company's robot. Install him in your company to dramatically improve and reduce employee efficiency."
crazylion lee

Antigate.Com: automated captcha guessing, recognition service. API. - 0 views

  • "Antigate.Com is an online service which provides real-time captcha-to-text decodings. This works easy: your software uploads a captcha to our server and receives text from it within seconds."
crazylion lee

Gobot - 0 views

shared by crazylion lee on 17 Jan 14
  • Gobot is a set of libraries in the Go programming language for robotics and physical computing. It provides a simple, yet powerful way to create solutions that incorporate multiple, different hardware devices at the same time. Want to use Ruby on robots? Check out our sister project Artoo (http://artoo.io). Want to use Node.js? Check out our sister project Cylon (http://cylonjs.com).
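
For flavor, a minimal Gobot sketch adapted from the project's documented "blink an LED" example. Import paths follow Gobot v1.x; the serial port and pin are placeholders for whatever your board uses.

    package main

    import (
        "time"

        "gobot.io/x/gobot"
        "gobot.io/x/gobot/drivers/gpio"
        "gobot.io/x/gobot/platforms/firmata"
    )

    func main() {
        // Connect to an Arduino running Firmata on a placeholder serial port.
        adaptor := firmata.NewAdaptor("/dev/ttyACM0")
        led := gpio.NewLedDriver(adaptor, "13")

        work := func() {
            // Toggle the LED once per second.
            gobot.Every(1*time.Second, func() {
                led.Toggle()
            })
        }

        robot := gobot.NewRobot("blinkBot",
            []gobot.Connection{adaptor},
            []gobot.Device{led},
            work,
        )

        robot.Start()
    }
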
張 旭

How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04 | DigitalOcean - 0 views

  • A pod is an atomic unit that runs one or more containers.
  • Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.
  • Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod's IP.
  • Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.
  • pod network plugins. For this cluster, you will use Flannel, a stable and performant option.
  • Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from.
  • kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file.
  • deploy Nginx using Deployments and Services
  • A deployment is a type of Kubernetes object that ensures there's always a specified number of pods running based on a defined template, even if the pod crashes during the cluster's lifetime.
  • NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster
  • Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external.
  • load balancing requests to multiple pods
  • Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work
  • how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.
  • Understanding the types of services and the options they have is essential for running both stateless and stateful applications.
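
A sketch of the Deployment-plus-Service pattern the tutorial uses, written as a descriptor you could pass to kubectl apply -f; the names, labels, and replica count are illustrative.

    # nginx.yml: a Deployment keeps the desired number of pods running;
    # a NodePort Service exposes them on an arbitrary port on every node.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: NodePort
      selector:
        app: nginx
      ports:
        - port: 80
          targetPort: 80

Apply it with kubectl apply -f nginx.yml, then browse to any node's IP on the NodePort the cluster assigns.
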
張 旭

Introduction to CI/CD with GitLab | GitLab - 0 views

  • deploying code changes at every small iteration, reducing the chance of developing new code based on bugged or failed previous versions
  • based on automating the execution of scripts to minimize the chance of introducing errors while developing applications.
  • For every push to the repository, you can create a set of scripts to build and test your application automatically, decreasing the chance of introducing errors to your app.
  • the code is checked automatically, but human intervention is required to manually and strategically trigger the deployment of the changes.
  • instead of deploying your application manually, you set it to be deployed automatically.
  • .gitlab-ci.yml, located in the root path of your repository
  • all the scripts you add to the configuration file are the same as the commands you run in a terminal on your computer.
  • GitLab will detect it and run your scripts with the tool called GitLab Runner, which works similarly to your terminal.
  • "deploying code changes at every small iteration, reducing the chance of developing new code based on bugged or failed previous versions"
張 旭

The Twelve-Factor App - 0 views

  • software is commonly delivered as a service: called web apps, or software-as-a-service.
  • Use declarative formats for setup automation
  • offering maximum portability between execution environments
  • obviating the need for servers and systems administration
  • Minimize divergence between development and production
  • scale up without significant changes to tooling, architecture, or development practices
  • Ops engineers who deploy or manage such applications.
  • developer building applications which run as a service
  • One codebase
  • many deploys
  • in the environment
  • services as attached resources
  • Explicitly declare
  • separate build and run stages
  • stateless processes
  • Export services via port binding
  • Scale out
  • fast startup and graceful shutdown
  • as similar as possible
  • logs as event streams
  • admin/management tasks as one-off processes
  • "software is commonly delivered as a service: called web apps, or software-as-a-service"
張 旭

Glossary - CircleCI - 0 views

  • User authentication may use LDAP for an instance of the CircleCI application that is installed on your private server or cloud
  • The first user to log into a private installation of CircleCI
  • Contexts provide a mechanism for securing and sharing environment variables across projects.
  • The environment variables are defined as name/value pairs and are injected at runtime.
  • The CircleCI Docker Layer Caching feature allows builds to reuse Docker image layers from previous builds.
  • Image layers are stored in separate volumes in the cloud and are not shared between projects.
  • Layers may only be used by builds from the same project.
  • Environment variables store customer data that is used by a project.
  • Defines the underlying technology to run a job.
  • machine to run your job inside a full virtual machine.
  • docker to run your job inside a Docker container with a specified image
  • A job is a collection of steps.
  • The primary container is the first image listed in config.yml
  • A CircleCI project shares the name of the code repository for which it automates workflows, tests, and deployment.
  • must be added with the Add Project button
  • Following a project enables a user to subscribe to email notifications for the project build status and adds the project to their CircleCI dashboard.
  • A step is a collection of executable commands
  • Users must be added to a GitHub or Bitbucket org to view or follow associated CircleCI projects.
  • Users may not view project data that is stored in environment variables.  
  • A Workflow is a set of rules for defining a collection of jobs and their run order.
  • Workflows are implemented as a directed acyclic graph (DAG) of jobs for greatest flexibility.
  • referred to as Pipelines
  • A workspace is a workflows-aware storage mechanism.
  • A workspace stores data unique to the job, which may be needed in downstream jobs.
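
Several of these glossary terms in one minimal .circleci/config.yml sketch: a docker executor, jobs as collections of steps, and a workflow defining run order as a DAG. The image and commands are placeholders.

    version: 2.1

    jobs:
      build:
        docker:                        # executor: run inside a container
          - image: cimg/base:stable    # first image listed = primary container
        steps:                         # a step: executable commands
          - checkout
          - run: echo "build"
      test:
        docker:
          - image: cimg/base:stable
        steps:
          - checkout
          - run: echo "test"

    workflows:
      build-and-test:
        jobs:
          - build
          - test:
              requires:                # DAG edge: test runs after build
                - build
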
張 旭

jwilder/nginx-proxy: Automated nginx proxy for Docker containers using docker-gen - 0 views

  • docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
  • /var/run/docker.sock:/tmp/docker.sock:ro
  • Use this image to fully support HTTP/2 (including ALPN required by recent Chrome versions).
  • support multiple virtual hosts for a container
  • to connect to your backend using HTTPS instead of HTTP, set VIRTUAL_PROTO=https on the backend container.
  • The contents of /path/to/certs should contain the certificates and private keys for any virtual hosts in use.
  • to replace the default proxy settings for the nginx container, add a configuration file at /etc/nginx/proxy.conf
  • The default configuration blocks the Proxy HTTP request header from being sent to downstream servers
  • add your configuration file under /etc/nginx/conf.d using a name ending in .conf
  • If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one.
  • To add settings on a per-VIRTUAL_HOST basis, add your configuration file under /etc/nginx/vhost.d
  • SNI
  • The default behavior for the proxy when port 80 and 443 are exposed is as follows: If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS is always preferred when available. If the container does not have a usable cert, a 503 will be returned.
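
The README's basic pattern, sketched with placeholder hostnames and image names: run the proxy with the Docker socket mounted read-only, then give each backend a VIRTUAL_HOST (and a VIRTUAL_PORT when it exposes more than one port).

    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy

    # Hypothetical backend: routed by virtual host, served on port 8080.
    docker run -d \
      -e VIRTUAL_HOST=app.example.com \
      -e VIRTUAL_PORT=8080 \
      my-app-image
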
crazylion lee

Open Source Continuous Delivery and Automation Server | GoCD - 0 views

shared by crazylion lee on 19 Apr 18 - No Cached
  • "SIMPLIFY CONTINUOUS DELIVERY"
張 旭

Using Ansible and Ansible Tower with shared roles - 2 views

  • clearly defined roles for dedicated tasks
  • a predefined structure of folders and files to hold your automation code.
  • Roles can be part of your project repository.
  • a better way is to keep a role in its own repository.
  • to be available to a playbook, the role still needs to be included.
  • The best way to make shared roles available to your playbooks is to use a function built into Ansible itself: the ansible-galaxy command.
  • ansible-galaxy can read a file specifying which external roles need to be imported for a successful Ansible run: requirements.yml
  • requirements.yml ensures that each role used can be pinned to a certain release tag value, commit hash, or branch name.
  • Each time Ansible Tower checks out a project it looks for a roles/requirements.yml. If such a file is present, a new version of each listed role is copied to the local checkout of the project and thus available to the relevant playbooks.
  • stick to the directory name roles, sitting in the root of your project directory.
  • have one requirements.yml only, and keep it at roles/requirements.yml
  • "clearly defined roles for dedicated tasks"
張 旭

Deploying Rails Apps, Part 6: Writing Capistrano Tasks - Vladi Gleba - 0 views

  • we can write our own tasks to help us automate various things.
  • organizing all of the tasks here under a namespace
  • upload a file from our local computer.
  • another thing to learn about is SSHKit and the various methods it provides
  • SSHKit was actually developed and released with Capistrano 3, and it’s basically a lower-level tool that provides methods for connecting and interacting with remote servers
  • on(): specifies the server to run on
  • within(): specifies the directory path to run in
  • with(): specifies the environment variables to run with
  • run on the application server
  • within the path specified
  • with certain environment variables set
  • execute(): the workhorse that runs the commands on your server
  • upload(): uploads a file from your local computer to your remote server
  • capture(): executes a command and returns its output as a string
    • 張 旭
       
      capture runs on the remote server
  • upload() has the bang symbol (!) because that’s how it’s defined in SSHKit, and it’s just a convention letting us know that the method will block until it finishes.
  • But in order to ensure rake runs with the proper environment variables set, we have to use rake as a symbol and pass db:seed as a string
  • This format will also be necessary whenever you’re running any other Rails-specific commands that rely on certain environment variables being set
  • I recommend you take a look at SSHKit’s example page to learn more
  • make sure we pushed all our local changes to the remote master branch
  • run this task before Capistrano runs its own deploy task
  • actually creates three separate tasks
  • I created a namespace called deploy to contain these tasks since that’s what they’re related to.
  • we’re using the callbacks inside a namespace to make sure Capistrano knows which tasks the callbacks are referencing.
  • custom recipe (a Capistrano term meaning a series of tasks)
  • /shared: holds files and directories that persist throughout deploys
  • When you run cap production deploy, you’re actually calling a Capistrano task called deploy, which then sequentially invokes other tasks
  • your favorite browser (I hope it’s not Internet Explorer)
  • Deployment is hard and takes a while to sink in.
  • the most important thing is to not get discouraged
  • I didn’t want other people going through the same thing
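
Pulling those pieces together, a sketch of a custom recipe in the style the post describes: a namespaced task built on SSHKit's on/within/with/execute, plus a callback that aborts the deploy when local changes have not been pushed. Roles, paths, and branch names are placeholders.

    namespace :deploy do
      desc "Seed the database"
      task :seed do
        on roles(:app) do                      # which server to run on
          within current_path do               # directory path to run in
            with rails_env: fetch(:rails_env, "production") do
              # :rake as a symbol and "db:seed" as a string, so the
              # environment variables set above actually apply.
              execute :rake, "db:seed"
            end
          end
        end
      end

      desc "Abort unless local changes are pushed to origin/master"
      task :check_revision do
        unless `git rev-parse HEAD`.strip == `git rev-parse origin/master`.strip
          puts "WARNING: HEAD is not the same as origin/master. Run `git push` first."
          exit 1
        end
      end

      # Run the check before Capistrano's own deploy task starts.
      before :starting, :check_revision
    end
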