Larvata: Group items tagged "role"

張 旭

Using Ansible and Ansible Tower with shared roles - 2 views

  • clearly defined roles for dedicated tasks
  • a predefined structure of folders and files to hold your automation code.
  • Roles can be part of your project repository.
  • a better way is to keep a role in its own repository.
  • to be available to a playbook, the role still needs to be included.
  • The best way to make shared roles available to your playbooks is to use functionality built into Ansible itself: the ansible-galaxy command
  • ansible-galaxy can read a file specifying which external roles need to be imported for a successful Ansible run: requirements.yml
  • requirements.yml ensures that each role used can be pinned to a certain release tag, commit hash, or branch name.
  • Each time Ansible Tower checks out a project it looks for a roles/requirements.yml. If such a file is present, a new version of each listed role is copied to the local checkout of the project and thus available to the relevant playbooks.
  • stick to the directory name roles, sitting in the root of your project directory.
  • have one requirements.yml only, and keep it at roles/requirements.yml (a minimal example follows this entry)
張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set, the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • the helm list command will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources (see the manifest sketch after this entry).
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us to skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • To switch from the default backend to the secrets backend, you’ll have to do the migration on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and uses its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can roll back a deleted resource, and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • it is not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s capabilities to install resources to certain namespaces, or grant a Helm client running in a pod access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • with a cluster that is well-secured in a private network, with no data-sharing and no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employs a role-based access control (RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
crazylion lee

Xplain - 0 views

  •  
    "Last year, I wrote an article describing the free and open-source graphics stack, explaining all of the interconnected pieces: X11, graphics drivers, DRM, and DRI, and their roles in getting pixels placed on the screen. In the year since, I've answered a lot of really good questions from the Linux community about my post, about X11, and about Wayland. I've learned a lot. Several new technologies have appeared, and I have started to take a more active role in this myself, as part of our efforts to port GNOME to Wayland."
張 旭

Authentication, Permissions and Roles in Rails with Devise, CanCan and Role Model | Phase2 - 0 views

  • Devise is a modular user authentication system
  • just gradually investigating the components you need for your app and configuring them as you need
  • define permissions
張 旭

Custom Resources | Kubernetes - 0 views

  • Custom resources are extensions of the Kubernetes API
  • A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind
  • Custom resources can appear and disappear in a running cluster through dynamic registration
  • Once a custom resource is installed, users can create and access its objects using kubectl
  • When you combine a custom resource with a custom controller, custom resources provide a true declarative API.
  • A declarative API allows you to declare or specify the desired state of your resource and tries to keep the current state of Kubernetes objects in sync with the desired state.
  • Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources.
  • The Operator pattern combines custom resources and custom controllers.
  • the API represents a desired state, not an exact state.
  • define configuration of applications or infrastructure.
  • The main operations on the objects are CRUD-y (creating, reading, updating and deleting).
  • The client says "do this", and then gets an operation ID back, and has to check a separate Operation object to determine completion of the request.
  • The natural operations on the objects are not CRUD-y.
  • High-bandwidth access (tens of requests per second, sustained) is needed.
  • Use a ConfigMap if any of the following apply
  • You want to put the entire config file into one key of a configMap.
  • You want to perform rolling updates via Deployment, etc., when the file is updated.
  • Use a secret for sensitive data, which is similar to a configMap but more secure.
  • You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa.
  • You want the object to be an abstraction over a collection of controlled resources, or a summarization of other resources.
  • CRDs are simple and can be created without any programming.
  • Aggregated APIs are subordinate API servers that sit behind the primary API server
  • CRDs allow users to create new types of resources without adding another API server
  • Defining a CRD object creates a new custom resource with a name and schema that you specify (see the example definition after this list).
  • The name of a CRD object must be a valid DNS subdomain name
  • each resource in the Kubernetes API requires code that handles REST requests and manages persistent storage of objects.
  • The main API server delegates requests to you for the custom resources that you handle, making them available to all of its clients.
  • The new endpoints support CRUD basic operations via HTTP and kubectl
  • Custom resources consume storage space in the same way that ConfigMaps do.
  • Aggregated API servers may use the same storage as the main API server
  • CRDs always use the same authentication, authorization, and audit logging as the built-in resources of your API server.
  • most RBAC roles will not grant access to the new resources (except the cluster-admin role or any role created with wildcard rules).
  • CRDs and Aggregated APIs often come bundled with new role definitions for the types they add.
張 旭

Variables - Ansible Documentation - 0 views

  • with the last listed variables winning prioritization
  • anything that goes into “role defaults” (the defaults folder inside the role) is the most malleable and easily overridden.
  • Anything in the vars directory of the role overrides previous versions of that variable in namespace.
  • with command line -e extra vars always winning (a short sketch of this precedence follows)
張 旭

RolifyCommunity/rolify: Role management library with resource scoping - 0 views

  •  
    "resource"
張 旭

Keep your Terraform code DRY - 0 views

  • Each root terragrunt.hcl file (the one at the environment level, e.g. prod/terragrunt.hcl) should define a generate block to generate the AWS provider configuration to assume the role for that environment.
  • The include block tells Terragrunt to use the exact same Terragrunt configuration from the terragrunt.hcl file specified via the path parameter.
張 旭

Creating a cluster with kubeadm | Kubernetes - 0 views

  • (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes
  • set the --pod-network-cidr to a provider-specific value (both settings appear in the config sketch after this list).
  • kubeadm tries to detect the container runtime by using a list of well known endpoints.
  • kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
  • Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
  • The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
  • You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
  • Take care that your Pod network must not overlap with any of the host networks
  • Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
  • You can install only one Pod network per cluster.
  • The cluster created here has a single control-plane node, with a single etcd database running on it.
  • The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
  • By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
  • kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
張 旭

Production environment | Kubernetes - 0 views

  • to promote an existing cluster for production use
  • Separating the control plane from the worker nodes.
  • Having enough worker nodes available
  • You can use role-based access control (RBAC) and other security mechanisms to make sure that users and workloads can get access to the resources they need, while keeping workloads, and the cluster itself, secure. You can set limits on the resources that users and workloads can access by managing policies and container resources.
  • you need to plan how to scale to relieve increased pressure from more requests to the control plane and worker nodes or scale down to reduce unused resources.
  • Managed control plane: Let the provider manage the scale and availability of the cluster's control plane, as well as handle patches and upgrades.
  • The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine.
  • You can deploy a control plane using tools such as kubeadm, kops, and kubespray.
  • Secure communications between control plane services are implemented using certificates.
  • Certificates are automatically generated during deployment or you can generate them using your own certificate authority.
  • Separate and backup etcd service: The etcd services can either run on the same machines as other control plane services or run on separate machines
  • Create multiple control plane systems: For high availability, the control plane should not be limited to a single machine
  • Some deployment tools set up the Raft consensus algorithm to do leader election of Kubernetes services. If the primary goes away, another service elects itself and takes over.
  • Groups of zones are referred to as regions.
  • if you installed with kubeadm, there are instructions to help you with Certificate Management and Upgrading kubeadm clusters.
  • Production-quality workloads need to be resilient and anything they rely on needs to be resilient (such as CoreDNS).
  • Add nodes to the cluster: If you are managing your own cluster you can add nodes by setting up your own machines and either adding them manually or having them register themselves to the cluster’s apiserver.
  • Set up node health checks: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy.
  • Authentication: The apiserver can authenticate users using client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
  • Authorization: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization.
  • Role-based access control (RBAC): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole); a minimal Role sketch follows this list.
  • Attribute-based access control (ABAC): Lets you create policies based on resource attributes in the cluster and will allow or deny access based on those attributes.
  • Set limits on workload resources
  • Set namespace limits: Set per-namespace quotas on things like memory and CPU
  • Prepare for DNS demand: If you expect workloads to massively scale up, your DNS service must be ready to scale up as well.
張 旭

Manage nodes in a swarm | Docker Documentation - 0 views

  • Drain means the scheduler doesn’t assign new tasks to the node. The scheduler shuts down any existing tasks and schedules them on an available node.
  • Reachable means the node is a manager node participating in the Raft consensus quorum. If the leader node becomes unavailable, the node is eligible for election as the new leader.
  • If a manager node becomes unavailable, you should either join a new manager node to the swarm or promote a worker node to be a manager.
  • docker node inspect self --pretty
  • docker node update --availability drain node
  • use node labels in service constraints
  • The labels you set for nodes using docker node update apply only to the node entity within the swarm
  • node labels can be used to limit critical tasks to nodes that meet certain requirements (see the placement-constraint sketch after this entry)
  • promote a worker node to the manager role
  • demote a manager node to the worker role
  • If the last manager node leaves the swarm, the swarm becomes unavailable, requiring you to take disaster recovery measures.
張 旭

plataformatec/simple_form - 0 views

  • The basic goal of Simple Form is to not touch your way of defining the layout
  • by default contains label, hints, errors and the input itself
  • Simple Form acts as a DSL and just maps your input type (retrieved from the column definition in the database) to a specific helper method.
  • can overwrite the default label by passing it to the input method
  • configure the html of any of them
  • disable labels, hints or error
  • add a hint, an error, or even a placeholder
  • add an inline label
  • pass any html attribute straight to the input, by using the :input_html option
  • use the :defaults option in simple_form_for
  • Simple Form generates a wrapper div around your label and input by default, and you can pass any html attribute to that wrapper using the :wrapper_html option
  • By default all inputs are required
  • the required property of any input can be overwritten
  • Simple Form will look at the column type in the database and use an appropriate input for the column
  • lets you overwrite the default input type it creates
  • can also render boolean attributes using as: :select to show a dropdown.
  • give the :disabled option to Simple Form, and it'll automatically mark the wrapper as disabled with a CSS class
  • Simple Form accepts same options as their corresponding input type helper in Rails
  • Any extra option passed to these methods will be rendered as html option.
  • use label, hint, input_field, error and full_error helpers
  • the easiest way to strip away all the div wrappers around the <input> field is to use f.input_field
  • changing boolean_style from default value :nested to :inline
  • overriding the :collection option
  • Collections can be arrays or ranges, and when a :collection is given the :select input will be rendered by default
  • Other types of collection are :radio_buttons and :check_boxes
  • label_method
  • value_method
  • Both of these options also accept lambda/procs
  • define a to_label method on your model as Simple Form will search for and use :to_label as a :label_method first if it is found
  • create grouped collection selects, that will use the html optgroup tags
  • Grouped collection inputs accept the same :label_method and :value_method options
  • group_method
  • group_label_method
  • configured with a default value to be used on the site through the SimpleForm.country_priority and SimpleForm.time_zone_priority helpers.
  • association
  • render a :select input for choosing the :company, and another :select input with :multiple option for the :roles
  • all options available to :select, :radio_buttons and :check_boxes are also available to association
  • declare different labels and values
  • the association helper is currently only tested with Active Record
  • f.input
  • f.select
  • create a <button> element
  • simple_fields_for
  • Creates a collection of radio inputs with labels associated
  • Creates a collection of checkboxes with labels associated
  • collection_radio_buttons
  • collection_check_boxes
  • associations in your model
  • Role.all
  • the html element you will get for each attribute according to its database definition
  • redefine existing Simple Form inputs by creating a new class with the same name
  • Simple Form uses all power of I18n API to lookup labels, hints, prompts and placeholders
  • specify defaults for all models under the 'defaults' key
  • Simple Form will always look for a default attribute translation under the "defaults" key if no specific is found inside the model key
  • Simple Form will fallback to default human_attribute_name from Rails when no other translation is found for labels.
  • Simple Form will only do the lookup for options if you give a collection composed of symbols only.
  • "Add %{model}"
  • translations for labels, hints and placeholders for a namespaced model, e.g. Admin::User, should be placed under admin_user, not under admin/user
  • This difference exists because Simple Form relies on object_name provided by Rails' FormBuilder to determine the translation path for a given object instead of i18n_key from the object itself.
  • configure how your components will be rendered using the wrappers API
  • optional
  • unless_blank
  • By default, Simple Form will generate input field types and attributes that are supported in HTML5
  • The HTML5 extensions include the new field types such as email, number, search, url, tel, and the new attributes such as required, autofocus, maxlength, min, max, step.
  • If you want to have all other HTML 5 features, such as the new field types, you can disable only the browser validation
  • add novalidate to a specific form by setting the option on the form itself
  • the Simple Form configuration file
  • passing the html5 option
  • as: :date, html5: true
張 旭

Full Cycle Developers at Netflix - Operate What You Build - 1 views

  • Researching issues felt like bouncing a rubber ball between teams, hard to catch the root cause and harder yet to stop from bouncing between one another.
  • In the past, Edge Engineering had ops-focused teams and SRE specialists who owned the deploy+operate+support parts of the software life cycle
  • hearing about those problems second-hand
  • devs could push code themselves when needed, and also were responsible for off-hours production issues and support requests
  • What were we trying to accomplish and why weren’t we being successful?
  • These specialized roles create efficiencies within each segment while potentially creating inefficiencies across the entire life cycle.
  • Grouping differing specialists together into one team can reduce silos, but having different people do each role adds communication overhead, introduces bottlenecks, and inhibits the effectiveness of feedback loops.
  • devops principles
  • develops a system also be responsible for operating and supporting that system
  • Each development team owns deployment issues, performance bugs, capacity planning, alerting gaps, partner support, and so on.
  • Those centralized teams act as force multipliers by turning their specialized knowledge into reusable building blocks.
  • Communication and alignment are the keys to success.
  • Full cycle developers are expected to be knowledgeable and effective in all areas of the software life cycle.
  • ramping up on areas they haven’t focused on before
  • We run dev bootcamps and other forms of ongoing training to impart this knowledge and build up these skills
  • “how can I automate what is needed to operate this system?”
  • “what self-service tool will enable my partners to answer their questions without needing me to be involved?”
  • A full cycle developer thinks and acts like an SWE, SDET, and SRE. At times they create software that solves business problems, at other times they write test cases for that, and still other times they automate operational aspects of that system.
  • the need for continuous delivery pipelines, monitoring/observability, and so on.
  • Tooling and automation help to scale expertise, but no tool will solve every problem in the developer productivity and operations space
張 旭

Service objects in Rails will help you design clean and maintainable code. Here's how. - 0 views

  • Services have the benefit of concentrating the core logic of the application in a separate object, instead of scattering it around controllers and models.
  • Additional initialize arguments might include other context information if applicable.
  • And as programmers, we know that when something can go wrong, sooner or later it will!
  • we need a way to signal success or failure when using a service
  • what ActiveRecord save method uses
  • if the service’s role is to create or update Rails models, it makes sense to return such an object as the result.
  • utility objects to signal success or error
  • services will be used on the boundary between user interface and application
  • All the business logic is encapsulated in services and models
  • how we can use Service Objects, Status Objects and Rails’s Responders to produce a nice, consistent API
張 旭

Creating Reusable Playbooks - Ansible Documentation - 0 views

  • Ansible pre-processes all static imports during Playbook parsing time
  • Dynamic includes are processed during runtime at the point in which that task is encountered.
  • An advantage of using include* statements is looping: when a loop is used with an include, the included tasks or role will be executed once for each item in the loop (see the sketch after this list).
  • loops cannot be used with imports at all
張 旭

Boosting your kubectl productivity ♦︎ Learnk8s - 0 views

  • kubectl is your cockpit to control Kubernetes.
  • kubectl is a client for the Kubernetes API
  • Kubernetes API is an HTTP REST API.
  • This API is the real Kubernetes user interface.
  • Kubernetes is fully controlled through this API
  • every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
  • the main job of kubectl is to carry out HTTP requests to the Kubernetes API
  • Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
  • Kubernetes is a fully resource-centred system
  • Kubernetes API reference is organised as a list of resource types with their associated operations.
  • This is how kubectl works for all commands that interact with the Kubernetes cluster.
  • kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
  • it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
  • Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
  • components on the master nodes
  • Storage backend: stores resource definitions (usually etcd is used)
  • API server: provides Kubernetes API and manages storage backend
  • Controller manager: ensures resource statuses match specifications
  • Scheduler: schedules Pods to worker nodes
  • component on the worker nodes
  • Kubelet: manages execution of containers on a worker node
  • triggers the ReplicaSet controller, which is a sub-process of the controller manager.
  • the scheduler, which watches for Pod definitions that are not yet scheduled to a worker node.
  • creating and updating resources in the storage backend on the master node.
  • The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
  • Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
  • However, these components do not access the storage backend directly, but only through the Kubernetes API.
    • 張 旭
       
      Brilliant: the components all communicate with one another via API calls; good microservice behaviour.
  • double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
  • All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
  • The storage backend stores the state (i.e. resources) of Kubernetes.
  • command completion is a shell feature that works by means of a completion script.
  • A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
  • kubectl completion zsh
  • /etc/bash_completion.d directory (create it, if it doesn't exist)
  • source <(kubectl completion bash)
  • source <(kubectl completion zsh)
  • autoload -Uz compinit && compinit
  • the API reference, which contains the full specifications of all resources.
  • kubectl api-resources
  • displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
  • .spec
  • custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
  • kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
  • kubectl explain pod.spec.
  • kubectl explain pod.metadata.
  • browse the resource specifications and try it out with any fields you like!
  • JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
  • with kubectl explain, only a subset of the JSONPath capabilities is supported
  • Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
  • kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
  • a Pod may contain more than one container.
  • The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
  • kubectl get nodes -o yaml and kubectl get nodes -o json
  • The default kubeconfig file is ~/.kube/config
  • with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
  • Within a cluster, you can set up multiple namespaces (a namespace is a kind of "virtual" cluster within a physical cluster)
  • overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
  • Namespace: the namespace to use when connecting to the cluster
  • a one-to-one mapping between clusters and contexts.
  • When kubectl reads a kubeconfig file, it always uses the information from the current context.
  • just change the current context in the kubeconfig file
  • to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context (see the kubeconfig sketch after this list)
  • kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
  • for switching between clusters and namespaces is kubectx.
  • kubectl config get-contexts
  • just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
  • kubectl proxy
  • kubectl get roles
  • kubectl get pod
  • Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
  • To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
  • krew itself is a kubectl plugin
  • check out the kubectl-plugins GitHub topic
  • The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
  • kubectl plugins can be written in any programming or scripting language.
  • you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
  • a kubeconfig file consists of a set of contexts
  • changing the current context means changing the cluster, if you have only a single context per cluster.
chiehting

Top 5 Kubernetes Best Practices From Sandeep Dinesh (Google) - DZone Cloud - 0 views

  • Best Practices for Kubernetes
  • #1: Building Containers
  • Don’t Trust Arbitrary Base Images!
  • There’s a lot wrong with this: you could be using the wrong version of code that has exploits, has a bug in it, or worse it could have malware bundled in on purpose—you just don’t know.
  • Keep Base Images Small
  • Node.js, for example, includes an extra 600MB of libraries you don’t need.
  • Use the Builder Pattern
  • #2: Container Internals
  • Use a Non-Root User Inside the Container
  • Make the File System Read-Only
  • One Process per Container
  • Don’t Restart on Failure. Crash Cleanly Instead.
  • Log Everything to stdout and stderr
  • #3: Deployments
  • Use the “Record” Option for Easier Rollbacks
  • Use Plenty of Descriptive Labels
  • Use Sidecars for Proxies, Watchers, Etc.
  • Don’t Use Sidecars for Bootstrapping!
  • Don’t Use :Latest or No Tag
  • Readiness and Liveness Probes are Your Friend (see the probe sketch after this list)
  • #4: Services
  • Don’t Use type: LoadBalancer
  • Type: Nodeport Can Be “Good Enough”
  • Use Static IPs, They Are Free!
  • Map External Services to Internal Ones
  • #5: Application Architecture
  • Use Helm Charts
  • All Downstream Dependencies Are Unreliable
  • Use Weave Cloud
  • Make Sure Your Microservices Aren’t Too Micro
  • Use Namespaces to Split Up Your Cluster
  • Role-Based Access Control
張 旭

Ansible Tower vs Ansible AWX for Automation - 4sysops - 0 views

  • you can run Ansible freely by downloading the module and running configurations and playbooks from the command line.
  • AWX Project from Red Hat. It provides an open-source version of Ansible Tower that may suit the needs of Tower functionality in many environments.
  • Ansible Tower may be the more familiar option for Ansible users as it is the commercial GUI Ansible tool that provides the officially supported GUI interface, API access, role-based access, scheduling, notifications, and other nice features that allow businesses to manage environments easily with Ansible.
  • Ansible AWX is the open-sourced project that was the foundation on which Ansible Tower was created. With this being said, Ansible AWX is a development branch of code that only undergoes minimal testing and quality engineering testing.
  • Ansible AWX is a powerful open-source, freely available project for testing or using Ansible AWX in a lab, development, or other POC environment.
  • to use an external PostgreSQL database, please note that the minimum version is 9.6+
  • Full enterprise features and functionality of Tower
  • Not limited to 10 nodes
張 旭

AskF5 | Manual Chapter: Working with Partitions - 0 views

  • During BIG-IP® system installation, the system automatically creates a partition named Common
  • An administrative partition is a logical container that you create, containing a defined set of BIG-IP® system objects.
  • No user can delete partition Common itself.
  • With respect to permissions, all users on the system except those with a user role of No Access have read access to objects in partition Common, and by default, partition Common is their current partition.
  • The current partition is the specific partition to which the system is currently set for a logged-in user.
  • A partition access assignment gives a user some level of access to the specified partition.
  • assigning partition access to a user does not necessarily give the user full access to all objects in the partition
  • user account objects also reside in partitions
  • when you first install the BIG-IP system, every existing user account (root and admin) resides in partition Common
  • the partition in which a user account object resides does not affect the partition or partitions to which that user is granted access to manage other BIG-IP objects
  • the object it references resides in partition Common
  • a referenced object must reside either in the same partition as the object that is referencing it, or in partition Common