Larvata / Group items tagged: permission

張 旭

Authentication, Permissions and Roles in Rails with Devise, CanCan and Role Model | Phase2 - 0 views

  • Devise is a modular user authentication system
  • just gradually investigating the components you need for your app and configuring them as you need
  • define permissions
張 旭

Secrets - Kubernetes - 0 views

  • Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
  • A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
  • Users can create secrets, and the system also creates some secrets.
  • To use a secret, a pod needs to reference the secret.
  • A secret can be used with a pod in two ways: as files in a volume mounted on one or more of its containers, or used by the kubelet when pulling images for the pod.
  • --from-file
  • You can also create a Secret in a file first, in json or yaml format, and then create that object.
  • The Secret contains two maps: data and stringData.
  • The data field is used to store arbitrary data, encoded using base64.
  • Kubernetes automatically creates secrets which contain credentials for accessing the API and it automatically modifies your pods to use this type of secret.
  • kubectl get and kubectl describe avoid showing the contents of a secret by default.
  • stringData field is provided for convenience, and allows you to provide secret data as unencoded strings.
  • where you are deploying an application that uses a Secret to store a configuration file, and you want to populate parts of that configuration file during your deployment process.
  • If a field is specified in both data and stringData, the value from stringData is used.
  • The keys of data and stringData must consist of alphanumeric characters, ‘-’, ‘_’ or ‘.’.
  • Newlines are not valid within these strings and must be omitted.
  • When using the base64 utility on Darwin/macOS, users should avoid using the -b option to split long lines.
  • create a Secret from generators and then apply it to create the object on the API server.
  • The generated Secret's name has a suffix appended by hashing the contents.
  • base64 --decode
  • Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod.
  • Multiple pods can reference the same secret.
  • Each key in the secret data map becomes the filename under mountPath
  • each container needs its own volumeMounts block, but only one .spec.volumes is needed per secret
  • use .spec.volumes[].secret.items field to change target path of each key:
  • If .spec.volumes[].secret.items is used, only keys specified in items are projected. To consume all keys from the secret, all of them must be listed in the items field.
  • You can also specify the permission mode bits that files in a secret will have. If you don't specify any, 0644 is used by default.
  • JSON spec doesn’t support octal notation, so use the value 256 for 0400 permissions.
  • Inside the container that mounts a secret volume, the secret keys appear as files and the secret values are base-64 decoded and stored inside these files.
  • Mounted Secrets are updated automatically
  • Kubelet is checking whether the mounted secret is fresh on every periodic sync.
  • cache propagation delay depends on the chosen cache type
  • A container using a Secret as a subPath volume mount will not receive Secret updates.
  • env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
  • Inside a container that consumes a secret in an environment variables, the secret keys appear as normal environment variables containing the base-64 decoded values of the secret data.
  • An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry password to the Kubelet so it can pull a private image on behalf of your Pod.
  • a secret needs to be created before any pods that depend on it.
  • Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace.
  • Individual secrets are limited to 1MiB in size.
  • Kubelet only supports use of secrets for Pods it gets from the API server.
  • Secrets must be created before they are consumed in pods as environment variables unless they are marked as optional.
  • References to Secrets that do not exist will prevent the pod from starting.
  • References via secretKeyRef to keys that do not exist in a named Secret will prevent the pod from starting.
  • Once a pod is scheduled, the kubelet will try to fetch the secret value.
  • Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret.
  • volumes:
      - name: secret-volume
        secret:
          secretName: ssh-key-secret
  • Special characters such as $, \, *, and ! require escaping. If the password you are using has special characters, you need to escape them using the \ character.
  • You do not need to escape special characters in passwords from files
  • make that key begin with a dot
  • Dotfiles in secret volume
  • .secret-file
  • a frontend container which handles user interaction and business logic, but which cannot see the private key;
  • a signer container that can see the private key, and responds to simple signing requests from the frontend
  • When deploying applications that interact with the secrets API, access should be limited using authorization policies such as RBAC
  • watch and list requests for secrets within a namespace are extremely powerful capabilities and should be avoided
  • watch and list all secrets in a cluster should be reserved for only the most privileged, system-level components.
  • additional precautions with secret objects, such as avoiding writing them to disk where possible.
  • A secret is only sent to a node if a pod on that node requires it
  • only the secrets that a pod requests are potentially visible within its containers
  • each container in a pod has to request the secret volume in its volumeMounts for it to be visible within the container.
  • In the API server, secret data is stored in etcd (the consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data).
  • limit access to etcd to admin users
  • Base64 encoding is not an encryption method and is considered the same as plain text.
  • A user who can create a pod that uses a secret can also see the value of that secret.
  • anyone with root on any node can read any secret from the apiserver, by impersonating the kubelet.
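
Pulling the annotations above together, a minimal sketch of creating a Secret and consuming it from a Pod, both as an environment variable and as a mounted volume (the names, image, and mount path are illustrative, not taken from the bookmarked page):

    # secret.yaml -- stringData avoids encoding values in base64 by hand
    apiVersion: v1
    kind: Secret
    metadata:
      name: mysecret
    stringData:
      username: admin
      password: s3cr3t
    ---
    # pod.yaml -- consumes the secret as an env var and as files in a volume
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-demo
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          env:
            - name: SECRET_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
          volumeMounts:
            - name: secret-volume
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: secret-volume
          secret:
            secretName: mysecret
            defaultMode: 0400   # in a JSON spec use the decimal value 256 instead

Apply with kubectl apply -f secret.yaml -f pod.yaml; as noted above, the Secret must exist (or be marked optional) before the Pod can start.
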
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • GitLab flow is a clearly defined set of best practices. It combines feature-driven development and feature branches with issue tracking.
  • In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo. The third step is pushing to a shared remote repository.
  • branching model
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • It is a convention to call your default branch master and to mostly branch from and merge to this.
  • Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • GitHub flow assumes you can deploy to production every time you merge a feature branch.
  • You can deploy a new version by merging master into the production branch. If you need to know what code is in production, you can just checkout the production branch to see.
  • Production branch
  • Environment branches
  • have an environment that is automatically updated to the master branch.
  • deploy the master branch to staging.
  • To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
  • Go live by merging the pre-production branch into the production branch.
  • Release branches
  • work with release branches if you need to release software to the outside world.
  • each branch contains a minor version
  • After announcing a release branch, only add serious bug fixes to the branch.
  • merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
  • Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
  • If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
  • the merge request automatically updates when new commits are pushed to the branch.
  • If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
  • In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
  • if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
  • After you merge a feature branch, you should remove it from the source control software.
  • Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
  • If there is no issue yet, create the issue
  • The issue title should describe the desired state of the system.
  • For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
  • create a branch for the issue from the master branch
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
  • When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
  • Merge requests always create a merge commit, even when the branch could be merged without one. This merge strategy is called “no fast-forward” in Git.
  • Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
  • At any time, there is at most one branch for every issue.
  • It is possible that one feature branch solves more than one issue.
  • GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
  • Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
  • Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • every time you rebase, you have to resolve similar conflicts.
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
  • keep your feature branches short-lived.
  • Most feature branches should take less than one day of work.
  • If your feature branches often take more than a day of work, try to split your features into smaller units of work.
  • You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
  • Commit often and push frequently
  • You should push your feature branch frequently, even when it is not yet ready for review.
  • A commit message should reflect your intention, not just the contents of the commit.
  • each merge request must be tested before it is accepted.
  • test the master branch after each change.
  • If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
  • When creating a feature branch, always branch from an up-to-date master.
  • Do not merge from upstream again if your code can work and merge cleanly without doing so.
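
A sketch of the "upstream first" policy described above, with illustrative branch names and a placeholder commit SHA:

    # fix the bug on a short-lived feature branch cut from an up-to-date master
    git checkout master && git pull
    git checkout -b fix/remove-user-error
    # ...commit the fix, push, open a merge request...

    # the merge button performs a "no fast-forward" merge, roughly:
    git checkout master
    git merge --no-ff fix/remove-user-error

    # then bring only that fix into the release branch
    git checkout 2-3-stable
    git cherry-pick <sha-of-the-fix-commit>
    git push origin 2-3-stable
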
張 旭

Production environment | Kubernetes - 0 views

  • to promote an existing cluster for production use
  • Separating the control plane from the worker nodes.
  • Having enough worker nodes available
  • You can use role-based access control (RBAC) and other security mechanisms to make sure that users and workloads can get access to the resources they need, while keeping workloads, and the cluster itself, secure. You can set limits on the resources that users and workloads can access by managing policies and container resources.
  • you need to plan how to scale to relieve increased pressure from more requests to the control plane and worker nodes or scale down to reduce unused resources.
  • Managed control plane: Let the provider manage the scale and availability of the cluster's control plane, as well as handle patches and upgrades.
  • The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine.
  • You can deploy a control plane using tools such as kubeadm, kops, and kubespray.
  • Secure communications between control plane services are implemented using certificates.
  • Certificates are automatically generated during deployment or you can generate them using your own certificate authority.
  • Separate and backup etcd service: The etcd services can either run on the same machines as other control plane services or run on separate machines
  • Create multiple control plane systems: For high availability, the control plane should not be limited to a single machine
  • Some deployment tools set up the Raft consensus algorithm to do leader election of Kubernetes services. If the primary goes away, another service elects itself and takes over.
  • Groups of zones are referred to as regions.
  • if you installed with kubeadm, there are instructions to help you with Certificate Management and Upgrading kubeadm clusters.
  • Production-quality workloads need to be resilient and anything they rely on needs to be resilient (such as CoreDNS).
  • Add nodes to the cluster: If you are managing your own cluster you can add nodes by setting up your own machines and either adding them manually or having them register themselves to the cluster’s apiserver.
  • Set up node health checks: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy.
  • Authentication: The apiserver can authenticate users using client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
  • Authorization: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization.
  • Role-based access control (RBAC): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole).
  • Attribute-based access control (ABAC): Lets you create policies based on resource attributes in the cluster and will allow or deny access based on those attributes.
  • Set limits on workload resources
  • Set namespace limits: Set per-namespace quotas on things like memory and CPU
  • Prepare for DNS demand: If you expect workloads to massively scale up, your DNS service must be ready to scale up as well.
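
A minimal sketch of the Role vs. ClusterRole distinction mentioned above (the namespace, names, and subject are illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role                       # permissions scoped to one namespace
    metadata:
      namespace: team-a
      name: pod-reader
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: team-a
      name: read-pods
    subjects:
      - kind: User
        name: jane
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

Swapping Role for ClusterRole (and RoleBinding for ClusterRoleBinding) grants the same verbs across the entire cluster instead of a single namespace.
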
張 旭

Share Process Namespace between Containers in a Pod | Kubernetes - 0 views

  • When process namespace sharing is enabled, processes in a container are visible to all other containers in the same pod.
  • It's even possible to access the file system of another container using the /proc/$pid/root link.
  • Pods share many resources so it makes sense they would also share a process namespace.
  • Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions.
  • Container filesystems are visible to other containers in the pod through the /proc/$pid/root link. This makes debugging easier, but it also means that filesystem secrets are protected only by filesystem permissions.
  •  
    "When process namespace sharing is enabled, processes in a container are visible to all other containers in the same pod. "
張 旭

phusion/baseimage-docker - 1 views

    • 張 旭
       
      When plain Docker runs a command, by default it treats the passed-in COMMAND as the PID 1 process; once that command finishes, the container exits and no other daemons get to run. Baseimage solves this problem.
    • crazylion lee
       
      Great stuff.
  • docker exec
  • Through SSH
  • docker exec -t -i YOUR-CONTAINER-ID bash -l
  • Login to the container
  • Baseimage-docker only advocates running multiple OS processes inside a single container.
  • Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
  • A tool for running a command as another user
  • The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
  • All syslog messages are forwarded to "docker logs".
  • Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
  • Baseimage-docker provides tools to encourage running processes as different users
  • sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
  • Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
  • using environment variables to pass parameters to containers is very much the "Docker way"
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • the shell script must run the daemon without letting it daemonize/fork it.
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • variables will also be passed to all child processes
  • Environment variables on Unix are inherited on a per-process basis
  • there is no good central place for defining environment variables for all applications and services
  • centrally defining environment variables
  • One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
  • a one-shot command in a new container
  • immediately exit after the command exits,
  • However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
  • Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
  • Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy
  • Ubuntu is not designed to be run inside Docker
  • According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
  • Syslog-ng seems to be much more stable
  • cron daemon
  • Rotates and compresses logs
  • /sbin/setuser
  • A tool for installing apt packages that automatically cleans up after itself.
  • a single logical service inside a single container
  • A daemon is a program which runs in the background of its system, such as a web server.
  • The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
  • If any script exits with a non-zero exit code, the booting will fail.
  • If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
  • any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
  • not possible for a child process to change the environment variables of other processes
  • they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
  • my_init imports environment variables from the directory /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format.
  • modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
  • my_init only activates changes in /etc/container_environment when running startup scripts
  • If the environment variables don't contain sensitive data, then you can also relax the permissions
  • Syslog messages are forwarded to the console
  • syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
  • RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
  • /sbin/my_init --skip-startup-files --quiet --
  • By default, no keys are installed, so nobody can login
  • provide a pregenerated, insecure key (PuTTY format)
  • RUN /usr/sbin/enable_insecure_key
  • docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
  • RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
  • The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
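
A sketch of adding your own daemon as a runit service in a baseimage-docker image, as the annotations above describe (the image tag, service name, user, and binary path are placeholders):

    # Dockerfile
    FROM phusion/baseimage:<tag>
    RUN mkdir -p /etc/service/myapp
    COPY myapp-run.sh /etc/service/myapp/run
    RUN chmod +x /etc/service/myapp/run
    CMD ["/sbin/my_init"]

    #!/bin/sh
    # /etc/service/myapp/run -- run the daemon in the foreground; exec so signals reach it, not the shell
    exec /sbin/setuser www-data /usr/bin/myapp >> /var/log/myapp.log 2>&1
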
張 旭

elabs/pundit: Minimal authorization through OO design and pure Ruby classes - 0 views

  • The class implements some kind of query method
  • Pundit will call the current_user method to retrieve what to send into this argument
  • put these classes in app/policies
  • in leveraging regular Ruby classes and object-oriented design patterns to build a simple, robust and scalable authorization system
  • map to the name of a particular controller action
  • In the generated ApplicationPolicy, the model object is called record.
  • record
  • authorize
  • authorize would have done something like this: raise "not authorized" unless PostPolicy.new(current_user, @post).update?
  • pass a second argument to authorize if the name of the permission you want to check doesn't match the action name.
  • you can chain it
  • authorize returns the object passed to it
  • the policy method in both the view and controller.
  • have some kind of view listing records which a particular user has access to
  • ActiveRecord::Relation
  • Instances of this class respond to the method resolve, which should return some kind of result which can be iterated over.
  • scope.where(published: true)
    • 張 旭
       
      I think the rough idea is: an admin can see all posts, while everyone else can only see posts with published = true.
  • use this class from your controller via the policy_scope method:
  • PostPolicy::Scope.new(current_user, Post).resolve
  • policy_scope(@user.posts).each
  • This method will raise an exception if authorize has not yet been called.
  • verify_policy_scoped to your controller. This will raise an exception in the vein of verify_authorized. However, it tracks if policy_scope is used instead of authorize
  • need to conditionally bypass verification, you can use skip_authorization
  • skip_policy_scope
  • Having a mechanism that ensures authorization happens allows developers to thoroughly test authorization scenarios as units on the policy objects themselves.
  • Pundit doesn't do anything you couldn't have easily done yourself. It's a very small library, it just provides a few neat helpers.
  • all of the policy and scope classes are just plain Ruby classes
  • rails g pundit:policy post
  • define a filter that redirects unauthenticated users to the login page
  • fail more gracefully
  • raise Pundit::NotAuthorizedError, "must be logged in" unless user
  • having rails handle them as a 403 error and serving a 403 error page.
  • config.action_dispatch.rescue_responses["Pundit::NotAuthorizedError"] = :forbidden
  • with I18n to generate error messages
  • retrieve a policy for a record outside the controller or view
  • define a method in your controller called pundit_user
  • Pundit strongly encourages you to model your application in such a way that the only context you need for authorization is a user object and a domain model that you want to check authorization for.
  • Pundit does not allow you to pass additional arguments to policies
  • authorization is dependent on IP address in addition to the authenticated user
  • create a special class which wraps up both user and IP and passes it to the policy.
  • set up a permitted_attributes method in your policy
  • policy(@post).permitted_attributes
  • permitted_attributes(@post)
  • Pundit provides a convenient helper method
  • permit different attributes based on the current action,
  • If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
  • If you don't have an instance for the first argument to authorize, then you can pass the class
  • restart the Rails server
  • Given there is a policy without a corresponding model / ruby class, you can retrieve it by passing a symbol
  • after_action :verify_authorized
  • It is not some kind of failsafe mechanism or authorization mechanism.
  • Pundit will work just fine without using verify_authorized and verify_policy_scoped
  •  
    "Minimal authorization through OO design and pure Ruby classes"
張 旭

RolifyCommunity/rolify: Role management library with resource scoping - 0 views

  •  
    "resource"
張 旭

Kubernetes - Traefik - 0 views

  • allow fine-grained control of Kubernetes resources and API.
  • authorize Traefik to use the Kubernetes API.
  • namespace-specific RoleBindings
  • a single, global ClusterRoleBinding.
  • RoleBindings per namespace make it possible to restrict granted permissions to only the namespaces that Traefik is watching, thereby following the least-privilege principle.
  • The scalability can be much better when using a Deployment
  • you will have a Single-Pod-per-Node model when using a DaemonSet,
  • DaemonSets automatically scale to new nodes, when the nodes join the cluster
  • DaemonSets ensure that only one replica of pods run on any single node.
  • DaemonSets can be run with the NET_BIND_SERVICE capability, which will allow it to bind to port 80/443/etc on each host. This will allow bypassing the kube-proxy, and reduce traffic hops.
  • start with the Daemonset
  • The Deployment has easier up and down scaling possibilities.
  • The DaemonSet automatically scales to all nodes that meets a specific selector and guarantees to fill nodes one at a time.
  • Rolling updates are fully supported from Kubernetes 1.7 for DaemonSets as well.
  • provide the TLS certificate via a Kubernetes secret in the same namespace as the ingress.
  • If there are any errors while loading the TLS section of an ingress, the whole ingress will be skipped.
  • create secret generic
  • Name-based Routing
  • Path-based Routing
  • Traefik will merge multiple Ingress definitions for the same host/path pair into one definition.
  • specify priority for ingress routes
  • traefik.frontend.priority
  • When specifying an ExternalName, Traefik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.
  • By default Traefik will pass the incoming Host header to the upstream resource.
  • traefik.frontend.passHostHeader: "false"
  • type: ExternalName
  • By default, Traefik processes every Ingress objects it observes.
  • It is also possible to set the ingressClass option in Traefik to a particular value. Traefik will only process matching Ingress objects.
  • It is possible to split Ingress traffic in a fine-grained manner between multiple deployments using service weights.
  • use case is canary releases where a deployment representing a newer release is to receive an initially small but ever-increasing fraction of the requests over time.
  • annotations:
      traefik.ingress.kubernetes.io/service-weights: |
        my-app: 99%
        my-app-canary: 1%
  • Over time, the ratio may slowly shift towards the canary deployment until it is deemed to replace the previous main application, in steps such as 5%/95%, 10%/90%, 50%/50%, and finally 100%/0%.
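
A sketch of an Ingress combining the annotations mentioned above (the hostname, service names, ports, and the apiVersion follow the Traefik 1.x era docs and are illustrative):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        kubernetes.io/ingress.class: traefik          # only processed by Traefik (see ingressClass above)
        traefik.frontend.priority: "10"               # optional route priority
        traefik.ingress.kubernetes.io/service-weights: |
          my-app: 99%
          my-app-canary: 1%
    spec:
      rules:
        - host: app.example.com                       # name-based routing
          http:
            paths:
              - path: /
                backend:
                  serviceName: my-app
                  servicePort: 80
              - path: /
                backend:
                  serviceName: my-app-canary
                  servicePort: 80
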
張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set, the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • helm list function will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources.
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • switch from the default backend to the secrets backend, you’ll have to do the migration for this on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and use its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can rollback a deleted resource, and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • with a cluster that is well-secured in a private network with no data-sharing or no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employ a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
  •  
    "Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses."
張 旭

AskF5 | Manual Chapter: Working with Partitions - 0 views

  • During BIG-IP® system installation, the system automatically creates a partition named Common
  • An administrative partition is a logical container that you create, containing a defined set of BIG-IP® system objects.
  • No user can delete partition Common itself.
  • With respect to permissions, all users on the system except those with a user role of No Access have read access to objects in partition Common, and by default, partition Common is their current partition.
  • The current partition is the specific partition to which the system is currently set for a logged-in user.
  • A partition access assignment gives a user some level of access to the specified partition.
  • assigning partition access to a user does not necessarily give the user full access to all objects in the partition
  • user account objects also reside in partitions
  • when you first install the BIG-IP system, every existing user account (root and admin) resides in partition Common
  • the partition in which a user account object resides does not affect the partition or partitions to which that user is granted access to manage other BIG-IP objects
  • the object it references resides in partition Common
  • a referenced object must reside either in the same partition as the object that is referencing it, or in partition Common
張 旭

MongoDB Performance - MongoDB Manual - 0 views

  • MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock.
  • performance limitations as a result of inadequate or inappropriate indexing strategies, or as a consequence of poor schema design patterns.
  • performance issues may be temporary and related to abnormal traffic load.
  • Lock-related slowdowns can be intermittent.
  • If globalLock.currentQueue.total is consistently high, then there is a chance that a large number of requests are waiting for a lock.
  • If globalLock.totalTime is high relative to uptime, the database has existed in a lock state for a significant amount of time.
  • For write-heavy applications, deploy sharding and add one or more shards to a sharded cluster to distribute load among mongod instances.
  • Unless constrained by system-wide limits, the maximum number of incoming connections supported by MongoDB is configured with the maxIncomingConnections setting.
  • When logLevel is set to 0, MongoDB records slow operations to the diagnostic log at a rate determined by slowOpSampleRate.
  • At higher logLevel settings, all operations appear in the diagnostic log regardless of their latency with the following exception
  • Full Time Diagnostic Data Collection (FTDC) mechanism. FTDC data files are compressed, are not human-readable, and inherit the same file access permissions as the MongoDB data files.
  • mongod processes store FTDC data files in a diagnostic.data directory under the instance's storage.dbPath.
  •  
    "MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock."
張 旭

Production Notes - MongoDB Manual - 0 views

  • mongod will not start if dbPath contains data files created by a storage engine other than the one specified by --storageEngine.
  • mongod must possess read and write permissions for the specified dbPath.
  • WiredTiger supports concurrent access by readers and writers to the documents in a collection
  • Journaling guarantees that MongoDB can quickly recover write operations that were written to the journal but not written to data files in cases where mongod terminated due to a crash or other serious failure.
  • To use read concern level of "majority", replica sets must use WiredTiger storage engine.
  • Write concern describes the level of acknowledgement requested from MongoDB for write operations.
  • With stronger write concerns, clients must wait after sending a write operation until MongoDB confirms the write operation at the requested write concern level.
  • By default, authorization is not enabled, and mongod assumes a trusted environment
  • The HTTP interface is disabled by default. Do not enable the HTTP interface in production environments.
  • Avoid overloading the connection resources of a mongod or mongos instance by adjusting the connection pool size to suit your use case.
  • ensure that each mongod or mongos instance has access to two real cores or one multi-core physical CPU.
  • The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores
張 旭

手动安装 Prometheus · 从 Docker 到 Kubernetes 进阶手册 - 1 views

  • The storage.tsdb.path parameter specifies where TSDB data is stored, and storage.tsdb.retention sets how long data is kept. The web.enable-admin-api parameter below can be used to enable access to the admin API. The web.enable-lifecycle parameter is very important: it enables hot reloading, so once prometheus.yml is updated, hitting localhost:9090/-/reload makes the change take effect immediately. Be sure to add this parameter.
  • Prometheus is made up of multiple components, many of which are optional: Prometheus Server, which scrapes metrics and stores time-series data; exporters, which expose metrics for scrape jobs to collect; a pushgateway, to which metrics can be pushed; alertmanager, the component that handles alerting; and adhoc, for data queries.
  • scrape_configs controls which resources Prometheus monitors.
  • Prometheus exposes its own monitoring data over HTTP
  • By default, Prometheus collects metrics from a target's /metrics path
  • RBAC needs to be configured, because Prometheus needs to access Kubernetes resource information
  • The resources to be read may exist in every namespace, which is why a ClusterRole object is used here. Note that the permission rules include a nonResourceURLs attribute, which declares permissions for operating on non-resource metrics endpoints.
  • Add a securityContext attribute with runAsUser set to 0, because Prometheus currently runs as the nobody user; otherwise you will hit permission denied errors like the ones below.
  • PromQL is an ad hoc query language that Prometheus developed to make data aggregation and display easier: whatever you want to query, find the corresponding function and pull your data.
  •  
    "参数storage.tsdb.path指定了 TSDB 数据的存储路径、通过storage.tsdb.retention设置了保留多长时间的数据,还有下面的web.enable-admin-api参数可以用来开启对 admin api 的访问权限,参数web.enable-lifecycle非常重要,用来开启支持热更新的,有了这个参数之后,prometheus.yml 配置文件只要更新了,通过执行localhost:9090/-/reload就会立即生效,所以一定要加上这个参数。"
張 旭

Connection and Privileges Needed - 0 views

  • Percona XtraBackup needs to be able to connect to the database server and perform operations on the server and the datadir when creating a backup, when preparing in some scenarios and when restoring it.
  • When xtrabackup is used, there are two actors involved: the user invoking the program - a system user - and the user performing action in the database server - a database user.
  • these are different users in different places, even though they may have the same username.
  • Once connected to the server, in order to perform a backup you will need READ and EXECUTE permissions at a filesystem level in the server’s datadir.
  •  
    "Percona XtraBackup needs to be able to connect to the database server and perform operations on the server and the datadir when creating a backup, when preparing in some scenarios and when restoring it. "
張 旭

Deploy Replica Set With Keyfile Authentication - MongoDB Manual - 0 views

  • Keyfiles are bare-minimum forms of security and are best suited for testing or development environments.
  • With keyfile authentication, each mongod instance in the replica set uses the contents of the keyfile as the shared password for authenticating other members in the deployment.
  • On UNIX systems, the keyfile must not have group or world permissions.
  • Copy the keyfile to each server hosting the replica set members.
  • the user running the mongod instances is the owner of the file and can access the keyfile.
  • For each member in the replica set, start the mongod with either the security.keyFile configuration file setting or the --keyFile command-line option.
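
A sketch of the keyfile steps summarized above (the paths, replica set name, and mongodb user are illustrative):

    # generate a keyfile and restrict its permissions (no group or world access)
    openssl rand -base64 756 > /srv/mongodb/keyfile
    chmod 400 /srv/mongodb/keyfile
    chown mongodb:mongodb /srv/mongodb/keyfile   # owned by the user that runs mongod

    # copy the keyfile to every member's host, then start each mongod with it
    mongod --replSet rs0 --keyFile /srv/mongodb/keyfile --dbpath /var/lib/mongodb
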
張 旭

Creating a cluster with kubeadm | Kubernetes - 0 views

  • (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes
  • set the --pod-network-cidr to a provider-specific value.
  • kubeadm tries to detect the container runtime by using a list of well known endpoints.
  • kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
  • Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
  • The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
  • You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
  • Take care that your Pod network must not overlap with any of the host networks
  • Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
  • You can install only one Pod network per cluster.
  • The cluster created here has a single control-plane node, with a single etcd database running on it.
  • The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
  • By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
  • kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
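
A sketch of the flags discussed above (the endpoint, CIDR, advertise address, and CNI manifest are placeholders):

    # on the first control-plane node
    sudo kubeadm init \
      --control-plane-endpoint "cluster-endpoint:6443" \
      --pod-network-cidr "10.244.0.0/16" \
      --apiserver-advertise-address "192.168.1.10"

    # give your own user a kubeconfig (do not hand admin.conf to others)
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # install exactly one CNI pod network add-on (manifest URL depends on the add-on)
    kubectl apply -f <cni-addon-manifest.yaml>

    # optional: allow scheduling workloads on control-plane nodes
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
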