Larvata: Group items tagged "tasks"

crazylion lee

The Pragmatic Bookshelf | DevOps in Practice - 0 views

  •  
    "Delivering production software can often be a painful task. Long test periods and the integration between operations and development can ruin or delay a promising delivery. That's what DevOps can fix. DevOps is a cultural change that aims to smoothly integrate development and operations procedures, breaking the barriers between them and focusing on automation, collaboration, and sharing of knowledge and tools. This book shows you how to implement DevOps and Continuous Delivery practices to raise your system's deployment frequency, increasing your production application's stability and robustness."
crazylion lee

Nmap: the Network Mapper - Free Security Scanner - 1 views

shared by crazylion lee on 22 Nov 15
  •  
    "Nmap ("Network Mapper") is a free and open source (license) utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation and response analysis tool (Nping)."
張 旭

Creating Reusable Playbooks - Ansible Documentation - 0 views

  • Ansible pre-processes all static imports during Playbook parsing time
  • Dynamic includes are processed during runtime at the point in which that task is encountered.
  • The advantage of using include* statements is looping: when a loop is used with an include, the included tasks or role will be executed once for each item in the loop (see the sketch after this list).
  • loops cannot be used with imports at all
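
  A minimal sketch of the looping behaviour described above; the task-file name install.yml and the package list are hypothetical:

    # main.yml: a dynamic include is evaluated at runtime, so a loop is allowed
    - name: Install packages one at a time
      include_tasks: install.yml
      loop:
        - git
        - curl
        - htop

    # install.yml: run once per loop item; "item" is available inside the included tasks
    - name: "Install {{ item }}"
      ansible.builtin.package:
        name: "{{ item }}"
        state: present

  An import_tasks statement in the same position would fail, because imports are resolved at parse time and cannot be combined with loop.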
張 旭

Deploy services to a swarm | Docker Documentation - 0 views

  • Swarm services use a declarative model, which means that you define the desired state of the service, and rely upon Docker to maintain this state.
  • To create a single-replica service with no extra configuration, you only need to supply the image name.
  • A service can be in a pending state if its image is unavailable
  • If your image is available on a private registry which requires login, use the --with-registry-auth flag
  • When you update a service, Docker stops its containers and restarts them with the new configuration.
  • When updating an existing service, the flag is --publish-add. There is also a --publish-rm flag to remove a port that was previously published.
  • To update the command an existing service runs, you can use the --args flag.
  • force the service to use a specific version of the image
  • If the manager can’t resolve the tag to a digest, each worker node is responsible for resolving the tag to a digest, and different nodes may use different versions of the image.
  • After you create a service, its image is never updated unless you explicitly run docker service update with the --image flag as described below.
  • When you run service update with the --image flag, the swarm manager queries Docker Hub or your private Docker registry for the digest the tag currently points to and updates the service tasks to use that digest.
  • You can publish a service task’s port directly on the swarm node where that service is running.
  • You can rely on the routing mesh. When you publish a service port, the swarm makes the service accessible at the target port on every node, regardless of whether there is a task for the service running on that node or not.
  • To publish a service’s ports externally to the swarm, use the --publish <PUBLISHED-PORT>:<SERVICE-PORT> flag.
  • published port on every swarm node
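
  The flags quoted above have declarative equivalents in a version 3 stack file; here is a minimal sketch (the service name, image tag, and ports are illustrative, not from the source):

    # stack.yml, deployed with: docker stack deploy -c stack.yml mystack
    version: "3.8"
    services:
      web:
        image: nginx:1.25      # pin a tag so every node resolves a consistent digest
        ports:
          - "8080:80"          # published on every swarm node via the routing mesh
        deploy:
          replicas: 3          # desired state; the swarm keeps three tasks running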
crazylion lee

Bcfg2 - 0 views

  •  
    "Bcfg2 helps system administrators produce a consistent, reproducible, and verifiable description of their environment, and offers visualization and reporting tools to aid in day-to-day administrative tasks. It is the fifth generation of configuration management tools developed in the Mathematics and Computer Science Division of Argonne National Laboratory. "
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file (see the sketch after this list).
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • which files to require in order to build a single CSS or JavaScript file
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • In JavaScript files, Sprockets directives begin with //=
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • Keep in mind the order of these preprocessors is important.
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • By default Rails assumes assets have been precompiled and will be served as static assets by your web server.
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
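
  For reference, the directives discussed above appear in roughly this form in the default app/assets/javascripts/application.js generated by Rails 4 (the exact contents depend on your Gemfile):

    // app/assets/javascripts/application.js
    //= require jquery
    //= require jquery_ujs
    //= require turbolinks
    //= require_tree .

  Directives are processed top to bottom, and require_tree pulls in the remaining files in an unspecified order, so application code should not rely on ordering among them.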
張 旭

Volumes - Kubernetes - 0 views

  • On-disk files in a Container are ephemeral,
  • when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state
  • In Docker, a volume is simply a directory on disk or in another Container.
  • A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it.
  • a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts.
    • 張 旭
       
      A Kubernetes Volume's lifetime follows the Pod's life cycle.
  • Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
  • To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
  • A process in a container sees a filesystem view composed from their Docker image and volumes.
  • Volumes can not mount onto other volumes or have hard links to other volumes.
  • Each Container in the Pod must independently specify where to mount each volume
  • local
  • nfs
  • cephfs
  • awsElasticBlockStore
  • glusterfs
  • vsphereVolume
  • An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS Volume into your Pod.
  • the contents of an EBS volume are preserved and the volume is merely unmounted.
  • an EBS volume can be pre-populated with data, and that data can be “handed off” between Pods.
  • create an EBS volume using aws ec2 create-volume
  • the nodes on which Pods are running must be AWS EC2 instances
  • EBS only supports a single EC2 instance mounting a volume
  • check that the size and EBS volume type are suitable for your use!
  • A cephfs volume allows an existing CephFS volume to be mounted into your Pod.
  • the contents of a cephfs volume are preserved and the volume is merely unmounted.
    • 張 旭
       
      Equivalent to a self-hosted AWS EBS.
  • CephFS can be mounted by multiple writers simultaneously.
  • have your own Ceph server running with the share exported
  • configMap
  • The configMap resource provides a way to inject configuration data into Pods
  • When referencing a configMap object, you can simply provide its name in the volume to reference it
  • volumeMounts:
      - name: config-vol
        mountPath: /etc/config
    volumes:
      - name: config-vol
        configMap:
          name: log-config
          items:
            - key: log_level
              path: log_level
  • create a ConfigMap before you can use it.
  • A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
  • An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
  • When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
  • By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
  • you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
  • volumeMounts:
      - mountPath: /cache
        name: cache-volume
    volumes:
      - name: cache-volume
        emptyDir: {}
  • An fc volume allows an existing fibre channel volume to be mounted in a Pod.
  • configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
  • Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
  • emptyDir
  • flocker
  • A flocker volume allows a Flocker dataset to be mounted into a Pod
  • have your own Flocker installation running
  • A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod.
  • Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1
  • A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
  • have your own GlusterFS installation running
  • A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
  • a powerful escape hatch for some applications
  • access to Docker internals; use a hostPath of /var/lib/docker
  • allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
  • specify a type for a hostPath volume
  • the files or directories created on the underlying hosts are only writable by root.
  • hostPath:
      path: /data        # directory location on host
      type: Directory    # this field is optional
  • An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.
  • have your own iSCSI server running
  • A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously.
  • A local volume represents a mounted local storage device such as a disk, partition or directory.
  • Local volumes can only be used as a statically created PersistentVolume.
  • Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
  • If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
  • PersistentVolume spec using a local volume and nodeAffinity (see the sketch after this list)
  • PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
  • PersistentVolume volumeMode can now be set to “Block” (instead of the default value “Filesystem”) to expose the local volume as a raw block device.
  • When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer
  • An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod.
  • NFS can be mounted by multiple writers simultaneously.
  • have your own NFS server running with the share exported
  • A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.
  • PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
  • A projected volume maps several existing volume sources into the same directory.
  • All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
  • Each projected volume source is listed in the spec under sources
  • A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
  • RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed
  • A secret volume is used to pass sensitive information, such as passwords, to Pods
  • store secrets in the Kubernetes API and mount them as files for use by Pods
  • secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.
  • create a secret in the Kubernetes API before you can use it
  • A Container using a Secret as a subPath volume mount will not receive Secret updates.
  • StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster.
  • Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
  • StorageOS provides block storage to Containers, accessible via a file system.
  • A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
  • supports both VMFS and VSAN datastore.
  • create VMDK using one of the following methods before using with Pod.
  • share one volume for multiple uses in a single Pod.
  • The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.
  • volumeMounts:
      - name: workdir1
        mountPath: /logs
        subPathExpr: $(POD_NAME)
  • env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
  • Use the subPathExpr field to construct subPath directory names from Downward API environment variables
  • enable the VolumeSubpathEnvExpansion feature gate
  • The subPath and subPathExpr properties are mutually exclusive.
  • There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
  • emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
  • the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
  • all volume plugins (like volume types listed above) were “in-tree” meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API.
  • Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
  • Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
  • The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
  • This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
  • In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the “Types of Volumes” section above.
  • Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
  • Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts.
  • HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
  • Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
  • Edit your Docker’s systemd service file. Set MountFlags as follows: MountFlags=shared
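
  A minimal sketch of the local PersistentVolume with nodeAffinity mentioned in the list above (capacity, path, and node name are hypothetical):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage   # pair with a StorageClass whose volumeBindingMode is WaitForFirstConsumer
      local:
        path: /mnt/disks/ssd1
      nodeAffinity:                     # required for local volumes; pins the PV to a node
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - example-node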
張 旭

Extend the Kubernetes API with CustomResourceDefinitions | Kubernetes - 0 views

  • When you create a new CustomResourceDefinition (CRD), the Kubernetes API Server creates a new RESTful resource path for each version you specify.
  • The CRD can be either namespaced or cluster-scoped, as specified in the CRD's scope field
  • deleting a namespace deletes all custom objects in that namespace.
  • CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces.
  • Custom objects can contain custom fields. These fields can contain arbitrary JSON.
  • When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it
  • CustomResourceDefinitions store validated resource data in the cluster's persistence store, etcd.
  • By default, all unspecified fields for a custom resource, across all versions, are pruned.
  • The field json can store any JSON value, without anything being pruned.
  • Finalizers allow controllers to implement asynchronous pre-delete hooks.
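
  A minimal CRD sketch illustrating the scope and versions fields described above (the group, kind, and schema are hypothetical):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.example.com   # must be <plural>.<group>
    spec:
      group: example.com
      scope: Namespaced            # or Cluster
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
        shortNames:
          - ct
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    cronSpec:
                      type: string
                    replicas:
                      type: integer

  Once applied, the API server serves the new resource under /apis/example.com/v1/namespaces/<namespace>/crontabs.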
張 旭

Kubernetes Components | Kubernetes - 0 views

  • A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications
  • Every cluster has at least one worker node.
  • The control plane manages the worker nodes and the Pods in the cluster.
  • The control plane's components make global decisions about the cluster
  • Control plane components can be run on any machine in the cluster.
  • for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine
  • The API server is the front end for the Kubernetes control plane.
  • kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
  • Kubernetes cluster uses etcd as its backing store, make sure you have a back up plan for those data.
  • watches for newly created Pods with no assigned node, and selects a node for them to run on.
  • Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
  • each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
  • Node controller
  • Job controller
  • Endpoints controller
  • Service Account & Token controllers
  • The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.
  • If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
  • An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
  • The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
  • The kubelet doesn't manage containers which were not created by Kubernetes.
  • kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
  • kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
  • kube-proxy uses the operating system packet filtering layer if there is one and it's available.
  • Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
  • Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features
  • namespaced resources for addons belong within the kube-system namespace.
  • all Kubernetes clusters should have cluster DNS,
  • Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
  • Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  • Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
  • A cluster-level logging mechanism is responsible for saving container logs to a central log store with search/browsing interface.
張 旭

Production environment | Kubernetes - 0 views

  • to promote an existing cluster for production use
  • Separating the control plane from the worker nodes.
  • Having enough worker nodes available
  • You can use role-based access control (RBAC) and other security mechanisms to make sure that users and workloads can get access to the resources they need, while keeping workloads, and the cluster itself, secure. You can set limits on the resources that users and workloads can access by managing policies and container resources.
  • you need to plan how to scale to relieve increased pressure from more requests to the control plane and worker nodes or scale down to reduce unused resources.
  • Managed control plane: Let the provider manage the scale and availability of the cluster's control plane, as well as handle patches and upgrades.
  • The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine.
  • You can deploy a control plane using tools such as kubeadm, kops, and kubespray.
  • Secure communications between control plane services are implemented using certificates.
  • Certificates are automatically generated during deployment or you can generate them using your own certificate authority.
  • Separate and backup etcd service: The etcd services can either run on the same machines as other control plane services or run on separate machines
  • Create multiple control plane systems: For high availability, the control plane should not be limited to a single machine
  • Some deployment tools set up the Raft consensus algorithm to do leader election of Kubernetes services. If the primary goes away, another service elects itself and takes over.
  • Groups of zones are referred to as regions.
  • if you installed with kubeadm, there are instructions to help you with Certificate Management and Upgrading kubeadm clusters.
  • Production-quality workloads need to be resilient and anything they rely on needs to be resilient (such as CoreDNS).
  • Add nodes to the cluster: If you are managing your own cluster you can add nodes by setting up your own machines and either adding them manually or having them register themselves to the cluster’s apiserver.
  • Set up node health checks: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy.
  • Authentication: The apiserver can authenticate users using client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.
  • Authorization: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization.
  • Role-based access control (RBAC): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole). See the sketch after this list.
  • Attribute-based access control (ABAC): Lets you create policies based on resource attributes in the cluster and will allow or deny access based on those attributes.
  • Set limits on workload resources
  • Set namespace limits: Set per-namespace quotas on things like memory and CPU
  • Prepare for DNS demand: If you expect workloads to massively scale up, your DNS service must be ready to scale up as well.
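
  A minimal sketch of a namespace-scoped Role and its RoleBinding, as referenced in the RBAC bullet above (the namespace, role name, and user are hypothetical):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: team-a
      name: pod-reader
    rules:
      - apiGroups: [""]            # "" means the core API group
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: team-a
      name: read-pods
    subjects:
      - kind: User
        name: jane
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io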
張 旭

Building a RESTful API in a Rails application - 0 views

  • designing and implementing a REST API in an intentionally simplistic task management web application, and will cover some best practices to ensure maintainability of the code.
  • each individual request should have no context of the requests that came before it.
  • each request that modifies the database should act on one and only one row of one and only one table
  • The resource endpoints should return representations of the resource as data, usually XML or JSON.
  • POST for create, PUT for update, PATCH for upsert (update and insert).
  • an existing API should never be modified, except for critical bugfixes
  • Rather than changing existing endpoints, expose a new version
  • using unique database ids in the route chain allows users to access short routes, and simplifies resource lookup
  • while exposing internal database ids to the consumer and requiring the consumer to maintain a reference to ids on their end
  • The downfall is longer nested routes
  • require reauthentication on a per-request level
  • Devise.secure_compare helps avoid timing attacks
  • Defensive programming is a software design principle that dictates that a piece of software should be designed to continue functioning in unforeseen circumstances.
張 旭

Serverless Architectures - 0 views

  • Serverless was first used to describe applications that significantly or fully depend on 3rd party applications / services (‘in the cloud’) to manage server-side logic and state.
  • ‘rich client’ applications (think single page web apps, or mobile apps) that use the vast ecosystem of cloud accessible databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito), etc.
  • ‘(Mobile) Backend as a Service’
  • Serverless can also mean applications where some amount of server-side logic is still written by the application developer but unlike traditional architectures is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party.
  • ‘Functions as a service
  • AWS Lambda is one of the most popular implementations of FaaS at present,
  • A good example is Auth0 - they started initially with BaaS ‘Authentication as a Service’, but with Auth0 Webtask they are entering the FaaS space.
  • a typical ecommerce app
  • a backend data-processing service
  • with zero administration.
  • FaaS offerings do not require coding to a specific framework or library.
  • Horizontal scaling is completely automatic, elastic, and managed by the provider
  • Functions in FaaS are triggered by event types defined by the provider.
  • a FaaS-supported message broker
  • from a deployment-unit point of view FaaS functions are stateless.
  • allowed the client direct access to a subset of our database
  • deleted the authentication logic in the original application and have replaced it with a third party BaaS service
  • The client is in fact well on its way to becoming a Single Page Application.
  • implement a FaaS function that responds to http requests via an API Gateway
  • port the search code from the Pet Store server to the Pet Store Search function
  • replaced a long lived consumer application with a FaaS function that runs within the event driven context
  • server applications - is a key difference when comparing with other modern architectural trends like containers and PaaS
  • the only code that needs to change when moving to FaaS is the ‘main method / startup’ code, in that it is deleted, and likely the specific code that is the top-level message handler (the ‘message listener interface’ implementation), but this might only be a change in method signature
  • With FaaS you need to write the function ahead of time to assume parallelism
  • Most providers also allow functions to be triggered as a response to inbound http requests, typically in some kind of API gateway
  • you should assume that for any given invocation of a function none of the in-process or host state that you create will be available to any subsequent invocation.
  • FaaS functions are either naturally stateless
  • store state across requests or for further input to handle a request.
  • certain classes of long lived task are not suited to FaaS functions without re-architecture
  • if you were writing a low-latency trading application you probably wouldn’t want to use FaaS systems at this time
  • An API Gateway is an HTTP server where routes / endpoints are defined in configuration and each route is associated with a FaaS function.
  • API Gateway will allow mapping from http request parameters to inputs arguments for the FaaS function
  • API Gateways may also perform authentication, input validation, response code mapping, etc.
  • the Serverless Framework makes working with API Gateway + Lambda significantly easier than using the first principles provided by AWS.
  • Apex - a project to ‘Build, deploy, and manage AWS Lambda functions with ease.'
  • 'Serverless' to mean the union of a couple of other ideas - 'Backend as a Service' and 'Functions as a Service'.
張 旭

Rails Database Best Practices - 0 views

  • Databases are extremely feature rich and are really freakin fast when used properly
  • create succinct helpers for accessing subsets of data that are relevant in specific situations
  • Relations are chainable
  • Return an ActiveRecord::Relation
  • Filtering in Ruby is slower
  • Please don't do this
  • trigger the query and therefore, we lose our Relation
  • leaving trivial ordering out of scopes altogether.
  • where
  • where
  • .merge() makes it easy to use scopes from other models that have been joined into the query, reducing potential duplication.
  • ActiveRecord provides an easy API for doing many things with our database, but it also makes it pretty easy to do things inefficiently. The layer of abstraction hides what’s really happening.
  • first pure SQL, then ActiveRecord
  • Databases can only do fast lookups for columns with indexes, otherwise it’s doing a sequential scan
  • Add an index on every id column as well as any column that is used in a where clause.
  • use a Query class to encapsulate the potentially gnarly query.
  • subqueries
  • this Query returns an ActiveRecord::Relation
  • where
  • where
  • Single Responsibility Principle
  • Avoid ad-hoc queries outside of Scopes and Query Objects
  • encapsulate data access into scopes and Query objects
  • An ad-hoc query embedded in a controller (or view, task, etc) is harder to test in isolation and cannot be reused
  • to scopes and Query objects
    • 張 旭
       
      Encapsulate all queries as scopes or Query objects.
  • Every database provides more datatypes than your ORM might have you believe
  • Both Postgres and MySQL have full-text search capabilities
張 旭

Getting Started with Rails - Ruby on Rails Guides - 0 views

  • A controller's purpose is to receive specific requests for the application.
  • Routing decides which controller receives which requests
  • The view should just display that information
  • view templates are written in a language called ERB (Embedded Ruby) which is converted by the request cycle in Rails before being sent to the user.
  • Each action's purpose is to collect information to provide it to a view.
  • A view's purpose is to display this information in a human readable format.
  • routing file which holds entries in a special DSL (domain-specific language) that tells Rails how to connect incoming requests to controllers and actions.
  • You can create, read, update and destroy items for a resource and these operations are referred to as CRUD operations
  • A controller is simply a class that is defined to inherit from ApplicationController.
  • If not found, then it will attempt to load a template called application/new. It looks for one here because the PostsController inherits from ApplicationController
  • :formats specifies the format of template to be served in response. The default format is :html, and so Rails is looking for an HTML template.
  • :handlers, is telling us what template handlers could be used to render our template.
  • When you call form_for, you pass it an identifying object for this form. In this case, it's the symbol :post. This tells the form_for helper what this form is for.
  • that the action attribute for the form is pointing at /posts/new
  • When a form is submitted, the fields of the form are sent to Rails as parameters.
  • parameters can then be referenced inside the controller actions, typically to perform a particular task
  • params method is the object which represents the parameters (or fields) coming in from the form.
  • Active Record is smart enough to automatically map column names to model attributes,
  • Rails uses rake commands to run migrations, and it's possible to undo a migration after it's been applied to your database
  • every Rails model can be initialized with its respective attributes, which are automatically mapped to the respective database columns.
  • migration creates a method named change which will be called when you run this migration.
  • The action defined in this method is also reversible, which means Rails knows how to reverse the change made by this migration, in case you want to reverse it later
  • Migration filenames include a timestamp to ensure that they're processed in the order that they were created.
  • @post.save returns a boolean indicating whether the model was saved or not.
  • prevents an attacker from setting the model's attributes by manipulating the hash passed to the model.
  • If you want to link to an action in the same controller, you don't need to specify the :controller option, as Rails will use the current controller by default.
  • inherits from ActiveRecord::Base
  • Active Record supplies a great deal of functionality to your Rails models for free, including basic database CRUD (Create, Read, Update, Destroy) operations, data validation, as well as sophisticated search support and the ability to relate multiple models to one another.
  • Rails includes methods to help you validate the data that you send to models
  • Rails can validate a variety of conditions in a model, including the presence or uniqueness of columns, their format, and the existence of associated objects.
  • redirect_to will tell the browser to issue another request.
  • rendering is done within the same request as the form submission
  • Each request for a comment has to keep track of the post to which the comment is attached, thus the initial call to the find method of the Post model to get the post in question.
  • pluralize is a rails helper that takes a number and a string as its arguments. If the number is greater than one, the string will be automatically pluralized.
  • The render method is used so that the @post object is passed back to the new template when it is rendered.
  • The method: :patch option tells Rails that we want this form to be submitted via the PATCH HTTP method which is the HTTP method you're expected to use to update resources according to the REST protocol.
  • it accepts a hash containing the attributes that you want to update.
  • field_with_errors. You can define a css rule to make them standout
  • belongs_to :post, which sets up an Active Record association
  • creates comments as a nested resource within posts
  • call destroy on Active Record objects when you want to delete them from the database.
  • Rails allows you to use the dependent option of an association to achieve this.
  • store all external data as UTF-8
  • you're better off ensuring that all external data is UTF-8
  • use UTF-8 as the internal storage of your database
  • Rails defaults to converting data from your database into UTF-8 at the boundary.
  • :patch
  • By default forms built with the form_for helper are sent via POST
  • The :method and :'data-confirm' options are used as HTML5 attributes so that when the link is clicked, Rails will first show a confirm dialog to the user, and then submit the link with method delete. This is done via the JavaScript file jquery_ujs which is automatically included into your application's layout (app/views/layouts/application.html.erb) when you generated the application.
  • Without this file, the confirmation dialog box wouldn't appear.
  • just defines the partial template we want to render
  • As the render method iterates over the @post.comments collection, it assigns each comment to
  • a local variable named the same as the partial
  • use the authentication system
  • require and permit
  • the method is often made private to make sure it can't be called outside its intended context.
  • standard CRUD actions in each controller in the following order: index, show, new, edit, create, update and destroy.
  • must be placed before any private or protected method in the controller in order to work
張 旭

Use swarm mode routing mesh | Docker Documentation - 0 views

  • Docker Engine swarm mode makes it easy to publish ports for services to make them available to resources outside the swarm.
  • All nodes participate in an ingress routing mesh.
  • routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node.
  • Port 7946 TCP/UDP for container network discovery
  • Port 4789 UDP for the container ingress network.
  • When you access port 8080 on any node, the swarm load balancer routes your request to an active container.
  • The routing mesh listens on the published port for any IP address assigned to the node.
  • publish a port for an existing service
  • To use an external load balancer without the routing mesh, set --endpoint-mode to dnsrr instead of the default value of vip
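
  In a version 3 compose/stack file, the published port and the dnsrr endpoint mode mentioned above look roughly like this (the service name and ports are illustrative):

    version: "3.8"
    services:
      web:
        image: nginx:1.25
        ports:
          - target: 80         # container port
            published: 8080    # reachable on this port on every swarm node
            protocol: tcp
            mode: ingress      # "host" publishes directly on the node, bypassing the routing mesh
        deploy:
          endpoint_mode: vip   # switch to "dnsrr" when fronting the service with an external load balancer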
張 旭

Boosting your kubectl productivity ♦︎ Learnk8s - 0 views

  • kubectl is your cockpit to control Kubernetes.
  • kubectl is a client for the Kubernetes API
  • Kubernetes API is an HTTP REST API.
  • This API is the real Kubernetes user interface.
  • Kubernetes is fully controlled through this API
  • every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint.
  • the main job of kubectl is to carry out HTTP requests to the Kubernetes API
  • Kubernetes maintains an internal state of resources, and all Kubernetes operations are CRUD operations on these resources.
  • Kubernetes is a fully resource-centred system
  • Kubernetes API reference is organised as a list of resource types with their associated operations.
  • This is how kubectl works for all commands that interact with the Kubernetes cluster.
  • kubectl simply makes HTTP requests to the appropriate Kubernetes API endpoints.
  • it's totally possible to control Kubernetes with a tool like curl by manually issuing HTTP requests to the Kubernetes API.
  • Kubernetes consists of a set of independent components that run as separate processes on the nodes of a cluster.
  • components on the master nodes
  • Storage backend: stores resource definitions (usually etcd is used)
  • API server: provides Kubernetes API and manages storage backend
  • Controller manager: ensures resource statuses match specifications
  • Scheduler: schedules Pods to worker nodes
  • component on the worker nodes
  • Kubelet: manages execution of containers on a worker node
  • triggers the ReplicaSet controller, which is a sub-process of the controller manager.
  • the scheduler, who watches for Pod definitions that are not yet scheduled to a worker node.
  • creating and updating resources in the storage backend on the master node.
  • The kubelet of the worker node your ReplicaSet Pods have been scheduled to instructs the configured container runtime (which may be Docker) to download the required container images and run the containers.
  • Kubernetes components (except the API server and the storage backend) work by watching for resource changes in the storage backend and manipulating resources in the storage backend.
  • However, these components do not access the storage backend directly, but only through the Kubernetes API.
    • 張 旭
       
      Elegant: the components all talk to each other through API calls, which is good microservice behaviour.
  • double usage of the Kubernetes API for internal components as well as for external users is a fundamental design concept of Kubernetes.
  • All other Kubernetes components and users read, watch, and manipulate the state (i.e. resources) of Kubernetes through the Kubernetes API
  • The storage backend stores the state (i.e. resources) of Kubernetes.
  • command completion is a shell feature that works by means of a completion script.
  • A completion script is a shell script that defines the completion behaviour for a specific command. Sourcing a completion script enables completion for the corresponding command.
  • kubectl completion zsh
  • /etc/bash_completion.d directory (create it, if it doesn't exist)
  • source <(kubectl completion bash)
  • source <(kubectl completion zsh)
  • autoload -Uz compinit
    compinit
  • the API reference, which contains the full specifications of all resources.
  • kubectl api-resources
  • displays the resource names in their plural form (e.g. deployments instead of deployment). It also displays the shortname (e.g. deploy) for those resources that have one. Don't worry about these differences. All of these name variants are equivalent for kubectl.
  • .spec
  • custom columns output format comes in. It lets you freely define the columns and the data to display in them. You can choose any field of a resource to be displayed as a separate column in the output
  • kubectl get pods -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'
  • kubectl explain pod.spec.
  • kubectl explain pod.metadata.
  • browse the resource specifications and try it out with any fields you like!
  • JSONPath is a language to extract data from JSON documents (it is similar to XPath for XML).
  • with kubectl explain, only a subset of the JSONPath capabilities is supported
  • Many fields of Kubernetes resources are lists, and this operator allows you to select items of these lists. It is often used with a wildcard as [*] to select all items of the list.
  • kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
  • a Pod may contain more than one container.
  • The availability zones for each node are obtained through the special failure-domain.beta.kubernetes.io/zone label.
  • kubectl get nodes -o yaml
    kubectl get nodes -o json
  • The default kubeconfig file is ~/.kube/config
  • with multiple clusters, then you have connection parameters for multiple clusters configured in your kubeconfig file.
  • Within a cluster, you can set up multiple namespaces (a namespace is kind of "virtual" clusters within a physical cluster)
  • overwrite the default kubeconfig file with the --kubeconfig option for every kubectl command.
  • Namespace: the namespace to use when connecting to the cluster
  • a one-to-one mapping between clusters and contexts.
  • When kubectl reads a kubeconfig file, it always uses the information from the current context.
  • just change the current context in the kubeconfig file
  • to switch to another namespace in the same cluster, you can change the value of the namespace element of the current context
  • kubectl also provides the --cluster, --user, --namespace, and --context options that allow you to overwrite individual elements and the current context itself, regardless of what is set in the kubeconfig file.
  • for switching between clusters and namespaces is kubectx.
  • kubectl config get-contexts
  • just have to download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable (for example, with chmod +x)
  • kubectl proxy
  • kubectl get roles
  • kubectl get pod
  • Kubectl plugins are distributed as simple executable files with a name of the form kubectl-x. The prefix kubectl- is mandatory,
  • To install a plugin, you just have to copy the kubectl-x file to any directory in your PATH and make it executable (for example, with chmod +x)
  • krew itself is a kubectl plugin
  • check out the kubectl-plugins GitHub topic
  • The executable can be of any type, a Bash script, a compiled Go program, a Python script, it really doesn't matter. The only requirement is that it can be directly executed by the operating system.
  • kubectl plugins can be written in any programming or scripting language.
  • you can write more sophisticated plugins with real programming languages, for example, using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.
  • a kubeconfig file consists of a set of contexts
  • changing the current context means changing the cluster, if you have only a single context per cluster.
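
  A stripped-down kubeconfig showing the clusters/users/contexts structure described above (all names and the server URL are hypothetical):

    apiVersion: v1
    kind: Config
    clusters:
      - name: dev-cluster
        cluster:
          server: https://dev.example.com:6443
    users:
      - name: dev-admin
        user:
          client-certificate: admin.crt
          client-key: admin.key
    contexts:
      - name: dev              # a context bundles a cluster, a user, and a namespace
        context:
          cluster: dev-cluster
          user: dev-admin
          namespace: default
    current-context: dev       # kubectl always reads connection parameters from the current context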
張 旭

dry-rb - Home - 0 views

  •  
    "dry-rb is a collection of next-generation Ruby libraries, each intended to encapsulate a common task"
張 旭

The Twelve-Factor App - 0 views

  • The process formation is the array of processes that are used to do the app’s regular business
  • one-off administrative or maintenance tasks for the app
  • One-off admin processes should be run in an identical environment as the regular long-running processes of the app.
  • Admin code must ship with application code to avoid synchronization issues.
  • Twelve-factor strongly favors languages which provide a REPL shell out of the box, and which make it easy to run one-off scripts.
張 旭

Scalable architecture without magic (and how to build it if you're not Google) - DEV Co... - 0 views

  • Don’t mess up write-first and read-first databases.
  • keep them stateless.
  • you should know how to make a scalable setup on bare metal.
  • Different programming languages are for different tasks.
  • Go or C which are compiled to run on bare metal.
  • To run NodeJS on multiple cores, you have to use something like PM2, but to do that you have to keep your code stateless.
  • Python has a very rich and sugary syntax that’s great for working with data while keeping your code small and expressive.
  • SQL is almost always slower than NoSQL
  • databases are often read-first or write-first
  • write-first, just like Cassandra.
  • store all of your data to your databases and leave nothing at backend
  • Functional code is stateless by default
  • It’s better to go for stateless right from the very beginning.
  • deliver exactly the same responses for same requests.
  • Sessions? Store them at Redis and allow all of your servers to access it.
  • Only the first user will trigger a data query, and all others will receive exactly the same data straight from RAM
  • never, never cache user input
  • Only the server output should be cached
  • Varnish is a great cache option that works with HTTP responses, so it may work with any backend.
  • a rate limiter: if not enough time has passed since the last request, the incoming request will be denied.
  • different requests blasting every 10ms can bring your server down
  • Just set up entry relations and allow your database to calculate external keys for you
  • the query planner will always be faster than your backend.
  • Backend should have different responsibilities: hashing, building web pages from data and templates, managing sessions and so on.
  • For anything related to data management or data models, move it to your database as procedures or queries.
  • a distributed database.
  • your code has to be stateless
  • Move anything related to the data to the database.
  • For load-balancing a database, go for cluster.
  • DB is balancing requests, as well as your backend.
  • Users from different continents are separated with DNS.
  • Keep it scalable, keep it stateless.
  •  
    "Don't mess up write-first and read-first databases."
張 旭

Queues - Laravel - The PHP Framework For Web Artisans - 0 views

  • Laravel queues provide a unified API across a variety of different queue backends, such as Beanstalk, Amazon SQS, Redis, or even a relational database.
  • The queue configuration file is stored in config/queue.php
  • a synchronous driver that will execute jobs immediately (for local use)
  • A null queue driver is also included which discards queued jobs.
  • In your config/queue.php configuration file, there is a connections configuration option.
  • any given queue connection may have multiple "queues" which may be thought of as different stacks or piles of queued jobs.
  • each connection configuration example in the queue configuration file contains a queue attribute.
  • if you dispatch a job without explicitly defining which queue it should be dispatched to, the job will be placed on the queue that is defined in the queue attribute of the connection configuration
  • pushing jobs to multiple queues can be especially useful for applications that wish to prioritize or segment how jobs are processed
  • specify which queues it should process by priority.
  • If your Redis queue connection uses a Redis Cluster, your queue names must contain a key hash tag.
  • ensure all of the Redis keys for a given queue are placed into the same hash slot
  • all of the queueable jobs for your application are stored in the app/Jobs directory.
  • Job classes are very simple, normally containing only a handle method which is called when the job is processed by the queue.
  • we were able to pass an Eloquent model directly into the queued job's constructor. Because of the SerializesModels trait that the job is using, Eloquent models will be gracefully serialized and unserialized when the job is processing.
  • When the job is actually handled, the queue system will automatically re-retrieve the full model instance from the database.
  • The handle method is called when the job is processed by the queue
  • The arguments passed to the dispatch method will be given to the job's constructor
  • delay the execution of a queued job, you may use the delay method when dispatching a job.
  • dispatch a job immediately (synchronously), you may use the dispatchNow method.
  • When using this method, the job will not be queued and will be run immediately within the current process
  • specify a list of queued jobs that should be run in sequence.
  • Deleting jobs using the $this->delete() method will not prevent chained jobs from being processed. The chain will only stop executing if a job in the chain fails.
  • this does not push jobs to different queue "connections" as defined by your queue configuration file, but only to specific queues within a single connection.
  • To specify the queue, use the onQueue method when dispatching the job
  • To specify the connection, use the onConnection method when dispatching the job
  • defining the maximum number of attempts on the job class itself.
  • to defining how many times a job may be attempted before it fails, you may define a time at which the job should timeout.
  • using the funnel method, you may limit jobs of a given type to only be processed by one worker at a time
  • using the throttle method, you may throttle a given type of job to only run 10 times every 60 seconds.
  • If an exception is thrown while the job is being processed, the job will automatically be released back onto the queue so it may be attempted again.
  • dispatch a Closure. This is great for quick, simple tasks that need to be executed outside of the current request cycle
  • When dispatching Closures to the queue, the Closure's code contents is cryptographically signed so it can not be modified in transit.
  • Laravel includes a queue worker that will process new jobs as they are pushed onto the queue.
  • once the queue:work command has started, it will continue to run until it is manually stopped or you close your terminal
  • queue workers are long-lived processes and store the booted application state in memory.
  • they will not notice changes in your code base after they have been started.
  • during your deployment process, be sure to restart your queue workers.
  • customize your queue worker even further by only processing particular queues for a given connection
  • The --once option may be used to instruct the worker to only process a single job from the queue
  • The --stop-when-empty option may be used to instruct the worker to process all jobs and then exit gracefully.
  • Daemon queue workers do not "reboot" the framework before processing each job.
  • you should free any heavy resources after each job completes.
  • Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted.
  • restart the workers during your deployment process.
  • php artisan queue:restart
  • The queue uses the cache to store restart signals
  • the queue workers will die when the queue:restart command is executed, you should be running a process manager such as Supervisor to automatically restart the queue workers.
  • each queue connection defines a retry_after option. This option specifies how many seconds the queue connection should wait before retrying a job that is being processed.
  • The --timeout option specifies how long the Laravel queue master process will wait before killing off a child queue worker that is processing a job.
  • When jobs are available on the queue, the worker will keep processing jobs with no delay in between them.
  • While sleeping, the worker will not process any new jobs - the jobs will be processed after the worker wakes up again
  • the numprocs directive will instruct Supervisor to run 8 queue:work processes and monitor all of them, automatically restarting them if they fail.
  • Laravel includes a convenient way to specify the maximum number of times a job should be attempted.
  • define a failed method directly on your job class, allowing you to perform job specific clean-up when a failure occurs.
  • a great opportunity to notify your team via email or Slack.
  • php artisan queue:retry all
  • php artisan queue:flush
  • When injecting an Eloquent model into a job, it is automatically serialized before being placed on the queue and restored when the job is processed