Larvata: Group items tagged ios

張 旭

An App's Brief Journey from Source to Image · Cloud Native Buildpack Document... - 0 views

  • A buildpack’s job is to gather everything your app needs to build and run, and it often does this job quickly and quietly.
  • a platform sequentially tests groups of buildpacks against your app’s source code.
  • transforming your source code into a runnable app image.
  • Detection criteria are specific to each buildpack – for instance, an NPM buildpack might look for a package.json, and a Go buildpack might look for Go source files.
  • A builder is essentially an image containing buildpacks.
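A minimal sketch of this flow using the pack CLI (the app name, builder image, and path are illustrative assumptions, not from the article):

    # detection runs each buildpack group against the source directory, then
    # the winning group builds the runnable app image
    pack build my-app --builder paketobuildpacks/builder:base --path .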
張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set, the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • The helm list command shows a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources.
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us to skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • To switch from the default backend to the secrets backend, you’ll have to do the migration on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and uses its content as a value.
  • inject multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Because Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can roll back a deleted release and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell, so you can’t one-line a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s ability to install resources to certain namespaces, or grant a Helm client running in a pod access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • suitable only for a cluster that is well-secured in a private network, with no data sharing and no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employs a role-based access control (RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
張 旭

vSphere Cloud Provider | vSphere Storage for Kubernetes - 0 views

  • Containers are stateless and ephemeral but applications are stateful and need persistent storage.
  • Cloud Provider
  • Kubernetes cloud providers are an interface to integrate various nodes (i.e. hosts), load balancers, and network routes
  • VMware offers a Cloud Provider known as the vSphere Cloud Provider (VCP) for Kubernetes which allows Pods to use enterprise grade persistent storage.
  • A vSphere datastore is an abstraction which hides storage details (such as LUNs) and provides a uniform interface for storing persistent data.
  • the datastores can be of type vSAN, VMFS, NFS, or VVol.
  • VMFS (Virtual Machine File System) is a cluster file system that allows virtualization to scale beyond a single node for multiple VMware ESX servers.
  • NFS (Network File System) is a distributed file protocol for accessing storage over the network as if it were local storage.
  • vSphere Cloud Provider supports every storage primitive exposed by Kubernetes
  • Kubernetes PVs are defined in Pod specifications.
  • PVCs when using Dynamic Provisioning (preferred).
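A minimal sketch of the preferred dynamic-provisioning path with the vSphere Cloud Provider (the StorageClass name, claim name, size, and thin diskformat are illustrative):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: vsphere-fast
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      diskformat: thin    # thin-provisioned VMDK on the backing datastore
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: app-data
    spec:
      storageClassName: vsphere-fast
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi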
張 旭

Modules - Configuration Language - Terraform by HashiCorp - 0 views

  • While provider blocks can appear in any module, it is recommended that they be placed only in the root module of a configuration
  • In all cases it is recommended to keep explicit provider configurations only in the root module and pass them (whether implicitly or explicitly) down to descendant modules
  • Provider configurations are used for all operations on associated resources, including destroying remote objects and refreshing state.
  • all resources created for a particular provider configuration must be destroyed before that provider configuration is removed, unless the related resources are re-configured to use a different provider configuration first.
  • a child module automatically inherits default (un-aliased) provider configurations from its parent.
  • recommended in the common case where only a single configuration is needed for each provider across the entire configuration.
  • the providers argument within a module block can be used to define explicitly which provider configs are made available to the child module.
  • Once the providers argument is used in a module block, it overrides all of the default inheritance behavior, so it is necessary to enumerate mappings for all of the required providers.
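A minimal sketch of the explicit providers mapping described above (the alias, regions, and module path are hypothetical):

    # root module: two configurations of the same provider
    provider "aws" {
      region = "us-east-1"
    }

    provider "aws" {
      alias  = "west"
      region = "us-west-2"
    }

    module "network" {
      source = "./modules/network"

      # explicitly hand the aliased configuration to the child module;
      # once this argument is used, default inheritance no longer applies
      providers = {
        aws = aws.west
      }
    }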
張 旭

How to create reusable infrastructure with Terraform modules - 0 views

  • auto scaling schedule
  • The easiest way to create a versioned module is to put the code for the module in a separate Git repository and to set the source parameter to that repository’s URL.
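A minimal sketch of consuming such a versioned module (the repository URL and tag are hypothetical):

    module "webserver_cluster" {
      # pin to a tagged release so consumers upgrade deliberately
      source = "github.com/example-org/modules//webserver-cluster?ref=v0.0.1"
    }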
張 旭

Running Terraform in Automation | Terraform - HashiCorp Learn - 0 views

  • In default usage, terraform init downloads and installs the plugins for any providers used in the configuration automatically, placing them in a subdirectory of the .terraform directory.
  • allows each configuration to potentially use different versions of plugins.
  • In automation environments, it can be desirable to disable this behavior and instead provide a fixed set of plugins already installed on the system where Terraform is running. This then avoids the overhead of re-downloading the plugins on each execution
  • the desire for an interactive approval step between plan and apply.
  • terraform init -input=false to initialize the working directory.
  • terraform plan -out=tfplan -input=false to create a plan and save it to the local file tfplan.
  • terraform apply -input=false tfplan to apply the plan stored in the file tfplan.
  • If the environment variable TF_IN_AUTOMATION is set to any non-empty value, Terraform makes some minor adjustments to its output to de-emphasize specific commands to run.
  • it can be difficult or impossible to ensure that the plan and apply subcommands are run on the same machine, in the same directory, with all of the same files present.
  • to allow only one plan to be outstanding at a time.
  • forcing plans to be approved (or dismissed) in sequence
  • -auto-approve
  • The -auto-approve option tells Terraform not to require interactive approval of the plan before applying it.
  • obtain the archive created in the previous step and extract it at the same absolute path. This re-creates everything that was present after plan, avoiding strange issues where local files were created during the plan step.
  • a "build artifact"
張 旭

State: Workspaces - Terraform by HashiCorp - 0 views

  • The persistent data stored in the backend belongs to a workspace.
  • Certain backends support multiple named workspaces, allowing multiple states to be associated with a single configuration.
  • Terraform starts with a single workspace named "default". This workspace is special both because it is the default and also because it cannot ever be deleted.
  • Within your Terraform configuration, you may include the name of the current workspace using the ${terraform.workspace} interpolation sequence.
  • changing behavior based on the workspace.
  • Named workspaces allow conveniently switching between multiple instances of a single configuration within its single backend.
  • A common use for multiple workspaces is to create a parallel, distinct copy of a set of infrastructure in order to test a set of changes before modifying the main production infrastructure.
  • Non-default workspaces are often related to feature branches in version control.
  • Workspaces alone are not a suitable tool for system decomposition, because each subsystem should have its own separate configuration and backend, and will thus have its own distinct set of workspaces.
  • In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams.
  • use one or more re-usable modules to represent the common elements, and then represent each instance as a separate configuration that instantiates those common elements in the context of a different backend.
  • If a Terraform state for one configuration is stored in a remote backend that is accessible to other configurations then terraform_remote_state can be used to directly consume its root module outputs from those other configurations.
  • For server addresses, use a provider-specific resource to create a DNS record with a predictable name and then either use that name directly or use the dns provider to retrieve the published addresses in other configurations.
  • Workspaces are technically equivalent to renaming your state file.
  • using a remote backend instead is recommended when there are multiple collaborators.
張 旭

How to Write a Git Commit Message - 1 views

  • a well-crafted Git commit message is the best way to communicate context about a change to fellow developers (and indeed to their future selves).
  • A diff will tell you what changed, but only the commit message can properly tell you why.
  • a commit message shows whether a developer is a good collaborator
  • a well-cared for log is a beautiful and useful thing
  • Reviewing others’ commits and pull requests becomes something worth doing, and suddenly can be done independently.
  • Understanding why something happened months or years ago becomes not only possible but efficient.
  • how to write an individual commit message.
  • Markup syntax, wrap margins, grammar, capitalization, punctuation.
  • What should it not contain?
  • issue tracking IDs
  • pull request numbers
  • The seven rules of a great Git commit message
  • Use the body to explain what and why vs. how
  • Use the imperative mood in the subject line
  • it’s a good idea to begin the commit message with a single short (less than 50 character) line summarizing the change, followed by a blank line and then a more thorough description.
  • forces the author to think for a moment about the most concise way to explain what’s going on.
  • If you’re having a hard time summarizing, you might be committing too many changes at once.
  • shoot for 50 characters, but consider 72 the hard limit
  • Imperative mood just means “spoken or written as if giving a command or instruction”.
  • Git itself uses the imperative whenever it creates a commit on your behalf.
  • when you write your commit messages in the imperative, you’re following Git’s own built-in conventions.
  • A properly formed Git commit subject line should always be able to complete the following sentence: "If applied, this commit will [your subject line here]."
  • explaining what changed and why
  • Code is generally self-explanatory in this regard (and if the code is so complex that it needs to be explained in prose, that’s what source comments are for).
  • there are tab completion scripts that take much of the pain out of remembering the subcommands and switches.
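Applying the seven rules above, a well-formed message might look like this (an illustrative example, not taken from the article):

    Refactor subsystem X for readability

    Shorten the main loop and extract helper functions so the intent of
    each step is clear. The previous structure mixed parsing with
    validation, which made bugs in either hard to isolate.

The subject completes the test sentence: "If applied, this commit will refactor subsystem X for readability."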
張 旭

The Right Way to Understand Docker Bridge, Tested and Verified | DaoCloud - 1 views

  • Docker bridge and Linux bridge look identical at first glance, but on closer inspection they are quite different
  • In Linux bridge mode, the Linux kernel creates a virtual bridge to enable communication between the host’s network interfaces and virtual network interfaces
  • A Linux bridge acts like a virtual switch
  • The Docker daemon creates a virtual bridge named docker0 to connect the host with containers, and containers with each other
  • The veth pair mechanism guarantees that whichever veth endpoint receives a network packet, it is unconditionally passed to the other endpoint
  • In bridge mode, the Docker daemon attaches veth0 to the docker0 bridge, ensuring that packets from the host can reach veth0
  • veth1 is placed in the Docker container’s network namespace [note 2], ensuring that packets the host sends to veth0 are immediately received by veth1
  • NAPT includes two translation modes: SNAT and DNAT
  • Destination NAT (DNAT): rewrites the destination address of a packet
  • A container’s IP and port are not visible to the outside
  • The packet’s destination address is the host’s IP and port
  • The packet is sent to the veth0 interface attached to the docker0 bridge; veth0 then passes it to the container’s veth1 interface, and the container receives the packet and responds
  • Source NAT (SNAT): rewrites the source address of a packet
  • When the docker0 bridge on the host sees that a packet’s destination address is an external IP and port, it forwards the packet to eth0, from which it is sent out. Because of the SNAT rule, the packet’s source address is rewritten to the host’s IP and port
  • Docker containers are not visible to the outside
  • A veth pair is a mechanism for communication between different network namespaces; it passes data sent in one network namespace to the veth in another
  • A network namespace isolates network resources (/proc/net, IP addresses, network interfaces, routes, etc.)
  • NAT is short for Network Address Translation
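A minimal sketch of observing these pieces on a Docker host (the container name and port mapping are illustrative):

    # publish container port 80 on host port 8080; Docker installs the DNAT rule
    docker run -d --name web -p 8080:80 nginx

    brctl show docker0                 # host-side veth endpoints on the bridge
    iptables -t nat -L DOCKER -n       # DNAT: host 8080 -> container IP:80
    iptables -t nat -L POSTROUTING -n  # SNAT/MASQUERADE for outbound traffic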
張 旭

Rails Environment Variables · RailsApps - 1 views

  • You can pass local configuration settings to an application using environment variables.
  • Operating systems (Linux, Mac OS X, Windows) provide mechanisms to set local environment variables, as do Heroku and other deployment platforms.
  • In general, you shouldn’t save email account credentials or private API keys to a shared git repository.
  • You could “hardcode” your Gmail username and password into the file but that would expose it to everyone who has access to your git repository.
  • It’s important to learn to use the Unix shell if you’re committed to improving your skills as a developer.
  • The gem reads a config/application.yml file and sets environment variables before anything else is configured in the Rails application.
  • make sure this file is listed in the .gitignore file so it isn’t checked into the git repository
  • Rails provides a config.before_configuration
  • # load config/application.yml into ENV before Rails configures itself
    if File.exist?(env_file)
      YAML.load(File.open(env_file)).each do |key, value|
        ENV[key.to_s] = value
      end
    end
  • Heroku is a popular choice for low cost, easily configured Rails application hosting.
  • heroku config:add
  • the dotenv Ruby gem
  • Foreman is a tool for starting and configuring multiple processes in a complex application
張 旭

CertSimple | An nginx config for 2017 - 0 views

  • HAProxy can't terminate an HTTP/2 connection itself.
  • a server OS that includes OpenSSL 1.0.2, in order to have ALPN.
  • a new nginx (anything newer than 1.9.5 supports HTTP/2)
  • we like HTTPS/non-www since HTTPS is needed for current browsers and non-www is short.
  • visit the Mozilla TLS Generator to get the latest cipher suites and TLS versions
  • add the necessary headers for GeoIP and proper logging.
  • HTML5 SSE is simpler than websockets
  • nginx -t
  • Scan your site with SSL Labs scan
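A minimal sketch combining the pieces above (domain and certificate paths are placeholders; generate the actual cipher suites and TLS versions with the Mozilla generator, as suggested):

    # /etc/nginx/conf.d/example.conf
    # redirect HTTP and the www host to HTTPS/non-www
    server {
        listen 80;
        server_name example.com www.example.com;
        return 301 https://example.com$request_uri;
    }

    server {
        listen 443 ssl http2;    # HTTP/2 needs nginx >= 1.9.5 and ALPN
        server_name example.com;
        ssl_certificate     /etc/ssl/certs/example.com.pem;
        ssl_certificate_key /etc/ssl/private/example.com.key;
    }

Then validate with nginx -t before reloading.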
crazylion lee

Cleaver - 0 views

  • "30-second slideshows for hackers"
張 旭

Automated Nginx Reverse Proxy for Docker - 0 views

  • Docker containers are assigned random IPs and ports which makes addressing them much more complicated from a client perspective
  • Binding the container to the hosts port can prevent multiple containers from running on the same host. For example, only one container can bind to port 80 at a time.
  • Docker provides a remote API to inspect containers and access their IP, Ports and other configuration meta-data.
  • An nginx template can be used to generate a reverse proxy configuration for Docker containers, using virtual hosts for routing.
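A minimal sketch of this pattern using the jwilder/nginx-proxy image (assuming that is the tool the article describes; the app image and hostname are placeholders):

    # the proxy watches the Docker API through the mounted socket and
    # regenerates its nginx config as containers start and stop
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

    # a backend announces the virtual host it should be routed by
    docker run -d -e VIRTUAL_HOST=app.example.com my-app-image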
張 旭

A Good Vimrc - 1 views

  • Don't put any lines in your vimrc that you don't understand.
  • the absolute worst way to make your environment better is to just copy it wholesale from others
  • adding features one by one to your vimrc aids in overall Vim comprehension
  • custom keybindings
  • Vim doesn't model undo as a simple stack. In Vim it's a tree.
  • Don't put anything in your .vimrc you don't understand!
crazylion lee

GitHub - tidwall/evio: Fast event-loop networking for Go - 0 views

shared by crazylion lee on 05 Nov 17
  • "evio is an event loop networking framework that is fast and small. It makes direct epoll and kqueue syscalls rather than using the standard Go net package, and works in a similar manner as libuv and libevent."
crazylion lee

Server.js - server.js - 0 views

  • "server.js for Node"
crazylion lee

Uppy - 0 views

shared by crazylion lee on 09 Jan 18
張 旭

Open source load testing tool review 2020 - 0 views

  • Hey is a simple tool, written in Go, with good performance and the most common features you'll need to run simple static URL tests.
  • Hey supports HTTP/2, which neither Wrk nor Apachebench does
  • Apachebench is very fast, so often you will not need more than one CPU core to generate enough traffic
  • Hey has rate limiting, which can be used to run fixed-rate tests.
  • Vegeta was designed to be run on the command line; it reads from stdin a list of HTTP transactions to generate, and sends results in binary format to stdout.
  • Vegeta is a really strong tool that caters to people who want a tool to test simple, static URLs (perhaps API end points) but also want a bit more functionality.
  • Vegeta can even be used as a Golang library/package if you want to create your own load testing tool.
  • Wrk is so damn fast
  • being fast and measuring correctly is about all that Wrk does
  • k6 is scriptable in plain Javascript
  • k6 is average or better. In some categories (documentation, scripting API, command line UX) it is outstanding.
  • Jmeter is a huge beast compared to most other tools.
  • Siege is a simple tool, similar to e.g. Apachebench in that it has no scripting and is primarily used when you want to hit a single, static URL repeatedly.
  • A good way of testing the testing tools is to not test them on your code, but on some third-party thing that is sure to be very high-performing.
  • use a tool like top to keep track of Nginx CPU usage while testing. If you see just one process, and see it using close to 100% CPU, it means you could be CPU-bound on the target side.
  • If you see multiple Nginx processes but only one is using a lot of CPU, it means your load testing tool is only talking to that particular worker process.
  • Network delay is also important to take into account as it sets an upper limit on the number of requests per second you can push through.
  • If, say, the Nginx default page requires a transfer of 250 bytes to load, then with servers connected via a 100 Mbit/s link the theoretical max RPS rate would be around 100,000,000 / 8 (bits per byte) / 250 (bytes per request) = 50,000 RPS. That is a very optimistic calculation, though: protocol overhead will make the actual number a lot lower, so in the case above I would start to worry that bandwidth was an issue if I saw I could push through at most about 30,000 RPS.
  • Wrk managed to push through over 50,000 RPS and that made 8 Nginx workers on the target system consume about 600% CPU.
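A minimal sketch of equivalent single-URL runs with the tools compared above (target URL, rates, and durations are illustrative):

    hey -z 30s -c 50 http://localhost:8080/       # 50 workers for 30 seconds
    wrk -t4 -c128 -d30s http://localhost:8080/    # 4 threads, 128 connections
    echo "GET http://localhost:8080/" | \
      vegeta attack -rate=1000 -duration=30s | vegeta report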