Larvata: Group items tagged HTTP


張 旭

Volumes - Kubernetes - 0 views

  • On-disk files in a Container are ephemeral,
  • when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state
  • In Docker, a volume is simply a directory on disk or in another Container.
  • ...105 more annotations...
  • A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it.
  • a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts.
    • 張 旭
       
      A Kubernetes Volume follows the lifecycle of the Pod that encloses it.
  • Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
  • To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field). See the example Pod spec after this list.
  • A process in a container sees a filesystem view composed from its Docker image and volumes.
  • Volumes can not mount onto other volumes or have hard links to other volumes.
  • Each Container in the Pod must independently specify where to mount each volume
  • local
  • nfs
  • cephfs
  • awsElasticBlockStore
  • glusterfs
  • vsphereVolume
  • An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS Volume into your Pod.
  • the contents of an EBS volume are preserved and the volume is merely unmounted.
  • an EBS volume can be pre-populated with data, and that data can be “handed off” between Pods.
  • create an EBS volume using aws ec2 create-volume
  • the nodes on which Pods are running must be AWS EC2 instances
  • EBS only supports a single EC2 instance mounting a volume
  • check that the size and EBS volume type are suitable for your use!
  • A cephfs volume allows an existing CephFS volume to be mounted into your Pod.
  • the contents of a cephfs volume are preserved and the volume is merely unmounted.
    • 張 旭
       
      Essentially your own self-hosted equivalent of AWS EBS.
  • CephFS can be mounted by multiple writers simultaneously.
  • have your own Ceph server running with the share exported
  • configMap
  • The configMap resource provides a way to inject configuration data into Pods
  • When referencing a configMap object, you can simply provide its name in the volume to reference it
  • volumeMounts:
      - name: config-vol
        mountPath: /etc/config
    volumes:
      - name: config-vol
        configMap:
          name: log-config
          items:
            - key: log_level
              path: log_level
  • create a ConfigMap before you can use it.
  • A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
  • An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
  • When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
  • By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
  • you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
  • volumeMounts:
      - mountPath: /cache
        name: cache-volume
    volumes:
      - name: cache-volume
        emptyDir: {}
  • An fc volume allows an existing fibre channel volume to be mounted in a Pod.
  • configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
  • Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
  • emptyDir
  • flocker
  • A flocker volume allows a Flocker dataset to be mounted into a Pod
  • have your own Flocker installation running
  • A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod.
  • Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1
  • A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
  • have your own GlusterFS installation running
  • A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
  • a powerful escape hatch for some applications
  • access to Docker internals; use a hostPath of /var/lib/docker
  • allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
  • specify a type for a hostPath volume
  • the files or directories created on the underlying hosts are only writable by root.
  • hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
  • An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.
  • have your own iSCSI server running
  • A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously.
  • A local volume represents a mounted local storage device such as a disk, partition or directory.
  • Local volumes can only be used as a statically created PersistentVolume.
  • Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
  • If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
  • PersistentVolume spec using a local volume and nodeAffinity
  • PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
  • PersistentVolume volumeMode can now be set to “Block” (instead of the default value “Filesystem”) to expose the local volume as a raw block device.
  • When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer
  • An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod.
  • NFS can be mounted by multiple writers simultaneously.
  • have your own NFS server running with the share exported
  • A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.
  • PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
  • A projected volume maps several existing volume sources into the same directory.
  • All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
  • Each projected volume source is listed in the spec under sources
  • A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
  • RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed
  • A secret volume is used to pass sensitive information, such as passwords, to Pods
  • store secrets in the Kubernetes API and mount them as files for use by Pods
  • secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.
  • create a secret in the Kubernetes API before you can use it
  • A Container using a Secret as a subPath volume mount will not receive Secret updates.
  • StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster.
  • Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
  • StorageOS provides block storage to Containers, accessible via a file system.
  • A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
  • supports both VMFS and VSAN datastore.
  • create VMDK using one of the following methods before using with Pod.
  • share one volume for multiple uses in a single Pod.
  • The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.
  • volumeMounts:
      - name: workdir1
        mountPath: /logs
        subPathExpr: $(POD_NAME)
  • env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
  • Use the subPathExpr field to construct subPath directory names from Downward API environment variables
  • enable the VolumeSubpathEnvExpansion feature gate
  • The subPath and subPathExpr properties are mutually exclusive.
  • There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
  • emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
  • the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
  • all volume plugins (like volume types listed above) were “in-tree” meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API.
  • Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
  • Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
  • The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
  • This feature requires the CSIInlineVolume feature gate to be enabled: --feature-gates=CSIInlineVolume=true
  • In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the “Types of Volumes” section above.
  • Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
  • Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts.
  • HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
  • Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
  • Edit your Docker’s systemd service file. Set MountFlags as follows: MountFlags=shared
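
A hedged sketch (not taken from the docs quoted above) combining two of the volume types in this list, an emptyDir cache and a configMap volume; the image, names, and the log-config ConfigMap are assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo
    spec:
      containers:
        - name: app
          image: busybox          # placeholder image
          volumeMounts:
            - name: cache-volume
              mountPath: /cache
            - name: config-vol
              mountPath: /etc/config
      volumes:
        - name: cache-volume
          emptyDir: {}            # deleted forever when the Pod leaves the node
        - name: config-vol
          configMap:
            name: log-config      # assumes this ConfigMap already exists
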
張 旭

MongoDB Performance Tuning: Everything You Need to Know - Stackify - 0 views

  • db.serverStatus().globalLock
  • db.serverStatus().locks
  • globalLock.currentQueue.total: This number can indicate a possible concurrency issue if it’s consistently high. This can happen if a lot of requests are waiting for a lock to be released. See the shell sketch after this list.
  • ...35 more annotations...
  • globalLock.totalTime: If this is higher than the total database uptime, the database has been in a lock state for too long.
  • Unlike relational databases such as MySQL or PostgreSQL, MongoDB uses JSON-like documents for storing data.
  • Databases operate in an environment that consists of numerous reads, writes, and updates.
  • When a lock occurs, no other operation can read or modify the data until the operation that initiated the lock is finished.
  • locks.deadlockCount: Number of times the lock acquisitions have encountered deadlocks
  • Is the database frequently locking from queries? This might indicate issues with the schema design, query structure, or system architecture.
  • From version 3.2 on, WiredTiger is the default.
  • MMAPv1 locks whole collections, not individual documents.
  • WiredTiger performs locking at the document level.
  • When the MMAPv1 storage engine is in use, MongoDB will use memory-mapped files to store data.
  • All available memory will be allocated for this usage if the data set is large enough.
  • db.serverStatus().mem
  • mem.resident: Roughly equivalent to the amount of RAM in megabytes that the database process uses
  • If mem.resident exceeds the value of system memory and there’s a large amount of unmapped data on disk, we’ve most likely exceeded system capacity.
  • If the value of mem.mapped is greater than the amount of system memory, some operations will experience page faults.
  • The WiredTiger storage engine is a significant improvement over MMAPv1 in performance and concurrency.
  • By default, MongoDB will reserve 50 percent of the available memory for the WiredTiger data cache.
  • wiredTiger.cache.bytes currently in the cache – This is the size of the data currently in the cache.
  • wiredTiger.cache.tracked dirty bytes in the cache – This is the size of the dirty data in the cache.
  • we can look at the wiredTiger.cache.bytes read into cache value for read-heavy applications. If this value is consistently high, increasing the cache size may improve overall read performance.
  • check whether the application is read-heavy. If it is, increase the size of the replica set and distribute the read operations to secondary members of the set.
  • write-heavy, use sharding within a sharded cluster to distribute the load.
  • Replication is the propagation of data from one node to another
  • Replication sets handle this replication.
  • Sometimes, data isn’t replicated as quickly as we’d like.
  • a particularly thorny problem if the lag between a primary and secondary node is high and the secondary becomes the primary
  • use the db.printSlaveReplicationInfo() or the rs.printSlaveReplicationInfo() command to see the status of a replica set from the perspective of the secondary member of the set.
  • shows how far behind the secondary members are from the primary. This number should be as low as possible.
  • monitor this metric closely.
  • watch for any spikes in replication delay.
  • Always investigate these issues to understand the reasons for the lag.
  • One replica set is primary. All others are secondary.
  • it’s not normal for nodes to change back and forth between primary and secondary.
  • use the profiler to gain a deeper understanding of the database’s behavior.
  • Enabling the profiler can affect system performance, due to the additional activity.
張 旭

Using NGINX Logging for Application Performance Monitoring - 0 views

  • One way of taking advantage of the flexibility of NGINX access logging is application performance monitoring (APM).
  • it’s simple to get detailed visibility into the performance of your applications by adding timing values to your code and passing them as response headers for inclusion in the NGINX access log.
  • $request_time – Full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body
  • ...3 more annotations...
  • $upstream_response_time – Time between establishing a connection to an upstream server and receiving the last byte of the response body
  • capture timings in the application itself and include them as response headers, which NGINX then captures in its access log.
  • $upstream_header_time – Time between establishing a connection to an upstream server and receiving the first byte of the response header
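
A hedged nginx sketch of a log_format built from the three timing variables quoted above, plus an application-supplied response header captured via $upstream_http_*; the X-App-Time header name and the log path are assumptions:

    log_format apm '"$request" $status '
                   'request_time=$request_time '
                   'upstream_header_time=$upstream_header_time '
                   'upstream_response_time=$upstream_response_time '
                   'app_time=$upstream_http_x_app_time';

    server {
        access_log /var/log/nginx/apm.log apm;
        # ...
    }
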
張 旭

What is Kubernetes Ingress? | IBM - 0 views

  • expose an application to the outside of your Kubernetes cluster,
  • ClusterIP, NodePort, LoadBalancer, and Ingress.
  • A service is essentially a frontend for your application that automatically reroutes traffic to available pods in an evenly distributed way.
  • ...23 more annotations...
  • Services are an abstract way of exposing an application running on a set of pods as a network service.
  • Pods are immutable, which means that when they die, they are not resurrected. The Kubernetes cluster creates new pods in the same node or in a new node once a pod dies. 
  • A service provides a single point of access from outside the Kubernetes cluster and allows you to dynamically access a group of replica pods. 
  • For internal application access within a Kubernetes cluster, ClusterIP is the preferred method
  • To expose a service to external network requests, NodePort, LoadBalancer, and Ingress are possible options.
  • Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP. See the example manifest after this list.
  • content-based routing, support for multiple protocols, and authentication.
  • Ingress is made up of an Ingress API object and the Ingress Controller.
  • Kubernetes Ingress is an API object that describes the desired state for exposing services to the outside of the Kubernetes cluster.
  • An Ingress Controller reads and processes the Ingress Resource information and usually runs as pods within the Kubernetes cluster.  
  • If Kubernetes Ingress is the API object that provides routing rules to manage external access to services, Ingress Controller is the actual implementation of the Ingress API.
  • The Ingress Controller is usually a load balancer for routing external traffic to your Kubernetes cluster and is responsible for L4-L7 Network Services. 
  • Layer 7 (L7) refers to the application level of the OSI stack—external connections load-balanced across pods, based on requests.
  • if Kubernetes Ingress is a computer, then Ingress Controller is a programmer using the computer and taking action.
  • Ingress Rules are a set of rules for processing inbound HTTP traffic. An Ingress with no rules sends all traffic to a single default backend service. 
  • the Ingress Controller is an application that runs in a Kubernetes cluster and configures an HTTP load balancer according to Ingress Resources.
  • The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally.
  • ClusterIP is the preferred option for internal service access and uses an internal IP address to access the service
  • A NodePort exposes a service on a static port number on each node (VM) in the cluster.
  • a NodePort would be used to expose a single service (with no load-balancing requirements for multiple services).
  • Ingress enables you to consolidate the traffic-routing rules into a single resource and runs as part of a Kubernetes cluster.
  • An application is accessed from the Internet via Port 80 (HTTP) or Port 443 (HTTPS), and Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster. 
  • To implement Ingress, you need to configure an Ingress Controller in your cluster—it is responsible for processing Ingress Resource information and allowing traffic based on the Ingress Rules.
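
A minimal Ingress manifest sketch illustrating the routing rules described above; it assumes an Ingress Controller is already installed and that a Service named my-service exists:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
        - host: app.example.com        # placeholder host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service   # hypothetical Service
                    port:
                      number: 80
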
張 旭

What is Data Definition Language (DDL) and how is it used? - 1 views

  • Data Definition Language (DDL) is used to create and modify the structure of objects in a database using predefined commands and a specific syntax.
  • DDL includes Structured Query Language (SQL) statements to create and drop databases, aliases, locations, indexes, tables and sequences.
  • Since DDL includes SQL statements to define changes in the database schema, it is considered a subset of SQL.
  • ...6 more annotations...
  • Data Manipulation Language (DML), commands are used to modify data in a database. DML statements control access to the database data.
  • DDL commands are used to create, delete or alter the structure of objects in a database but not its data.
  • DDL deals with descriptions of the database schema and is useful for creating new tables, indexes, sequences, stogroups, etc. and to define the attributes of these objects, such as data type, field length and alternate table names (aliases).
  • Data Query Language (DQL) is used to get data within the schema objects of a database and also to query it and impose order upon it.
  • DQL is also a subset of SQL. One of the most common commands in DQL is SELECT.
  • The most common command types in DDL are CREATE, ALTER and DROP.
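
A short SQL sketch of the three DDL command types just named; the table and column names are made up:

    CREATE TABLE employees (
        id       INT          NOT NULL PRIMARY KEY,
        name     VARCHAR(100) NOT NULL,
        hired_on DATE
    );

    ALTER TABLE employees ADD COLUMN department VARCHAR(50);

    DROP TABLE employees;

    -- by contrast, DML/DQL statements such as INSERT and SELECT touch the data, not the structure
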
張 旭

What ChatOps Solutions Should You Use Today? | PäksTech - 0 views

shared by 張 旭 on 16 Feb 22
  • The big elephant in the room is of course Hubot, which now hasn’t seen new commits in over three years.
  • Botkit bots are written in JavaScript and they run on Node.js
  • Errbot is a chatbot written in Python, it comes with a ton of features, and it is extendable with custom plugins.
  • ...8 more annotations...
  • by default they react to !commands in your chatroom. Commands can also trigger on regular expression matches, with or without a bot prefix.
  • Errbot also supports Markdown responses with Jinja2 templating.
  • Errbot supports webhooks; It has a small web server that can translate endpoints to your custom plugins.
  • It’s recommended that you configure this behind a web server such as nginx or Apache.
  • It works with the If This Then That (IFTTT) principle, meaning that you define a set of rules that the system then uses to take action.
  • Lita is a chat bot written in Ruby. Like the other bots I’ve mentioned, it is also open source and supports different chat platforms via plugins.
  • Gort is a newer entrant to the ChatOps space. As the name suggests it has been written in Go, and it is still under active development.
  • can persist information in databases, supports advanced parsers, and is extendable with custom skills.
張 旭

phusion/baseimage-docker - 1 views

    • 張 旭
       
      By default, when Docker runs a command it treats the given COMMAND as the PID 1 process, and the container exits as soon as that command finishes; other daemons never get to run. Baseimage solves this problem.
    • crazylion lee
       
      Nice work.
  • docker exec
  • Through SSH
  • ...57 more annotations...
  • docker exec -t -i YOUR-CONTAINER-ID bash -l
  • Login to the container
  • Baseimage-docker only advocates running multiple OS processes inside a single container.
  • Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
  • A tool for running a command as another user
  • The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
  • All syslog messages are forwarded to "docker logs".
  • Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
  • Baseimage-docker provides tools to encourage running processes as different users
  • sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
  • Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
  • using environment variables to pass parameters to containers is very much the "Docker way"
  • Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
  • the shell script must run the daemon without letting it daemonize/fork it.
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • variables will also be passed to all child processes
  • Environment variables on Unix are inherited on a per-process basis
  • there is no good central place for defining environment variables for all applications and services
  • centrally defining environment variables
  • One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
  • a one-shot command in a new container
  • immediately exit after the command exits,
  • However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
  • add additional daemons (e.g. your own app) to the image by creating runit entries. See the Dockerfile sketch after this list.
  • Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy
  • Ubuntu is not designed to be run inside Docker
  • According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
  • Syslog-ng seems to be much more stable
  • cron daemon
  • Rotates and compresses logs
  • /sbin/setuser
  • A tool for installing apt packages that automatically cleans up after itself.
  • a single logical service inside a single container
  • A daemon is a program which runs in the background of its system, such as a web server.
  • The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
  • If any script exits with a non-zero exit code, the booting will fail.
  • If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
  • any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
  • not possible for a child process to change the environment variables of other processes
  • they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
  • my_init imports environment variables from the directory /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format.
  • modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
  • my_init only activates changes in /etc/container_environment when running startup scripts
  • environment variables don't contain sensitive data, then you can also relax the permissions
  • Syslog messages are forwarded to the console
  • syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
  • RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
  • /sbin/my_init --skip-startup-files --quiet --
  • By default, no keys are installed, so nobody can login
  • provide a pregenerated, insecure key (PuTTY format)
  • RUN /usr/sbin/enable_insecure_key
  • docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
  • RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
  • The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
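
A hedged sketch of the runit pattern described above; the base-image tag, app name, user, and paths are assumptions rather than text from the README:

    # Dockerfile
    FROM phusion/baseimage:<tag>           # substitute a real tag
    CMD ["/sbin/my_init"]                  # keep the init system as PID 1

    RUN mkdir /etc/service/myapp
    COPY myapp.sh /etc/service/myapp/run   # runsv switches to the directory and invokes ./run
    RUN chmod +x /etc/service/myapp/run

    # myapp.sh (becomes /etc/service/myapp/run)
    #!/bin/sh
    # exec so the daemon, not this shell, receives signals; do not let it daemonize
    exec /sbin/setuser appuser /usr/local/bin/myapp >> /var/log/myapp.log 2>&1
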
張 旭

The Twelve-Factor App - 0 views

  • Keep development, staging, and production as similar as possible
  • Developers write code, ops engineers deploy it.
  • The twelve-factor app is designed for continuous deployment by keeping the gap between development and production small
  • ...4 more annotations...
  • Backing services, such as the app’s database, queueing system, or cache, is one area where dev/prod parity is important
  • The twelve-factor developer resists the urge to use different backing services between development and production, even when adapters theoretically abstract away any differences in backing services.
  • declarative provisioning tools such as Chef and Puppet combined with light-weight virtual environments such as Docker and Vagrant allow developers to run local environments which closely approximate production environments.
  • all deploys of the app (developer environments, staging, production) should be using the same type and version of each of the backing services.
張 旭

HTTPS 升级指南 - 阮一峰的网络日志 - 0 views

  • Domain Validation (DV): the lowest level of validation; it only confirms that the applicant controls the domain.
  • Company (Organization) Validation (OV): confirms which company owns the domain; the certificate includes the company's information.
  • Extended Validation (EV): the highest level of validation; the browser address bar displays the company name.
  • ...8 more annotations...
  • Multi-domain certificates
  • Single-domain certificates
  • Wildcard certificates
  • Add a mandatory declaration (HSTS) to the site's response headers; see the nginx sketch after this list.
  • Strict-Transport-Security: max-age=31536000; includeSubDomains
  • Ensure the browser sends cookies only when HTTPS is in use.
  • Set-Cookie: …; Secure
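
A hedged nginx sketch showing one way to emit the HSTS header quoted above; certificate directives are omitted and the snippet should be adapted to your own server blocks:

    server {
        listen 443 ssl;
        # force HTTPS for one year, including subdomains
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        # ...
    }
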
張 旭

What's the difference between Prometheus and Zabbix? - Stack Overflow - 0 views

  • Zabbix has a core written in C and a web UI based on PHP
  • Zabbix stores data in RDBMS (MySQL, PostgreSQL, Oracle, sqlite) of user's choice.
  • Prometheus uses its own database embedded into backend process
  • ...8 more annotations...
  • Zabbix by default uses a "pull" model: the server connects to agents on each monitored machine, and the agents periodically gather the info and send it to the server.
  • Prometheus prefers a "pull" model, in which the server gathers info from the client machines.
  • Prometheus requires an application to be instrumented with a Prometheus client library (available in different programming languages) to prepare metrics. See the Python sketch after this list.
  • expose metrics for Prometheus (similar to "agents" for Zabbix)
  • Zabbix uses its own tcp-based communication protocol between agents and a server.
  • Prometheus uses HTTP with protocol buffers (+ text format for ease of use with curl).
  • Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is designed to be supported by modern visualization tools like Grafana.
  • Prometheus offers solution for alerting that is separated from its core into Alertmanager application.
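
A minimal sketch using the official prometheus_client Python library to illustrate the instrument-and-pull model described above; the metric name and port are made up:

    from prometheus_client import start_http_server, Counter

    REQUESTS = Counter("myapp_requests_total", "Total requests handled")  # hypothetical metric

    start_http_server(8000)  # exposes /metrics for the Prometheus server to scrape
    REQUESTS.inc()           # call wherever a request is handled
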
張 旭

Rails Environment Variables · RailsApps - 1 views

  • You can pass local configuration settings to an application using environment variables.
  • Operating systems (Linux, Mac OS X, Windows) provide mechanisms to set local environment variables, as does Heroku and other deployment platforms.
  • In general, you shouldn’t save email account credentials or private API keys to a shared git repository.
  • ...10 more annotations...
  • You could “hardcode” your Gmail username and password into the file but that would expose it to everyone who has access to your git repository.
  • It’s important to learn to use the Unix shell if you’re committed to improving your skills as a developer.
  • The gem reads a config/application.yml file and sets environment variables before anything else is configured in the Rails application. See the sketch after this list.
  • make sure this file is listed in the .gitignore file so it isn’t checked into the git repository
  • Rails provides a config.before_configuration
  • YAML.load(File.open(env_file)).each do |key, value|
      ENV[key.to_s] = value
    end if File.exists?(env_file)
  • Heroku is a popular choice for low cost, easily configured Rails application hosting.
  • heroku config:add
  • the dotenv Ruby gem
  • Foreman is a tool for starting and configuring multiple processes in a complex application
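
A hedged sketch of the config/application.yml approach described above; keys and values are placeholders:

    # config/application.yml  (listed in .gitignore, never committed)
    GMAIL_USERNAME: "user@example.com"
    GMAIL_PASSWORD: "not-a-real-password"

    # anywhere in the app after boot:
    # ENV["GMAIL_USERNAME"]  #=> "user@example.com"
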
張 旭

bbatsov/rails-style-guide: A community-driven Ruby on Rails 4 style guide - 0 views

  • custom initialization code in config/initializers. The code in initializers executes on application startup
  • Keep initialization code for each gem in a separate file with the same name as the gem
  • Mark additional assets for precompilation
  • ...90 more annotations...
  • config/environments/production.rb
  • Create an additional staging environment that closely resembles the production one
  • Keep any additional configuration in YAML files under the config/ directory
  • Rails::Application.config_for(:yaml_file)
  • Use nested routes to express better the relationship between ActiveRecord models
  • nest routes more than 1 level deep then use the shallow: true option
  • namespaced routes to group related actions
  • Don't use match to define any routes unless there is need to map multiple request types among [:get, :post, :patch, :put, :delete] to a single action using :via option.
  • Keep the controllers skinny
  • all the business logic should naturally reside in the model
  • Share no more than two instance variables between a controller and a view.
  • using a template
  • Prefer render plain: over render text
  • Prefer corresponding symbols to numeric HTTP status codes
  • without abbreviations
  • Keep your models for business logic and data-persistence only
  • Avoid altering ActiveRecord defaults (table names, primary key, etc)
  • Group macro-style methods (has_many, validates, etc) in the beginning of the class definition
  • Prefer has_many :through to has_and_belongs_to_many
  • self[:attribute]
  • self[:attribute] = value
  • validates
  • Keep custom validators under app/validators
  • Consider extracting custom validators to a shared gem
  • preferable to make a class method instead which serves the same purpose of the named scope
  • returns an ActiveRecord::Relation object
  • .update_attributes
  • Override the to_param method of the model
  • Use the friendly_id gem. It allows creation of human-readable URLs by using some descriptive attribute of the model instead of its id
  • find_each to iterate over a collection of AR objects (see the sketch after this list)
  • .find_each
  • .find_each
  • Looping through a collection of records from the database (using the all method, for example) is very inefficient since it will try to instantiate all the objects at once
  • always call before_destroy callbacks that perform validation with prepend: true
  • Define the dependent option to the has_many and has_one associations
  • always use the exception raising bang! method or handle the method return value.
  • When persisting AR objects
  • Avoid string interpolation in queries
  • param will be properly escaped
  • Consider using named placeholders instead of positional placeholders
  • use of find over where when you need to retrieve a single record by id
  • use of find_by over where and find_by_attribute
  • use of where.not over SQL
  • use heredocs with squish
  • Keep the schema.rb (or structure.sql) under version control.
  • Use rake db:schema:load instead of rake db:migrate to initialize an empty database
  • Enforce default values in the migrations themselves instead of in the application layer
  • change_column_default
  • imposing data integrity from the Rails app is impossible
  • use the change method instead of up and down methods.
  • constructive migrations
  • use models in migrations, make sure you define them so that you don't end up with broken migrations in the future
  • Don't use non-reversible migration commands in the change method.
  • In this case, block will be used by create_table in rollback
  • Never call the model layer directly from a view
  • Never make complex formatting in the views, export the formatting to a method in the view helper or the model.
  • When the labels of an ActiveRecord model need to be translated, use the activerecord scope
  • Separate the texts used in the views from translations of ActiveRecord attributes
  • Place the locale files for the models in a folder locales/models
  • the texts used in the views in folder locales/views
  • config/application.rb:
      config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}')]
  • I18n.t
  • I18n.l
  • Use "lazy" lookup for the texts used in views.
  • Use the dot-separated keys in the controllers and models
  • Reserve app/assets for custom stylesheets, javascripts, or images
  • Third party code such as jQuery or bootstrap should be placed in vendor/assets
  • Provide both HTML and plain-text view templates
  • config.action_mailer.raise_delivery_errors = true
  • Use a local SMTP server like Mailcatcher in the development environment
  • Provide default settings for the host name
  • The _url methods include the host name and the _path methods don't
  • _url
  • Format the from and to addresses properly
  • default from:
  • sending html emails all styles should be inline
  • Sending emails while generating a page response should be avoided. It causes delays in loading of the page, and the request can time out if multiple emails are sent.
  • .start_with?
  • .end_with?
  • &.
  • Config your timezone accordingly in application.rb
  • config.active_record.default_timezone = :local
  • it can be only :utc or :local
  • Don't use Time.parse
  • Time.zone.parse
  • Don't use Time.now
  • Time.zone.now
  • Put gems used only for development or testing in the appropriate group in the Gemfile
  • Add all OS X specific gems to a darwin group in the Gemfile, and all Linux specific gems to a linux group
  • Do not remove the Gemfile.lock from version control.
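
A few of the rules quoted above, illustrated as a sketch with hypothetical model and mailer names (this is not code from the guide itself):

    # batch iteration instead of loading everything with .all
    User.find_each { |user| NewsletterMailer.weekly(user).deliver_later }

    # persist with the bang method (raises on failure) rather than ignoring the return value
    user.update!(name: "Alice")

    # timezone-aware time handling
    Time.zone.now                        # not Time.now
    Time.zone.parse("2024-01-01 10:00")  # not Time.parse
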
張 旭

鳥哥的 Linux 私房菜 -- 第一章、Linux是什麼與如何學習 - 0 views

  • Linux proper is just those two layers: the kernel and the system-call interface.
  • The kernel is very tightly coupled to the hardware.
  • Linux provides the complete lowest-level architecture of an operating system, hardware control and resource management, inherited from Unix's good traditions, so it is very stable and powerful.
  • ...31 more annotations...
  • The Linux kernel was developed by Linus Torvalds in 1991 and posted on the Internet for anyone to download. People found this little thing (the Linux kernel) small and elegant, and gradually more and more people joined in studying and working on it.
  • In the early 1960s MIT developed the Compatible Time-Sharing System (CTSS), which let a mainframe provide several terminals so that users could connect to the host and use its resources for computation.
  • To strengthen mainframe capabilities further, so that a host's resources could serve more users, around 1965 Bell Labs, MIT, and General Electric (GE) jointly launched the Multics project.
  • In assembly language he (Ken Thompson) wrote a kernel, some core utilities, and a small filesystem; that system was the prototype of Unix. Thompson had simplified the huge, complex Multics system considerably, so colleagues in the lab jokingly called it Unics (the name Unix did not exist yet).
  • Every program or system device is a file.
  • Whether you are building an editor or an auxiliary tool, a program should have a single purpose and accomplish it effectively.
  • Dennis Ritchie rewrote the B language into C, then rewrote and compiled the Unics kernel in C, and the system was finally renamed and released as the official Unix.
  • Because Unix was written in the relatively high-level C language, which is far less tied to the hardware than assembly, this change made Unix easy to port to different machines.
  • At the time AT&T took a fairly open attitude toward Unix. Moreover, since Unix was written in high-level C it was theoretically portable: with the Unix source code in hand, you could revise it for the characteristics of a given mainframe and port Unix to a different host.
  • After obtaining the Unix kernel source, Bill Joy at Berkeley modified it into a version suited to his own machines, added many tools and compilers along the way, and eventually named it the Berkeley Software Distribution (BSD).
  • Although each company's own Unix was broadly similar in architecture, each could really only support that company's own hardware, so early Unix was synonymous with servers and large workstations.
  • In the Seventh Edition of Unix released in 1979, AT&T added the strict restriction that the source code could not be provided to students.
  • "Purebred" Unix refers to System V and BSD.
  • AT&T's own System V.
  • Since the 1979 Seventh Edition of Unix could be ported to Intel's x86 architecture, did that mean Unix could be rewritten and ported to x86? With this idea, Professor Andrew Tanenbaum went ahead and wrote Minix, a Unix-like kernel, himself.
  • "If an operating system is too complex, I'll start by writing small programs that can run on Unix; surely that is doable?"
  • If he could write a decent compiler, wouldn't that be software everyone needs? So he began writing a C compiler, which became the now famous GNU C Compiler (gcc).
  • He also wrote the callable GNU C library, and the BASH shell as a basic interface for operating the system; all of these were finished around 1990.
  • As demand for a graphical user interface (GUI) kept growing, MIT and partner vendors first released the X Window System in 1984, and in 1988 the non-profit XFree86 organization was founded; the name XFree86 combines X Window System + Free + x86.
  • Professor Tanenbaum wrote Minix for teaching purposes. After buying the latest Intel 386 PC, Torvalds immediately installed Minix on it. As noted earlier, Minix shipped with its source code, so Torvalds learned a great deal about kernel design from that source.
  • Torvalds himself said, "I've always been a performance freak." To squeeze everything out of the 386 he spent a lot of time testing it, and the key test was the 386's multitasking ability. He wrote three small programs: one kept printing A, one kept printing B, and the third switched between the two. Running all three at once, he watched ABABAB... appear smoothly on the screen and knew he had succeeded.
  • So that all existing software could run on Linux, Torvalds started following the standard POSIX specification.
  • POSIX stands for Portable Operating System Interface; it standardizes the interface between the kernel and applications, and is a standard published by the IEEE.
  • Because the directory on the FTP site where Torvalds placed the kernel was named Linux, everyone began calling this kernel Linux. (Note that at this point Linux meant just the kernel; the first kernel version Torvalds uploaded to that directory was 0.02.)
  • Linux is really just the lowest-level kernel of an operating system plus the core tools it provides. It is licensed under the GNU GPL, so anyone can obtain the source code and the executable kernel, and may modify them.
  • Linux follows the POSIX design specification and is therefore compatible with Unix operating systems, so it can also be called a kind of Unix-like system.
  • To make Linux reachable for ordinary users, many commercial companies and non-profit groups bundled the Linux kernel (with its tools) together with runnable software, plus their own installation and management utilities that let users install and manage a Linux system from CD/DVD or over the network. This bundle of "kernel + software + tools + a complete installer" is what we call a Linux distribution (in Chinese often rendered as a complete installable suite).
  • The first official version of the Linux kernel, version 1.0, was finally completed in 1994 and also added support for the X Window System. Version 2.0 followed in 1996, 3.0 was released in 2011, and 4.0 in April 2015: rapid development indeed. Torvalds also chose the penguin as Linux's mascot.
  • Linux by itself is the most bare-bones operating system; its development site is at http://www.kernel.org, and the lowest layer of the Linux operating system is what we call the kernel.
  • Common ways to classify Linux distributions are the "commercial vs. community" scheme and the "RPM vs. DPKG" scheme.
  • In fact the author (VBird, 鳥哥) considers distributions to fall into two main families: those that install software with RPM, including Red Hat, Fedora, and SuSE; and those that use Debian's dpkg, including Debian, Ubuntu, and B2D.
張 旭

Logstash Alternatives: Pros & Cons of 5 Log Shippers [2019] - Sematext - 0 views

  • In this case, Elasticsearch. And because Elasticsearch can be down or struggling, or the network can be down, the shipper would ideally be able to buffer and retry
  • Logstash is typically used for collecting, parsing, and storing logs for future use as part of log management.
  • Logstash’s biggest con or “Achilles’ heel” has always been performance and resource consumption (the default heap size is 1GB).
  • ...37 more annotations...
  • This can be a problem for high traffic deployments, when Logstash servers would need to be comparable with the Elasticsearch ones.
  • Filebeat was made to be that lightweight log shipper that pushes to Logstash or Elasticsearch.
  • differences between Logstash and Filebeat are that Logstash has more functionality, while Filebeat takes less resources.
  • Filebeat is just a tiny binary with no dependencies.
  • For example, how aggressive it should be in searching for new files to tail and when to close file handles when a file didn’t get changes for a while.
  • For example, the apache module will point Filebeat to default access.log and error.log paths
  • Filebeat’s scope is very limited,
  • Initially it could only send logs to Logstash and Elasticsearch, but now it can send to Kafka and Redis, and in 5.x it also gains filtering capabilities.
  • Filebeat can parse JSON
  • you can push directly from Filebeat to Elasticsearch, and have Elasticsearch do both parsing and storing. See the filebeat.yml sketch after this list.
  • You shouldn’t need a buffer when tailing files because, just as Logstash, Filebeat remembers where it left off
  • For larger deployments, you’d typically use Kafka as a queue instead, because Filebeat can talk to Kafka as well
  • The default syslog daemon on most Linux distros, rsyslog can do so much more than just picking logs from the syslog socket and writing to /var/log/messages.
  • It can tail files, parse them, buffer (on disk and in memory) and ship to a number of destinations, including Elasticsearch.
  • rsyslog is the fastest shipper
  • Its grammar-based parsing module (mmnormalize) works at constant speed no matter the number of rules (we tested this claim).
  • use it as a simple router/shipper, any decent machine will be limited by network bandwidth
  • It’s also one of the lightest parsers you can find, depending on the configured memory buffers.
  • rsyslog requires more work to get the configuration right
  • the main difference between Logstash and rsyslog is that Logstash is easier to use while rsyslog is lighter.
  • rsyslog fits well in scenarios where you either need something very light yet capable (an appliance, a small VM, collecting syslog from within a Docker container).
  • rsyslog also works well when you need that ultimate performance.
  • syslog-ng as an alternative to rsyslog (though historically it was actually the other way around).
  • a modular syslog daemon, that can do much more than just syslog
  • Unlike rsyslog, it features a clear, consistent configuration format and has nice documentation.
  • Similarly to rsyslog, you’d probably want to deploy syslog-ng on boxes where resources are tight, yet you do want to perform potentially complex processing.
  • syslog-ng has an easier, more polished feel than rsyslog, but likely not that ultimate performance
  • Fluentd was built on the idea of logging in JSON wherever possible (which is a practice we totally agree with) so that log shippers down the line don’t have to guess which substring is which field of which type.
  • Fluentd plugins are in Ruby and very easy to write.
  • structured data through Fluentd, it’s not made to have the flexibility of other shippers on this list (Filebeat excluded).
  • Fluent Bit, which is to Fluentd what Filebeat is to Logstash.
  • Fluentd is a good fit when you have diverse or exotic sources and destinations for your logs, because of the number of plugins.
  • Splunk isn’t a log shipper, it’s a commercial logging solution
  • Graylog is another complete logging solution, an open-source alternative to Splunk.
  • everything goes through graylog-server, from authentication to queries.
  • Graylog is nice because you have a complete logging solution, but it’s going to be harder to customize than an ELK stack.
  • it depends
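
A hedged sketch of a minimal filebeat.yml that tails one log file and ships straight to Elasticsearch; the path and host are placeholders, and the input type name varies across Filebeat versions:

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/nginx/access.log
    output.elasticsearch:
      hosts: ["localhost:9200"]
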
張 旭

Syntax - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • the native syntax of the Terraform language, which is a rich language designed to be relatively easy for humans to read and write.
  • Terraform's configuration language is based on a more general language called HCL, and HCL's documentation usually uses the word "attribute" instead of "argument."
  • A particular block type may have any number of required labels, or it may require none
  • ...34 more annotations...
  • After the block type keyword and any labels, the block body is delimited by the { and } characters
  • Identifiers can contain letters, digits, underscores (_), and hyphens (-). The first character of an identifier must not be a digit, to avoid ambiguity with literal numbers.
  • The # single-line comment style is the default comment style and should be used in most cases.
  • The idiomatic style is to use the Unix convention
  • Indent two spaces for each nesting level.
  • align their equals signs
  • Use empty lines to separate logical groups of arguments within a block.
  • Use one blank line to separate the arguments from the blocks.
  • "meta-arguments" (as defined by the Terraform language semantics)
  • Avoid separating multiple blocks of the same type with other blocks of a different type, unless the block types are defined by semantics to form a family.
  • Resource names must start with a letter or underscore, and may contain only letters, digits, underscores, and dashes.
  • Each resource is associated with a single resource type, which determines the kind of infrastructure object it manages and what arguments and other attributes the resource supports.
  • Each resource type is implemented by a provider, which is a plugin for Terraform that offers a collection of resource types.
  • By convention, resource type names start with their provider's preferred local name.
  • Most publicly available providers are distributed on the Terraform Registry, which also hosts their documentation.
  • The Terraform language defines several meta-arguments, which can be used with any resource type to change the behavior of resources.
  • use precondition and postcondition blocks to specify assumptions and guarantees about how the resource operates.
  • Some resource types provide a special timeouts nested block argument that allows you to customize how long certain operations are allowed to take before being considered to have failed.
  • Timeouts are handled entirely by the resource type implementation in the provider
  • Most resource types do not support the timeouts block at all.
  • A resource block declares that you want a particular infrastructure object to exist with the given settings.
  • Destroy resources that exist in the state but no longer exist in the configuration.
  • Destroy and re-create resources whose arguments have changed but which cannot be updated in-place due to remote API limitations.
  • Expressions within a Terraform module can access information about resources in the same module, and you can use that information to help configure other resources. Use the <RESOURCE TYPE>.<NAME>.<ATTRIBUTE> syntax to reference a resource attribute in an expression. See the sketch after this list.
  • resources often provide read-only attributes with information obtained from the remote API; this often includes things that can't be known until the resource is created, like the resource's unique random ID.
  • data sources, which are a special type of resource used only for looking up information.
  • some dependencies cannot be recognized implicitly in configuration.
  • local-only resource types exist for generating private keys, issuing self-signed TLS certificates, and even generating random ids.
  • The behavior of local-only resources is the same as all other resources, but their result data exists only within the Terraform state.
  • The count meta-argument accepts a whole number, and creates that many instances of the resource or module.
  • count.index — The distinct index number (starting with 0) corresponding to this instance.
  • the count value must be known before Terraform performs any remote resource actions. This means count can't refer to any resource attributes that aren't known until after a configuration is applied
  • Within nested provisioner or connection blocks, the special self object refers to the current resource instance, not the resource block as a whole.
  • This was fragile, because the resource instances were still identified by their index instead of the string values in the list.
張 旭

Data Sources - Configuration Language | Terraform | HashiCorp Developer - 0 views

  • Each provider may offer data sources alongside its set of resource types.
  • When distinguishing from data resources, the primary kind of resource (as declared by a resource block) is known as a managed resource.
  • Each data resource is associated with a single data source, which determines the kind of object (or objects) it reads and what query constraint arguments are available. See the sketch after this list.
  • ...4 more annotations...
  • Terraform reads data resources during the planning phase when possible, but announces in the plan when it must defer reading resources until the apply phase to preserve the order of operations.
  • local-only data sources exist for rendering templates, reading local files, and rendering AWS IAM policies.
  • As with managed resources, when count or for_each is present it is important to distinguish the resource itself from the multiple resource instances it creates. Each instance will separately read from its data source with its own variant of the constraint arguments, producing an indexed result.
  • Data instance arguments may refer to computed values, in which case the attributes of the instance itself cannot be resolved until all of its arguments are defined.
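
A hedged sketch of a data resource feeding a managed resource; the aws_ami data source, owner ID, and filter values are illustrative, not from the article:

    data "aws_ami" "ubuntu" {
      most_recent = true
      owners      = ["099720109477"]          # placeholder owner account
      filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/*"]  # placeholder filter
      }
    }

    resource "aws_instance" "web" {
      ami           = data.aws_ami.ubuntu.id  # read during planning when possible
      instance_type = "t3.micro"
    }
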
張 旭

Installing Addons | Kubernetes - 0 views

  • Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
  • Cilium is a networking, observability, and security solution with an eBPF-based data plane. Cilium provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay/encapsulation mode, and can enforce network policies on L3-L7 using an identity-based security model that is decoupled from network addressing. Cilium can act as a replacement for kube-proxy; it also offers additional, opt-in observability and security features.
  • CoreDNS is a flexible, extensible DNS server which can be installed as the in-cluster DNS for pods.
  • ...1 more annotation...
  • The node problem detector runs on Linux nodes and reports system issues as either Events or Node conditions.
crazylion lee

tsenart/vegeta: HTTP load testing tool and library. It's over 9000! - 0 views

  •  
    "HTTP load testing tool and library"
crazylion lee

Gobot - 0 views

shared by crazylion lee on 17 Jan 14
  •  
    Gobot is set of libraries in the Go programming language for robotics and physical computing. It provides a simple, yet powerful way to create solutions that incorporate multiple, different hardware devices at the same time. Want to use Ruby on robots? Check out our sister project Artoo (http://artoo.io). Want to use Node.js? Check out our sister project Cylon (http://cylonjs.com).