Larvata / Group items tagged tcp

張 旭

kube-proxy | Kubernetes - 0 views

  • The Kubernetes network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.
  • Service cluster IPs and ports are currently found through Docker-links-compatible environment variables specifying ports opened by the service proxy.
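As a sketch of what gets reflected, here is a minimal Service manifest (name, selector, and ports are illustrative assumptions); kube-proxy on every node forwards TCP traffic arriving at the Service's cluster IP and port to the selected backends:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-svc        # hypothetical name
    spec:
      selector:
        app: example           # pods labeled app=example become the backends
      ports:
        - protocol: TCP        # UDP and SCTP are also supported here
          port: 80             # port exposed on the cluster IP
          targetPort: 8080     # port the backend containers listen on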
crazylion lee

The TCP/IP Guide - Introduction To The TCP/IP Guide - 0 views

  •  
    " Introduction To The TCP/IP Guide"
snow9816

Sensible Use of the KeepAlive Setting in Apache - 煎炸熊の記事本 - 0 views

  • The KeepAlive directive determines whether the TCP connection is closed as soon as a user's HTTP request has been handled. With KeepAlive set to On, the connection is not torn down after one visit completes; any further requests are served over the same TCP connection, sparing the cost of repeatedly opening and closing TCP connections and improving the user's page-load speed.
  • Consider three scenarios: 1. A user browses a page that, besides the page itself, references several JavaScript files, several CSS files, and several images, all hosted on the same HTTP server. 2. A user browses a page that references only one JavaScript file and one image. 3. A user browses a dynamic page generated on the fly that references nothing else. For these three cases: 1 benefits most from turning KeepAlive on, 2 can go either way, and 3 is best served with KeepAlive off.
  • Suppose the KeepAlive timeout is 10 seconds and the server handles 50 independent users per second; the total number of Apache processes in the system is then 10 * 50 = 500. If each process occupies 4 MB of memory, that consumes 2 GB in total.
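A minimal sketch of the directives involved (the numbers are illustrative and should be tuned against the memory math above):

    # httpd.conf
    KeepAlive On              # reuse the TCP connection for further requests
    MaxKeepAliveRequests 100  # cap on requests served per connection
    KeepAliveTimeout 10       # seconds an idle connection is kept open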
張 旭

Operating system - Wikipedia, the free encyclopedia - 0 views

  • An operating system sits between the underlying hardware and the user, serving as the bridge between the two.
  • Process management
  • Security mechanisms (Security)
  • Memory management
  • Kernel: the innermost part of the operating system, usually running at the highest privilege level, responsible for providing fundamental, structural functionality.
  • Device drivers: the lowest-level parts that directly control and monitor each class of hardware. Their job is to hide hardware-specific details and present an abstract, uniform interface to the rest of the system.
  • There is no single standard for classifying operating systems; by mode of operation they can be divided into batch, time-sharing, real-time, network, and distributed operating systems.
  • Parkinson's law applies: "however much memory you give a program, it will find a way to exhaust it."
  • Most modern computer memory architectures are hierarchical: the fastest and scarcest registers come first, followed by caches, main memory, and finally the slowest disk storage.
  • Virtual memory management greatly enlarges the memory space available to each process.
  • When computers were first built on the von Neumann architecture, each CPU could execute at most one process at a time.
  • A modern operating system, even with only one CPU, can use multitasking to run multiple processes concurrently. Process management refers to the operating system's facilities for coordinating multiple processes.
  • The operating system also shoulders harder problems such as inter-process communication (IPC), handling abnormal process termination, and deadlock detection and handling.
  • A file system usually means the subsystem that manages data on disk, storing data as directories and files. Every file system has its own format and features, such as journaling or freedom from defragmentation.
  • Modern operating systems can all speak TCP/IP, the mainstream network protocol suite; such a system can join the networked world and share resources like files, printers, and scanners with other systems.
  • The operating system provides channels for the outside world to access various resources directly or indirectly.
  • The operating system is able to authenticate requests for resource access.
  • A request usually comes from a running program. On some systems a program can do anything once it runs (as with DOS-era viruses), but normally the operating system gives each program an identity and, whenever the program issues a request, checks that identity against the access rights for the requested resource.
  • A system with a high security rating also offers auditing options, allowing the recording of resource-access requests (for example, "who has read this file?").
  • Most operating systems include a graphical user interface (GUI). A few older operating systems coupled the GUI tightly to the kernel, for example the earliest Windows and Mac OS implementations.
  • A device driver is software designed to interact with hardware. It is typically a well-defined interface to the device that, over the bus or communications subsystem the hardware is attached to, issues commands to and receives information from the device, ultimately delivering that information to the operating system or an application.
  • A driver targets specific hardware on a specific operating system, usually running under the kernel as a kernel module, an application package, or an ordinary program, so that interaction with the hardware is transparent and smooth.
  • Once the right driver is installed, the corresponding new device works correctly. The driver fits the device so seamlessly into the operating system that users never notice it is functionality the operating system did not originally have.
crazylion lee

OpenSSH/Cookbook/Multiplexing - Wikibooks, open books for an open world - 0 views

  •  
    " Multiplexing is the ability to send more than one signal over a single line or connection. With multiplexing, OpenSSH can re-use an existing TCP connection for multiple concurrent SSH sessions rather than creating a new one each time."
crazylion lee

Swoole®: An Asynchronous, Parallel, High-Performance Network Communication Engine for PHP - 0 views

  •  
    " PHP的异步、并行、高性能网络通信引擎,使用纯C语言编写,提供了PHP语言的异步多线程服务器,异步TCP/UDP网络客户端,异步MySQL,异步Redis,数据库连接池,AsyncTask,消息队列,毫秒定时器,异步文件读写,异步DNS查询。 Swoole内置了Http/WebSocket服务器端/客户端、Http2.0服务器端。"
張 旭

http - nginx upload client_max_body_size issue - Stack Overflow - 0 views

  • nginx "fails fast" when the client announces that it is going to send a body larger than client_max_body_size: it replies with a 413 response and closes the connection.
  • Because nginx closes the connection, the client sends data to the closed socket, causing a TCP RST.
  • Most clients don't read responses until the entire request body is sent.
  • Client body and buffers are key because nginx must buffer incoming data.
  • The clean setting frees memory and bounds consumption by instructing nginx to write the incoming buffer to a file on disk and then delete that file once it is no longer needed.
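A hedged sketch of the directives discussed (sizes are illustrative):

    # http, server, or location context
    client_max_body_size 20m;        # larger declared bodies get a 413
    client_body_buffer_size 128k;    # in-memory buffer before spilling to disk
    client_body_in_file_only clean;  # buffer the body to a temp file, delete it afterwards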
張 旭

What's the Docker Swarm "-advertise-addr"? - Blog | BoxBoat - 0 views

  • To put it simply, the --advertise-addr is the address other nodes in the Docker swarm use to connect into your node.
  • a port number which defaults to 2377
  • The --listen-addr is the address that the swarm service listens on for incoming connections.
  • The default for --listen-addr is to listen on all interfaces on TCP port 2377 (0.0.0.0:2377)
  • Depending on your network architecture, you may want your swarm management interface only accessible on a management network that could be separate from a data and/or public network that are each attached to a physical server.
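A sketch of separating the two addresses on a node with a dedicated management interface (10.0.0.5 is an assumed address on that network):

    # other nodes will dial 10.0.0.5:2377; the manager also listens only there
    docker swarm init --advertise-addr 10.0.0.5:2377 --listen-addr 10.0.0.5:2377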
張 旭

NAT Gateways - Amazon Virtual Private Cloud - 0 views

  • a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services
  • but prevent the internet from initiating a connection with those instances
  • NAT gateways are not supported for IPv6 traffic
  • must specify the public subnet in which the NAT gateway should reside
  • update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway.
  • NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone.
  • ensure that resources use the NAT gateway in the same Availability Zone
  • The main route table sends internet traffic from the instances in the private subnet to the NAT gateway. The NAT gateway sends the traffic to the internet gateway using the NAT gateway’s Elastic IP address as the source IP address
  • A NAT gateway supports 5 Gbps of bandwidth and automatically scales up to 45 Gbps
  • You can associate exactly one Elastic IP address with a NAT gateway
  • A NAT gateway supports the following protocols: TCP, UDP, and ICMP
  • cannot associate a security group with a NAT gateway.
  • create a NAT gateway in the same subnet as your NAT instance, and then replace the existing route in your route table that points to the NAT instance with a route that points to the NAT gateway
  • A NAT gateway cannot send traffic over VPC endpoints, VPN connections, AWS Direct Connect, or VPC peering connections.
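A sketch of the setup with the AWS CLI (all IDs are placeholders):

    # create the NAT gateway in a public subnet, bound to an Elastic IP
    aws ec2 create-nat-gateway --subnet-id subnet-0abc123 --allocation-id eipalloc-0abc123
    # point internet-bound traffic from the private subnet's route table at it
    aws ec2 create-route --route-table-id rtb-0abc123 \
        --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123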
張 旭

Use swarm mode routing mesh | Docker Documentation - 0 views

  • Docker Engine swarm mode makes it easy to publish ports for services to make them available to resources outside the swarm.
  • All nodes participate in an ingress routing mesh.
  • routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node.
  • Port 7946 TCP/UDP for container network discovery
  • Port 4789 UDP for the container ingress network.
  • When you access port 8080 on any node, the swarm load balancer routes your request to an active container.
  • The routing mesh listens on the published port for any IP address assigned to the node.
  • publish a port for an existing service
  • To use an external load balancer without the routing mesh, set --endpoint-mode to dnsrr instead of the default value of vip
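A sketch of both modes (service names and images are illustrative):

    # publish on the ingress routing mesh: port 8080 answers on every node
    docker service create --name web --publish published=8080,target=80 nginx

    # bypass the mesh for an external load balancer: DNS round-robin endpoints
    docker service create --name web2 --endpoint-mode dnsrr \
        --publish mode=host,target=80 nginx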
張 旭

Let's Encrypt & Docker - Træfik - 0 views

  • automatically discover any services on the Docker host and let Træfik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
  • use Træfik as a layer-7 load balancer with SSL termination for a set of micro-services used to run a web application.
  • Docker containers can only communicate with each other over TCP when they share at least one network.
  • Under the hood, Docker creates iptables rules so containers can't reach other containers unless you want them to
  • Træfik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
  • Enable the Docker provider and listen for container events on the Docker unix socket we've mounted earlier.
  • Enable automatic request and configuration of SSL certificates using Let's Encrypt. These certificates will be stored in the acme.json file, which you can back-up yourself and store off-premises.
  • there isn't a single container that has any published ports to the host -- everything is routed through Docker networks.
  • Thanks to Docker labels, we can tell Træfik how to create its internal routing configuration.
  • container labels and service labels
  • With the traefik.enable label, we tell Træfik to include this container in its internal configuration.
  • tell Træfik to use the web network to route HTTP traffic to this container.
  • Service labels allow managing many routes for the same container.
  • When both container labels and service labels are defined, container labels serve only as default values for missing service labels; no frontend/backend is defined from container labels alone.
  • In the example, two service names are defined : basic and admin. They allow creating two frontends and two backends.
  • Always specify the correct port where the container expects HTTP traffic using traefik.port label.
  • all containers that are placed in the same network as Træfik will automatically be reachable from the outside world
  • With the traefik.frontend.auth.basic label, it's possible for Træfik to provide a HTTP basic-auth challenge for the endpoints you provide the label for.
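A sketch of these labels on a docker-compose service, using the Træfik v1 label names the article describes (host, network name, and credentials are placeholders):

    labels:
      - traefik.enable=true                         # include this container in Træfik's config
      - traefik.docker.network=web                  # route HTTP to it over the 'web' network
      - traefik.port=80                             # port where the container expects HTTP
      - traefik.frontend.rule=Host:app.example.com  # hypothetical hostname
      - traefik.frontend.auth.basic=user:hash       # htpasswd-style entry (escape any $ as $$)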
張 旭

Do You Actually Know What Kubernetes Is? | Hwchiu Learning Note - 0 views

  • Storage has never been a simple problem. On the software side alone it spans many layers: the Linux kernel, file systems, block/file level, caching, snapshots, object storage, and all sorts of related topics.
  • DRBD
  • Off-site backup, fault tolerance, snapshots, deduplication, and a host of related topics have basically never had one perfect solution that satisfies every use case.
  • An administrator might run mdadm directly on the NFS server to set up the block devices and export them for NFS on top, or even apply different file systems underneath (EXT4/BTRFS) for different features and performance.
  • Kubernetes then plays only the role of an NFS client.
  • CSI (Container Storage Interface): CSI acts as the intermediary layer between Kubernetes and the storage solution.
  • Essentially, each Container in a Pod uses a Volume object to represent a mount point inside the container, while externally a PVC and a PV describe the storage backend behind that Volume.
  • The whole chain reaches the actual storage device through the CSI components. Whether a given storage feature is implemented or supported depends entirely on the provider at the far end; Kubernetes only drives it through the CSI standard.
  • Networking has the corresponding CNI (Container Network Interface); Kubernetes talks to the networking solution behind it through the CNI interface.
  • The most basic requirement of CNI is to provide networking to the corresponding container at the corresponding stage.
  • The most common transport today is IPv4 + TCP/UDP, which is why most CNIs talk about exactly that.
  • The goal is that all containers can reach one another over IPv4, whether they sit on the same node or on different nodes.
  • How containers actually transmit to each other, whether encapsulation is needed, which NIC is used, whether NAT is involved: all of that is the implementation behind the CNI interface.
  • External access to container services (Service/Ingress)
  • Between Service and Ingress, Kubernetes implements its own module, broadly called kube-proxy, whose underlying implementation can be iptables, IPVS, user-space software, and so on; this part has nothing to do with CNI.
  • A CNI and Service/Ingress can clash or simply fail to cooperate; there is no absolutely stable integration between them.
  • What a CNI typically handles includes the number of NICs inside the container, their names and IP addresses, and the container's connectivity to external nodes.
  • CRI (Container Runtime Interface) and Device Plugins
  • Kubernetes itself does not really care how the underlying container technology is implemented: Docker, rkt, or CRI-O are all fine, and it can even be a virtual machine masquerading as a container, as with virtlet.
  • Think about why your own service needs containerization in the first place and what benefits containerization brings.
  • Far too many people believe that writing a Dockerfile that wraps the original applications all up together already makes a good container.
  • In the end they use the Container exactly like a Virtual Machine, and then add that containers are simply not very good.
  • Containerization is not about carrying Virtual Machine habits into a new environment; it has to be understood and practiced at the conceptual level. A sketch of the storage chain follows below.
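From the user's side, the chain looks like this: a PVC describing the storage request, and a Pod whose Volume mounts it (names, image, and size are assumptions; the PV behind the claim is provided by whatever CSI-backed storage solution the cluster runs):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
        - name: app
          image: nginx          # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data  # mount point inside the container
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-claim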
張 旭

What's the difference between Prometheus and Zabbix? - Stack Overflow - 0 views

  • Zabbix has its core written in C and a web UI based on PHP
  • Zabbix stores data in RDBMS (MySQL, PostgreSQL, Oracle, sqlite) of user's choice.
  • Prometheus uses its own database embedded into backend process
  • Zabbix by default uses a "pull" model: the server connects to agents on each monitored machine, and the agents periodically gather the info and send it to the server.
  • Prometheus likewise prefers a "pull" model: the server gathers info from client machines.
  • Prometheus requires an application to be instrumented with Prometheus client library (available in different programming languages) for preparing metrics.
  • expose metrics for Prometheus (similar to "agents" for Zabbix)
  • Zabbix uses its own tcp-based communication protocol between agents and a server.
  • Prometheus uses HTTP with protocol buffers (+ text format for ease of use with curl).
  • Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is designed to be complemented by modern visualization tools like Grafana.
  • Prometheus offers solution for alerting that is separated from its core into Alertmanager application.
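A sketch of the pull model in Prometheus configuration (the target address is a placeholder for an instrumented endpoint or exporter):

    # prometheus.yml
    global:
      scrape_interval: 15s             # how often the server pulls metrics
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['10.0.0.5:9100'] # e.g. a node_exporter endpoint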
張 旭

DNS Records: An Introduction - 0 views

  • Domain names are best understood by reading from right to left.
  • the top-level domain, or TLD
  • Every term to the left of the TLD is separated by a period and considered a more specific subdomain
  • Name servers host a domain’s DNS information in a text file called a zone file.
  • Start of Authority (SOA) records
  • specifying DNS records, which match domain names to IP addresses.
  • Every domain’s zone file contains the domain administrator’s email address, the name servers, and the DNS records.
  • Your ISP’s DNS resolver queries a root nameserver for the proper TLD nameserver. In other words, it asks the root nameserver, *Where can I find the nameserver for .com domains?*
  • In actuality, ISPs cache a lot of DNS information after they’ve looked it up the first time.
  • caching is a good thing, but it can be a problem if you’ve recently made a change to your DNS information
  • An A record points your domain or subdomain to your Linode’s IP address,
  • use an asterisk (*) as your subdomain
  • An AAAA record is just like an A record, but for IPv6 IP addresses.
  • An AXFR record is a type of DNS record used for DNS replication
  • DNS Certification Authority Authorization uses DNS to allow the holder of a domain to specify which certificate authorities are allowed to issue certificates for that domain.
  • A CNAME record or Canonical Name record matches a domain or subdomain to a different domain.
  • Some mail servers handle mail oddly for domains with CNAME records, so you should not use a CNAME record for a domain that gets email.
  • MX records cannot reference CNAME-defined hostnames.
  • Chaining or looping CNAME records is not recommended.
  • a CNAME record does not function the same way as a URL redirect.
  • A DKIM record or DomainKeys Identified Mail record displays the public key for authenticating messages that have been signed with the DKIM protocol
  • DKIM records are implemented as text records.
  • An MX record or mail exchanger record sets the mail delivery destination for a domain or subdomain.
  • An MX record should ideally point to a domain that is also the hostname for its server.
  • Priority allows you to designate a fallback server (or servers) for mail for a particular domain. Lower numbers have a higher priority.
  • NS records or name server records set the nameservers for a domain or subdomain.
  • You can also set up different nameservers for any of your subdomains
  • Primary nameservers get configured at your registrar and secondary subdomain nameservers get configured in the primary domain’s zone file.
  • The order of NS records does not matter. DNS requests are sent randomly to the different servers
  • A PTR record or pointer record matches up an IP address to a domain or subdomain, allowing reverse DNS queries to function.
  • providing the opposite service to what an A record does
  • PTR records are usually set with your hosting provider. They are not part of your domain’s zone file.
  • An SOA record or Start of Authority record labels a zone file with the name of the host where it was originally created.
  • Minimum TTL: The minimum amount of time other servers should keep data cached from this zone file.
  • An SPF record or Sender Policy Framework record lists the designated mail servers for a domain or subdomain.
  • An SPF record for your domain tells other receiving mail servers which outgoing server(s) are valid sources of email so they can reject spoofed mail from your domain that has originated from unauthorized servers.
  • Make sure your SPF records are not too strict.
  • An SRV record or service record matches up a specific service that runs on your domain or subdomain to a target domain.
  • Service: The name of the service must be preceded by an underscore (_) and followed by a period (.)
  • Protocol: The name of the protocol must be preceded by an underscore (_) and followed by a period (.)
  • Port: The TCP or UDP port on which the service runs.
  • Target: The target domain or subdomain. This domain must have an A or AAAA record that resolves to an IP address.
  • A TXT record or text record provides information about the domain in question to other resources on the internet.
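A sketch of how several of these record types look in a zone file (example.com and the addresses are placeholders):

    ; A, CNAME, MX, and SRV records in BIND zone-file syntax
    example.com.            IN  A      203.0.113.10
    www.example.com.        IN  CNAME  example.com.
    example.com.            IN  MX 10  mail.example.com.    ; lower number = higher priority
    _sip._tcp.example.com.  IN  SRV 10 5 5060 sip.example.com.  ; priority weight port target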
張 旭

Understanding the Nginx Configuration File Structure and Configuration Contexts | Digit... - 0 views

  • discussing the basic structure of an Nginx configuration file along with some guidelines on how to design your files
  • /etc/nginx/nginx.conf
  • In Nginx parlance, the areas that these brackets define are called "contexts" because they contain configuration details that are separated according to their area of concern
  • contexts can be layered within one another
  • if a directive is valid in multiple nested scopes, a declaration in a broader context will be passed on to any child contexts as default values.
  • The children contexts can override these values at will
  • Nginx will error out on reading a configuration file with directives that are declared in the wrong context.
  • The most general context is the "main" or "global" context
  • Any directive that exists entirely outside of these blocks is said to inhabit the "main" context
  • The main context represents the broadest environment for Nginx configuration.
  • The "events" context is contained within the "main" context. It is used to set global options that affect how Nginx handles connections at a general level.
  • Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections.
  • the connection processing method is automatically selected based on the most efficient choice that the platform has available
  • a worker will only take a single connection at a time
  • When configuring Nginx as a web server or reverse proxy, the "http" context will hold the majority of the configuration.
  • The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested
  • fine-tune the TCP keep alive settings (keepalive_disable, keepalive_requests, and keepalive_timeout)
  • The "server" context is declared within the "http" context.
  • multiple declarations
  • each instance defines a specific virtual server to handle client requests
  • Each client request will be handled according to the configuration defined in a single server context, so Nginx must decide which server context is most appropriate based on details of the request.
  • listen: The ip address / port combination that this server block is designed to respond to.
  • server_name: This directive is the other component used to select a server block for processing.
  • "Host" header
  • configure files to try to respond to requests (try_files)
  • issue redirects and rewrites (return and rewrite)
  • set arbitrary variables (set)
  • Location contexts share many relational qualities with server contexts
  • multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm
  • Location blocks live within server contexts and, unlike server blocks, can be nested inside one another.
  • While server contexts are selected based on the requested IP address/port combination and the host name in the "Host" header, location blocks further divide up the request handling within a server block by looking at the request URI
  • The request URI is the portion of the request that comes after the domain name or IP address/port combination.
  • New directives at this level allow you to reach locations outside of the document root (alias), mark the location as only internally accessible (internal), and proxy to other servers or locations (using http, fastcgi, scgi, and uwsgi proxying).
  • These can then be used to do A/B testing by providing different content to different hosts.
  • configures Perl handlers for the location they appear in
  • set the value of a variable depending on the value of another variable
  • used to map MIME types to the file extensions that should be associated with them.
  • this context defines a named pool of servers that Nginx can then proxy requests to
  • The upstream context should be placed within the http context, outside of any specific server contexts.
  • The upstream context can then be referenced by name within server or location blocks to pass requests of a certain type to the pool of servers that have been defined.
  • function as a high performance mail proxy server
  • The mail context is defined within the "main" or "global" context (outside of the http context).
  • Nginx has the ability to redirect authentication requests to an external authentication server
  • the if directive in Nginx will execute the instructions contained if a given test returns "true".
  • Since Nginx will test conditions of a request with many other purpose-made directives, if should not be used for most forms of conditional execution.
  • The limit_except context is used to restrict the use of certain HTTP methods within a location context.
  • The result of the above example is that any client can use the GET and HEAD verbs, but only clients coming from the 192.168.1.1/24 subnet are allowed to use other methods.
  • Many directives are valid in more than one context
  • it is usually best to declare directives in the highest context to which they are applicable, and overriding them in lower contexts as necessary.
  • Declaring at higher levels provides you with a sane default
  • Nginx already engages in a well-documented selection algorithm for things like selecting server blocks and location blocks.
  • instead of relying on rewrites to get a user supplied request into the format that you would like to work with, you should try to set up two blocks for the request, one of which represents the desired method, and the other that catches messy requests and redirects (and possibly rewrites) them to your correct block.
  • incorrect requests can get by with a redirect rather than a rewrite, which should execute with lower overhead.
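A skeleton illustrating the nesting described above (values and addresses are placeholders):

    user www-data;                  # main context
    events {
        worker_connections 768;     # connection handling per worker
    }
    http {
        keepalive_timeout 65;       # inherited by child contexts as a default
        upstream app_pool {         # named pool of backend servers
            server 10.0.0.11:8080;
        }
        server {
            listen 80;
            server_name example.com;
            location / {
                proxy_pass http://app_pool;   # hand requests to the pool
            }
        }
    }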
張 旭

An Introduction to HAProxy and Load Balancing Concepts | DigitalOcean - 0 views

  • HAProxy, which stands for High Availability Proxy
  • improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).
  • ACLs are used to test some condition and perform an action (e.g. select a server, or block a request) based on the test result.
  • Access Control List (ACL)
  • ACLs allow flexible network traffic forwarding based on a variety of factors like pattern-matching and the number of connections to a backend
  • A backend is a set of servers that receives forwarded requests
  • adding more servers to your backend will increase your potential load capacity by spreading the load over multiple servers
  • mode http specifies that layer 7 proxying will be used
  • specifies the load balancing algorithm
  • health checks
  • A frontend defines how requests should be forwarded to backends
  • use_backend rules, which define which backends to use depending on which ACL conditions are matched, and/or a default_backend rule that handles every other case
  • A frontend can be configured for various types of network traffic
  • Load balancing this way will forward user traffic based on IP range and port
  • Generally, all of the servers in the web-backend should be serving identical content--otherwise the user might receive inconsistent content.
  • Using layer 7 allows the load balancer to forward requests to different backend servers based on the content of the user's request.
  • allows you to run multiple web application servers under the same domain and port
  • acl url_blog path_beg /blog matches a request if the path of the user's request begins with /blog.
  • Round Robin selects servers in turns
  • Selects the server with the least number of connections--it is recommended for longer sessions
  • This selects which server to use based on a hash of the source IP
  • ensure that a user will connect to the same server
  • require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
  • HAProxy uses health checks to determine if a backend server is available to process requests.
  • The default health check is to try to establish a TCP connection to the server
  • If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend
  • For certain types of backends, like database servers in certain situations, the default health check is insufficient to determine whether a server is still healthy.
  • However, your load balancer is a single point of failure in these setups; if it goes down or gets overwhelmed with requests, it can cause high latency or downtime for your service.
  • A high availability (HA) setup is an infrastructure without a single point of failure
  • a static IP address that can be remapped from one server to another.
  • If that load balancer fails, your failover mechanism will detect it and automatically reassign the IP address to one of the passive servers.
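A sketch tying these pieces together in haproxy.cfg (addresses are placeholders):

    frontend www
        bind *:80
        mode http                        # layer 7 proxying
        acl url_blog path_beg /blog      # matches paths beginning with /blog
        use_backend blog-backend if url_blog
        default_backend web-backend

    backend web-backend
        mode http
        balance roundrobin               # selects servers in turns
        server web1 10.0.0.11:80 check   # 'check' enables health checks
        server web2 10.0.0.12:80 check

    backend blog-backend
        mode http
        server blog1 10.0.0.21:80 check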
張 旭

GitLab Auto DevOps In Depth: Automatic Deployment, Without Even a Config File?! | 五倍紅寶石・專業程式教育 - 0 views

  • A K8S cluster; Auto DevOps will deploy the site onto this cluster.
  • A wildcard DNS entry so that sites deployed into this environment get a domain name.
  • A GitLab Runner able to run Docker; it will execute the CI/CD pipeline.
  • Auto DevOps is really just an official, ready-made gitlab-ci.yml: in a project with Auto DevOps enabled, if no gitlab-ci.yml is found, the official gitlab-ci.yml is used to run the CI/CD pipeline.
  • A Pod is the smallest deployable unit in K8S. A Pod consists of one or more Containers, and the Containers in the same Pod share network resources.
  • Every Pod has a yaml file describing the Image the Pod uses, the Ports it exposes, and so on.
  • Nodes come in two kinds: Worker Nodes and Master Nodes.
  • Helm works with parameters and templates, letting us reuse a template by changing only the parameters.
  • For CI/CD we put .gitlab-ci.yml in the project root; GitLab generates the CI/CD Pipeline from .gitlab-ci.yml. A Pipeline may contain several Jobs, so a GitLab Runner is needed to execute the Jobs and report the results back to GitLab so it knows whether each Job ran successfully.
  • Both packaging the project into a Docker Image and the helm operations are executed inside Containers.
  • A CI/CD Pipeline is made of stages and jobs. Stages are ordered: the next stage starts only after the previous one has finished.
  • Each stage contains one or more Jobs.
  • Auto DevOps also makes heavy use of this kind of job that runs inside a designated Container.
  • It must be able to pass health checks.
  • If the project is private, also watch the permissions for using the Container Registry.
  • The wildcard DNS obtained earlier
  • Auto DevOps also offers options that can be customized to a degree just by setting environment variables.
  • Pay special attention to whether the namespace is set correctly, or the data won't be found.
  • With Auto DevOps, if you want to customize further, beyond what changing GitLab environment variables can achieve, you have to come back to the .gitlab-ci.yml config file (see the sketch after this list).
  • Build the Image with a Dockerfile in a Docker-in-Docker environment.
  • Deploy the chart to K8S with helm upgrade.
  • GitLab CI environment variables come from three main sources, from highest to lowest precedence: variables defined in the Settings > CI/CD UI; environment variables defined in gitlab_ci.yml; GitLab's predefined environment variables.
  • To package the project into a Docker Image, first add a Dockerfile to the project.
  • The approach inside Auto DevOps: package the project with the Image provided by herokuish.
  • There is no docker command available in the Runner's environment, so a Docker Container is started and the docker command is run inside it.
  • $CI_COMMIT_SHA and $CI_COMMIT_BEFORE_SHA are both GitLab predefined environment variables, holding the SHA of this commit and of the previous commit.
  • dind starts the docker daemon directly, and in addition dind generates TLS certificates automatically.
  • To run Docker inside a Docker Container, the Docker API on the Host is shared with the Container.
  • docker:stable has the executables needed to run docker, and it also contains the program that starts docker (the docker daemon), but the Container's entrypoint is sh.
  • docker:dind inherits from docker:stable; its entrypoint is the script that starts docker, and it also finishes setting up the TLS certificates.
  • The Container is supposed to reach the Docker API on the Host, but the connection failed because it looked for http://docker:2375. Here dind is no longer used as a service; Docker runs directly inside it, so the connection should use unix:///var/run/docker.sock. Changing the DOCKER_HOST environment variable from tcp://docker:2375 to an empty string lets the docker daemon fall back to its default connection, and everything works!
  • auto-deploy preparation: helm init sets up helm, runs tiller in the background, and sets the cluster namespace.
  • auto-deploy deploy: use helm upgrade to deploy the chart to K8S, passing parameters into the template via --set.
  • set -x prints each command before it is executed.
  • Use helm repo list to see which Chart Repositories are currently registered.
  • helm fetch gitlab/auto-deploy-app --untar
  • nohup lets a job keep running after you disconnect or log out of the system.
  • Without setting CI_APPLICATION_REPOSITORY explicitly, image_repository takes the value of the predefined environment variables CI_REGISTRY_IMAGE/CI_COMMIT_REF_SLUG.
  • ${A:-B} means: use A if it is set, otherwise use B.
  • The hardest part of studying Auto DevOps is that so many tools are integrated together that it is hard to see how they relate, and when something goes wrong you don't know where to start looking.
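One hedged way to customize beyond environment variables is to start from the official template and override only what is needed (POSTGRES_ENABLED is one of the documented Auto DevOps toggles):

    # .gitlab-ci.yml
    include:
      - template: Auto-DevOps.gitlab-ci.yml   # the official pipeline Auto DevOps runs

    variables:
      POSTGRES_ENABLED: "false"               # example override: skip the bundled PostgreSQL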
張 旭

鳥哥的 Linux 私房菜 -- Chapter 0: Introduction to Computers - 0 views

  • Because the CPU computes faster than every other device, and in order to match the FSB frequency, vendors added further acceleration inside the CPU itself; hence the notions of base (external) clock and clock multiplier.
  • The central processing unit (CPU) is a chip with specific functions containing a micro-instruction set; if you want the machine to perform some special function, you must check whether this CPU has the corresponding built-in micro-instructions.
  • The CPU divides into two main units: the arithmetic logic unit and the control unit.
  • Everything the CPU reads comes from main memory; the data in main memory is delivered by the input unit, and data the CPU has finished processing must be written back to main memory before it travels to the output unit.
  • The key players are the CPU and main memory. Watch the solid-line transfer directions in particular: essentially, data flows through main memory on its way anywhere else.
  • The data the CPU actually processes comes entirely from main memory (programs as well as ordinary documents). This is a crucial concept, and it is why system performance turns dreadful when memory runs short.
  • Two major CPU architectures are in common use: reduced instruction set (RISC) and complex instruction set (CISC) systems.
  • A RISC micro-instruction set is leaner: each instruction executes quickly and performs a simple action, so per-instruction performance is better, but complex tasks must be composed from many instructions.
  • In CISC, each micro-instruction can perform some lower-level hardware operations; the instructions are many, complex, and of varying length. Because execution is more involved, each instruction takes longer, but each individual instruction can accomplish richer work.
  • Multimedia instruction sets: MMX, SSE, SSE2, SSE3, SSE4, AMD-3DNow!; virtualization instruction sets: Intel-VT, AMD-SVM; power saving: Intel-SpeedStep, AMD-PowerNow!; 64/32-bit compatibility: AMD-AMD64, Intel-EM64T
  • On raw performance alone, today's personal computers are already fast enough, even faster than workstation-class machines once were. But workstations emphasize stability without crashes and completely correct computation, so workstation-class machines and personal computers are designed with different priorities.
  • 1 Byte = 8 bits
  • File sizes use binary units, so a 1 GByte file is actually 1024x1024x1024 Bytes; speed units usually use decimal, so 1 GHz means 1000x1000x1000 Hz.
  • CPU speed is commonly given in MHz or GHz, where Hz is simply cycles per second.
  • For network transmission, since networks count in bits, the common unit is Mbps, Mbits per second, i.e., how many Mbits are transferred each second.
  • (1) Northbridge: connects the faster components, namely the CPU, main memory, and the graphics interface.
  • (2) Southbridge: connects the slower device interfaces, including disks, USB, network cards, and so on.
  • The CPU contains a micro-instruction set; different micro-instruction sets make the CPU more or less efficient at its work.
  • The clock rate is the number of operations the CPU can perform each second, so a higher clock means the CPU can do more per unit time.
  • Early CPU architectures connected the system's most important parts, the CPU, main memory, and graphics, through the northbridge. Because every device attached through the northbridge, every device had to run at the same working frequency.
  • Front-side bus (FSB)
  • The base (external) clock is the speed at which the CPU exchanges data with external components.
  • The multiplier is the factor the CPU applies internally to speed up its own work.
  • In newer CPU designs the memory controller is integrated into the CPU. For linking the CPU with memory and graphics, Intel uses QPI (Quick Path Interconnect) and DMI technology while AMD uses HyperTransport; these let the CPU talk directly and separately to main memory, the graphics card, and other devices without an external bridge chip.
  • How do we know how much data main memory can supply? That still comes down to the transfer speed between the memory controller inside the CPU and main memory: the front-side bus speed (Front Side Bus, FSB).
  • Main memory also has its own working clock, and that limit is again set by the memory controller inside the CPU.
  • The amount of data a CPU can handle at once is called the word size; by design it is 32 or 64 bits, and calling a computer 32-bit or 64-bit derives from this word size.
  • In early 32-bit CPUs, because the CPU could interpret only a limited amount of data at a time, the amount deliverable from main memory was limited too; this is why a 32-bit CPU supports at most 4 GBytes of memory.
  • Inside each CPU the important registers are split into two groups, and processes are given the two groups to use separately.
  • Two processes can then "compete for the CPU's execution units simultaneously", rather than taking turns through the operating system's multitasking switches.
  • It is widely observed that although HT can raise performance, in some situations it can actually lower it, because in reality there is only one execution unit.
  • The main component of a PC's main memory is dynamic random access memory (DRAM). RAM can record and be used only while powered; the data vanishes when power is cut, so this kind of RAM is also called volatile memory.
  • To enable dual-channel operation you must install two (or four) memory modules, ideally identical down to the model number, because with dual channel enabled data is written to and read from the pair synchronously, which is what lifts the overall bandwidth.
  • The level-2 cache (L2 cache) is integrated into the CPU, so this L2 memory must run at the CPU's clock rate. DRAM cannot reach that speed, which is where static random access memory (SRAM) comes in.
  • The BIOS (Basic Input Output System) is a program burned into a memory chip on the motherboard, a chip that retains its data even without power: read-only memory (ROM).
  • The BIOS matters enormously to a PC because it is the first small program the system reads when it boots.
  • Because platters are round and data is read and written by an actuator arm, the platters must spin for the arm to work, so data is naturally written in circles. The design carves the concentric rings of the platter into small blocks for the read/write head to access; each small block is the disk's smallest physical storage unit, called a sector, and the ring of sectors on one concentric circle is a track. Since a disk may hold several platters, the same track across all platters together forms a cylinder.
  • Disk sectors were originally designed at 512 bytes, but as disk capacities have grown, newer high-capacity disks use a 4 Kbyte sector design to reduce the overhead of splitting data.
  • Flash memory is used to build high-capacity devices whose interfaces are likewise SATA or SAS and whose enclosures look just like traditional disks.
  • The biggest advantage of a solid-state drive is that it has no motor and nothing to spin; it reads and writes memory directly, so besides being fast with no data latency, it is very power-efficient.
  • A hard disk works by a spindle motor spinning the platters, so rotational speed affects performance.
  • Shut down through the operating system's normal procedure to keep the disk in better condition, because it parks the actuator arm back in its home position.
  • An I/O address is a bit like each device's street number: every device has its own address, and in general no two devices may use the same I/O address, otherwise the system cannot tell how to operate them.
  • An IRQ can be pictured as the dedicated route from each address to the mail center (the CPU). Devices use their IRQ interrupt channels to inform the CPU of their status so the CPU can distribute work.
  • The BIOS is a program written into a flash or EEPROM chip on the motherboard. It runs at power-on, loads the parameters stored in CMOS, and tries to invoke the boot program on a storage device, which then takes the machine into the operating system.
  • Computers record nothing but 0/1; everything recorded is measured in units such as bytes and bits.
  • The common encoding table for English is ASCII, in which every symbol (letter, digit, or punctuation mark) occupies 1 byte, giving 2^8 = 256 possible values.
  • For Chinese characters, the most common early encoding was Big5. Each character occupies 2 bytes, so in theory there can be at most 2^16 = 65536, i.e., more than sixty thousand characters.
  • The international ISO/IEC bodies stepped in and defined the Unicode encoding system, the encoding we usually call UTF8 or the universal code.
  • The CPU really does carry a micro-instruction set. When we need the CPU's help, we consult that instruction set and write instruction codes the CPU can read; that is how the CPU is made to work.
  • A "compiler" translates the programming languages humans can write into machine code the machine understands.
  • When you need to write working data into memory, you must allocate a block of memory yourself for your data to fill, so you also have to understand how memory addresses are laid out. Ah, the tears quietly start to flow... why is writing programs such a hassle!
  • An operating system (OS) is itself a set of programs whose focus is managing all of the computer's activity and driving all the hardware in the system.
  • The operating system's job is to let the CPU start evaluating logic and arithmetic, let main memory start loading and reading out data and code, let the disk be accessed, let the network card transfer data, and let every peripheral get to work.
  • Your computer can only do what the kernel provides. For example, if your kernel does not support the TCP/IP protocol suite, then no matter what network card you buy, the kernel can never provide networking.
  • The memory region the kernel occupies is protected, and the kernel stays resident in memory from boot onward.
  • An operating system usually provides a whole set of development interfaces for engineers; software is easy to develop as long as they follow those interfaces.
  • System call interface
  • Process control
  • Memory management
  • Filesystem management
  • The kernel usually provides virtual memory, including the ability to swap memory out when it runs short.
  • Device drivers
  • A "loadable module" facility lets drivers be built as modules, so the kernel need not be recompiled.
  • Drivers are a thoroughly important part of an operating system.
  • The operating system usually provides a development interface to hardware vendors so they can write, against that interface, the "driver" that drives their hardware; once the user installs the driver, the graphics card (say) naturally works on that operating system.
張 旭

Kubernetes Deployments: The Ultimate Guide - Semaphore - 1 views

  • Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt.
  • these Kubernetes objects ensure that you can progressively deploy, roll back and scale your applications without downtime.
  • A pod is just a group of containers (it can be a group of one container) that run on the same machine, and share a few things together.
  • the containers within a pod can communicate with each other over localhost
  • From a network perspective, all the processes in these containers are local.
  • we can never create a standalone container: the closest we can do is create a pod, with a single container in it.
  • Kubernetes is a declarative system (as opposed to an imperative system).
  • All we can do, is describe what we want to have, and wait for Kubernetes to take action to reconcile what we have, with what we want to have.
  • In other words, we can say, “I would like a 40-feet long blue container with yellow doors“, and Kubernetes will find such a container for us. If it doesn’t exist, it will build it; if there is already one but it’s green with red doors, it will paint it for us; if there is already a container of the right size and color, Kubernetes will do nothing, since what we have already matches what we want.
  • The specification of a replica set looks very much like the specification of a pod, except that it carries a number, indicating how many replicas
  • What happens if we change that definition? Suddenly, there are zero pods matching the new specification.
  • the creation of new pods could happen in a more gradual manner.
  • the specification for a deployment looks very much like the one for a replica set: it features a pod specification, and a number of replicas.
  • Deployments, however, don’t create or delete pods directly.
  • When we update a deployment and adjust the number of replicas, it passes that update down to the replica set.
  • When we update the pod specification, the deployment creates a new replica set with the updated pod specification. That replica set has an initial size of zero. Then, the size of that replica set is progressively increased, while decreasing the size of the other replica set.
  • we are going to fade in (turn up the volume) on the new replica set, while we fade out (turn down the volume) on the old one.
  • During the whole process, requests are sent to pods of both the old and new replica sets, without any downtime for our users.
  • A readiness probe is a test that we add to a container specification.
  • Kubernetes supports three ways of implementing readiness probes:Running a command inside a container;Making an HTTP(S) request against a container; orOpening a TCP socket against a container.
  • When we roll out a new version, Kubernetes will wait for the new pod to mark itself as “ready” before moving on to the next one.
  • If there is no readiness probe, then the container is considered as ready, as long as it could be started.
  • MaxSurge indicates how many extra pods we are willing to run during a rolling update, while MaxUnavailable indicates how many pods we can lose during the rolling update.
  • Setting MaxUnavailable to 0 means, “do not shutdown any old pod before a new one is up and ready to serve traffic“.
  • Setting MaxSurge to 100% means, “immediately start all the new pods“, implying that we have enough spare capacity on our cluster, and that we want to go as fast as possible.
  • kubectl rollout undo deployment web
  • the replica set doesn’t look at the pods’ specifications, but only at their labels.
  • A replica set contains a selector, which is a logical expression that “selects” (just like a SELECT query in SQL) a number of pods.
  • it is absolutely possible to manually create pods with these labels, but running a different image (or with different settings), and fool our replica set.
  • Selectors are also used by services, which act as the load balancers for Kubernetes traffic, internal and external.
  • internal IP address (denoted by the name ClusterIP)
  • during a rollout, the deployment doesn’t reconfigure or inform the load balancer that pods are started and stopped. It happens automatically through the selector of the service associated to the load balancer.
  • a pod is added as a valid endpoint for a service only if all its containers pass their readiness check. In other words, a pod starts receiving traffic only once it’s actually ready for it.
  • In blue/green deployment, we want to instantly switch over all the traffic from the old version to the new, instead of doing it progressively
  • We can achieve blue/green deployment by creating multiple deployments (in the Kubernetes sense), and then switching from one to another by changing the selector of our service
  • kubectl label pods -l app=blue,version=v1.5 status=enabled
  • kubectl label pods -l app=blue,version=v1.4 status-
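A sketch pulling these ideas together: a Deployment with a TCP readiness probe and a conservative rolling-update policy (name and image are assumptions):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web               # selects pods by label, like a SELECT query
      strategy:
        rollingUpdate:
          maxSurge: 1            # at most one extra pod during the rollout
          maxUnavailable: 0      # never drop below the desired count
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.21  # placeholder image
              readinessProbe:
                tcpSocket:
                  port: 80       # one of the three probe styles: open a TCP socket
                initialDelaySeconds: 5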