Larvata / Group items tagged: manager

張 旭

Deploy services to a swarm | Docker Documentation

  • Swarm services use a declarative model, which means that you define the desired state of the service, and rely upon Docker to maintain this state.
  • To create a single-replica service with no extra configuration, you only need to supply the image name.
  • A service can be in a pending state if its image is unavailable
  • If your image is available on a private registry which requires login, use the --with-registry-auth flag
  • When you update a service, Docker stops its containers and restarts them with the new configuration.
  • When updating an existing service, the flag is --publish-add. There is also a --publish-rm flag to remove a port that was previously published.
  • To update the command an existing service runs, you can use the --args flag.
  • force the service to use a specific version of the image
  • If the manager can’t resolve the tag to a digest, each worker node is responsible for resolving the tag to a digest, and different nodes may use different versions of the image.
  • After you create a service, its image is never updated unless you explicitly run docker service update with the --image flag as described below.
  • When you run service update with the --image flag, the swarm manager queries Docker Hub or your private Docker registry for the digest the tag currently points to and updates the service tasks to use that digest.
  • You can publish a service task’s port directly on the swarm node where that service is running.
  • You can rely on the routing mesh. When you publish a service port, the swarm makes the service accessible at the target port on every node, regardless of whether there is a task for the service running on that node or not.
  • To publish a service’s ports externally to the swarm, use the --publish <PUBLISHED-PORT>:<SERVICE-PORT> flag.
  • published port on every swarm node
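
A minimal shell sketch of the commands these annotations describe (the service and image names are hypothetical):

    # create a service; the routing mesh publishes port 8080 on every swarm node
    docker service create --name web --replicas 3 --publish 8080:80 nginx:1.25
    # roll out a new image; pass --with-registry-auth for private registries
    docker service update --image nginx:1.26 --with-registry-auth web
    # publish an additional port, or remove a previously published one
    docker service update --publish-add 8443:443 --publish-rm 8080 web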
張 旭

Scalable architecture without magic (and how to build it if you're not Google) - DEV Co...

  • Don’t mess up write-first and read-first databases.
  • keep them stateless.
  • you should know how to make a scalable setup on bare metal.
  • Different programming languages are for different tasks.
  • Go or C which are compiled to run on bare metal.
  • To run NodeJS on multiple cores, you have to use something like PM2, but then you have to keep your code stateless.
  • Python has a very rich, sugary syntax that's great for working with data while keeping your code small and expressive.
  • SQL is almost always slower than NoSQL
  • databases are often read-first or write-first
  • write-first, just like Cassandra.
  • store all of your data in your databases and keep nothing on the backend
  • Functional code is stateless by default
  • It’s better to go for stateless right from the very beginning.
  • deliver exactly the same responses for same requests.
  • Sessions? Store them in Redis and allow all of your servers to access it.
  • Only the first user will trigger a data query, and all others will be receiving exactly the same data straight from the RAM
  • never, never cache user input
  • Only the server output should be cached
  • Varnish is a great cache option that works with HTTP responses, so it may work with any backend.
  • a rate limiter – if not enough time has passed since the last request, the incoming request is denied.
  • different requests blasting every 10ms can bring your server down
  • Just set up entity relations and let your database calculate foreign keys for you
  • the query planner will always be faster than your backend.
  • Backend should have different responsibilities: hashing, building web pages from data and templates, managing sessions and so on.
  • For anything related to data management or data models, move it to your database as procedures or queries.
  • a distributed database.
  • your code has to be stateless
  • Move anything related to the data to the database.
  • For load-balancing a database, go for cluster.
  • DB is balancing requests, as well as your backend.
  • Users from different continents are separated with DNS.
  • Keep it scalable, keep it stateless.
  •  
    "Don't mess up write-first and read-first databases."
張 旭

The Twelve-Factor App

  • software is commonly delivered as a service: called web apps, or software-as-a-service.
  • Use declarative formats for setup automation
  • offering maximum portability between execution environments
  • obviating the need for servers and systems administration
  • Minimize divergence between development and production
  • scale up without significant changes to tooling, architecture, or development practices
  • Ops engineers who deploy or manage such applications.
  • developer building applications which run as a service
  • One codebase
  • many deploys
  • in the environment
  • services as attached resources
  • Explicitly declare
  • separate build and run stages
  • stateless processes
  • Export services via port binding
  • Scale out
  • fast startup and graceful shutdown
  • as similar as possible
  • logs as event streams
  • admin/management tasks as one-off processes
  •  
    "software is commonly delivered as a service: called web apps, or software-as-a-service"
crazylion lee

OpenZipkin · A distributed tracing system

shared by crazylion lee on 12 Mar 18
  •  
    "Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures. It manages both the collection and lookup of this data. Zipkin's design is based on the Google Dapper paper."
crazylion lee

Bcfg2

  •  
    "Bcfg2 helps system administrators produce a consistent, reproducible, and verifiable description of their environment, and offers visualization and reporting tools to aid in day-to-day administrative tasks. It is the fifth generation of configuration management tools developed in the Mathematics and Computer Science Division of Argonne National Laboratory. "
張 旭

Ask HN: What are the best practises for using SSH keys? | Hacker News

  • Make sure you use full disk encryption and never stand up from your machine without locking it, and make sure you keep your local machine patched.
  • I'm more focused on just stealing your keys from you regardless of length
  • attacks that aren't after your keys specifically, e.g. your home directory gets stolen.
  • ED25519 is more vulnerable to quantum computation than is RSA
  • best practice to be using a hardware token
  • to use a yubikey via gpg: with this method you use your gpg subkey as an ssh key
  • sit down and spend an hour thinking about your backup and recovery strategy first
  • never share a private keys between physical devices
  • allows you to revoke a single credential if you lose (control over) that device
  • If a private key ever turns up on the wrong machine, you *know* the key and both source and destination machines have been compromised.
  • centralized management of authentication/authorization
  • I have set up a VPS, disabled passwords, and set up a key with a passphrase to gain access. At this point my greatest worry is losing this private key, as that means I can't access the server. What is a reasonable way to back up my private key?
  • a mountable disk image that's encrypted
  • a system that can update/rotate your keys across all of your servers on the fly in case one is compromised or assumed to be compromised.
  • different keys for different purposes per client device
  • fall back to password plus OTP
  • relying completely on the security of your disk, against either physical or cyber.
  • It is better to use a different passphrase for each key but it is also less convenient unless you're using a password manager (personally, I'm using KeePass)
  • - RSA is pretty standard, and generally speaking is fairly secure for key lengths >=2048. RSA-2048 is the default for ssh-keygen, and is compatible with just about everything.
  • public-key authentication has the somewhat unexpected side effect of preventing MITM, per this security consulting firm
  • Disable passwords and only allow keys even for root with PermitRootLogin without-password
  • You should definitely use a different passphrase for keys stored on separate computers,
  •  
    "Make sure you use full disk encryption and never stand up from your machine without locking it, and make sure you keep your local machine patched"
crazylion lee

TagSpaces - Your Hackable File Organizer

  •  
    "TagSpaces is an open source personal data manager. It helps you organize and browse your files on every platform."
張 旭

How To Use Bash's Job Control to Manage Foreground and Background Processes | DigitalOcean

  • Most processes that you start on a Linux machine will run in the foreground. The command will begin execution, blocking use of the shell for the duration of the process.
  • By default, processes are started in the foreground. Until the program exits or changes state, you will not be able to interact with the shell.
  • stop the process by sending it a signal
  • Linux terminals are usually configured to send the "SIGINT" signal (typically signal number 2) to the current foreground process when the CTRL-C key combination is pressed.
  • Another signal that we can send is the "SIGTSTP" signal (typically signal number 20).
  • A background process is associated with the specific terminal that started it, but does not block access to the shell
  • start a background process by appending an ampersand character ("&") to the end of your commands.
  • type commands at the same time.
  • The [1] represents the command's "job spec" or job number. We can reference this with other job and process control commands, like kill, fg, and bg by preceding the job number with a percentage sign. In this case, we'd reference this job as %1.
  • Once the process is stopped, we can use the bg command to start it again in the background
  • By default, the bg command operates on the most recently stopped process.
  • Whether a process is in the background or in the foreground, it is rather tightly tied with the terminal instance that started it
  • When a terminal closes, it typically sends a SIGHUP signal to all of the processes (foreground, background, or stopped) that are tied to the terminal.
  • a terminal multiplexer
  • start it using the nohup command
  • appending output to ‘nohup.out’
  • pgrep -a
  • The disown command, in its default configuration, removes a job from the jobs queue of a terminal.
  • You can pass the -h flag to the disown command instead in order to mark the process to ignore SIGHUP signals, but to otherwise continue on as a regular job
  • The huponexit shell option controls whether bash will send its child processes the SIGHUP signal when it exits.
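
An illustrative terminal session tying these pieces together (the commands are examples, not from the article):

    sleep 1000 &          # background job; the shell prints a job spec such as [1] 12345
    sleep 2000            # foreground; press CTRL-Z to send SIGTSTP and stop it
    jobs                  # lists %1 (running) and %2 (stopped)
    bg %2                 # resume the stopped job in the background
    nohup ./long_task &   # immune to SIGHUP; output is appended to nohup.out
    disown -h %1          # keep job 1 in the jobs table but mark it to ignore SIGHUP
    fg %2                 # bring job 2 back to the foreground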
張 旭

Volumes - Kubernetes

  • On-disk files in a Container are ephemeral,
  • when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state
  • In Docker, a volume is simply a directory on disk or in another Container.
  • A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the Pod that encloses it.
  • a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts.
    • 張 旭
       
      A Kubernetes Volume follows the lifecycle of its Pod.
  • Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
  • To use a volume, a Pod specifies what volumes to provide for the Pod (the .spec.volumes field) and where to mount those into Containers (the .spec.containers.volumeMounts field).
  • A process in a container sees a filesystem view composed from their Docker image and volumes.
  • Volumes can not mount onto other volumes or have hard links to other volumes.
  • Each Container in the Pod must independently specify where to mount each volume
  • local
  • nfs
  • cephfs
  • awsElasticBlockStore
  • glusterfs
  • vsphereVolume
  • An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS Volume into your Pod.
  • the contents of an EBS volume are preserved and the volume is merely unmounted.
  • an EBS volume can be pre-populated with data, and that data can be “handed off” between Pods.
  • create an EBS volume using aws ec2 create-volume
  • the nodes on which Pods are running must be AWS EC2 instances
  • EBS only supports a single EC2 instance mounting a volume
  • check that the size and EBS volume type are suitable for your use!
  • A cephfs volume allows an existing CephFS volume to be mounted into your Pod.
  • the contents of a cephfs volume are preserved and the volume is merely unmounted.
    • 張 旭
       
      Equivalent to running your own AWS EBS.
  • CephFS can be mounted by multiple writers simultaneously.
  • have your own Ceph server running with the share exported
  • configMap
  • The configMap resource provides a way to inject configuration data into Pods
  • When referencing a configMap object, you can simply provide its name in the volume to reference it
  • volumeMounts:
        - name: config-vol
          mountPath: /etc/config
      volumes:
        - name: config-vol
          configMap:
            name: log-config
            items:
              - key: log_level
                path: log_level
  • create a ConfigMap before you can use it.
  • A Container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
  • An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node.
  • When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
  • By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment.
  • you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
  • volumeMounts:
        - mountPath: /cache
          name: cache-volume
      volumes:
        - name: cache-volume
          emptyDir: {}
  • An fc volume allows an existing fibre channel volume to be mounted in a Pod.
  • configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.
  • Flocker is an open-source clustered Container data volume manager. It provides management and orchestration of data volumes backed by a variety of storage backends.
  • emptyDir
  • flocker
  • A flocker volume allows a Flocker dataset to be mounted into a Pod
  • have your own Flocker installation running
  • A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod.
  • Using a PD on a Pod controlled by a ReplicationController will fail unless the PD is read-only or the replica count is 0 or 1
  • A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
  • have your own GlusterFS installation running
  • A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
  • a powerful escape hatch for some applications
  • access to Docker internals; use a hostPath of /var/lib/docker
  • allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
  • specify a type for a hostPath volume
  • the files or directories created on the underlying hosts are only writable by root.
  • hostPath:
        path: /data          # directory location on host
        type: Directory      # this field is optional
  • An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod.
  • have your own iSCSI server running
  • A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously.
  • A local volume represents a mounted local storage device such as a disk, partition or directory.
  • Local volumes can only be used as a statically created PersistentVolume.
  • Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
  • If a node becomes unhealthy, then the local volume will also become inaccessible, and a Pod using it will not be able to run.
  • PersistentVolume spec using a local volume and nodeAffinity
  • PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node.
  • PersistentVolume volumeMode can now be set to “Block” (instead of the default value “Filesystem”) to expose the local volume as a raw block device.
  • When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer
  • An nfs volume allows an existing NFS (Network File System) share to be mounted into your Pod.
  • NFS can be mounted by multiple writers simultaneously.
  • have your own NFS server running with the share exported
  • A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod.
  • PersistentVolumes are a way for users to “claim” durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
  • A projected volume maps several existing volume sources into the same directory.
  • All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
  • Each projected volume source is listed in the spec under sources
  • A Container using a projected volume source as a subPath volume mount will not receive updates for those volume sources.
  • RBD volumes can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed
  • A secret volume is used to pass sensitive information, such as passwords, to Pods
  • store secrets in the Kubernetes API and mount them as files for use by Pods
  • secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.
  • create a secret in the Kubernetes API before you can use it
  • A Container using a Secret as a subPath volume mount will not receive Secret updates.
  • StorageOS runs as a Container within your Kubernetes environment, making local or attached storage accessible from any node within the Kubernetes cluster.
  • Data can be replicated to protect against node failure. Thin provisioning and compression can improve utilization and reduce cost.
  • StorageOS provides block storage to Containers, accessible via a file system.
  • A vsphereVolume is used to mount a vSphere VMDK Volume into your Pod.
  • supports both VMFS and VSAN datastore.
  • create VMDK using one of the following methods before using with Pod.
  • share one volume for multiple uses in a single Pod.
  • The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.
  • volumeMounts:
        - name: workdir1
          mountPath: /logs
          subPathExpr: $(POD_NAME)
  • env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
  • Use the subPathExpr field to construct subPath directory names from Downward API environment variables
  • enable the VolumeSubpathEnvExpansion feature gate
  • The subPath and subPathExpr properties are mutually exclusive.
  • There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between Containers or between Pods.
  • emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.
  • the Container Storage Interface (CSI) and Flexvolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
  • all volume plugins (like volume types listed above) were “in-tree” meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API.
  • Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
  • Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
  • The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.
  • This feature requires CSIInlineVolume feature gate to be enabled:--feature-gates=CSIInlineVolume=true
  • In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the “Types of Volumes” section above.
  • Mount propagation allows for sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.
  • Mount propagation of a volume is controlled by mountPropagation field in Container.volumeMounts.
  • HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
  • Bidirectional - This volume mount behaves the same the HostToContainer mount. In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume.
  • Edit your Docker's systemd service file. Set MountFlags as follows: MountFlags=shared
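
Pulling the local-volume annotations together, here is a minimal sketch of a PersistentVolume with the required nodeAffinity (the name, capacity, path, and hostname are hypothetical; pair it with a StorageClass whose volumeBindingMode is WaitForFirstConsumer):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /mnt/disks/ssd1          # mounted disk, partition, or directory on the node
      nodeAffinity:                    # required for local volumes
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - node-1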
張 旭

Keycloak and FreeIPA Intro - scott poore's blog

  • Keycloak is an “Open source identity and access management” solution.
  • setup a central Identity Provider (IdP) that applications acting as Service Providers (SP) use to authenticate or authorize user access.
  • FreeIPA does a LOT more than just provide user info though.  It can manage user groups, service lists, hosts, DNS, certificates, and much, much, more.
  • IPA – refers to the FreeIPA Master Server.
  • IdP – as mentioned earlier, IdP stands for Identity Provider.
  • SP – stands for Service Provider. This can be a Java application, JBoss, etc. It can also be a simple Apache web server
  • SAML – stands for Security Assertion Markup Language and refers to mod_auth_mellon here. This provides the SSO functionality.
  • Openidc – stands for OpenID Connect.
張 旭

Warnings, Notes, & Tips

  • Because AS3 manages topology records globally in /Common, records must only be managed through AS3, which treats them declaratively.
  • If a record is added outside of AS3, it will be removed if it is not included in the next AS3 declaration for topology records (AS3 completely overwrites non-AS3 topologies when a declaration is submitted).
  • using AS3 to delete a tenant (for example, sending DELETE to the /declare/<TENANT> endpoint) that contains GSLB topologies will completely remove ALL GSLB topologies from the BIG-IP.
  • When posting a large declaration (hundreds of application services in a single declaration), you may experience a 500 error stating that the save sys config operation failed.
  • Even if you have asynchronous mode set to false, after 45 seconds AS3 sets asynchronous mode to true (API swap), and returns an async response.
  • When creating a new tenant using AS3, it must not use the same name as a partition you separately create on the target BIG-IP system.
  • If you use the same name and then post the declaration, AS3 overwrites (or removes) the existing partition completely, including all configuration objects in that partition.
  • If you use AS3 to create a tenant (which creates a BIG-IP partition), manually adding configuration objects to the partition created by AS3 can have unexpected results
  • When you delete the Tenant using AS3, the system deletes both virtual servers.
  • if a Firewall_Address_List contains zero addresses, a dummy IPv6 address of ::1:5ee:bad:c0de is added in order to maintain a valid Firewall_Address_List. If an address is added to the list, the dummy address is removed.
  • use /mgmt/shared/appsvcs/declare?async=true if you have a particularly large declaration which will take a long time to process.
  • reviewing the Sizing BIG-IP Virtual Editions section (page 7) of Deploying BIG-IP VEs in a Hyper-Converged Infrastructure
  • To test whether your system has AS3 installed or not, use GET with the /mgmt/shared/appsvcs/info URI.
  • You may find it more convenient to put multi-line texts such as iRules into AS3 declarations by first encoding them in Base64.
  • no matter your BIG-IP user account name, audit logs show all messages from admin and not the specific user name.
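
A sketch of the two API calls mentioned above (the host and credentials are placeholders):

    # check whether AS3 is installed
    curl -sku admin:admin https://bigip.example.com/mgmt/shared/appsvcs/info
    # POST a large declaration asynchronously to avoid the 45-second API swap
    curl -sku admin:admin -H "Content-Type: application/json" \
         -X POST -d @declaration.json \
         "https://bigip.example.com/mgmt/shared/appsvcs/declare?async=true"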
crazylion lee

What Google Learned From Its Quest to Build the Perfect Team - The New York Times

  •  
    "What Google Learned From Its Quest to Build the Perfect Team"
crazylion lee

Gerrit Code Review - index.md

  •  
    "Gerrit provides web based code review and repository management for the Git version control system."
張 旭

Implementing a Promtail + Loki + Grafana container logging solution in EKS

  • Everyone should be fairly familiar with Grafana: it is an open-source visualization and analytics tool that lets users query, visualize, alert on, and explore monitoring metrics. Grafana mainly provides dashboard solutions for time-series data and supports dozens of data sources (with more continually being added).
  • Grafana Loki is a set of components that can be composed into a fully featured logging stack. Unlike other logging systems, Loki builds an index only from log labels rather than indexing the original log messages; it simply attaches a set of labels to the log data. This means Loki is cheaper to operate and can be orders of magnitude more efficient.
  • Loki's overall architecture likewise consists of different components cooperating to collect, index, and store logs.
  • To describe Loki in one sentence: it is "like Prometheus, but for logs".
  • Promtail is a log-collection agent that sends the contents of local logs to a Loki instance; it is typically deployed on every machine/container running applications you want to monitor. Promtail's main jobs are discovering targets, attaching labels to log streams, and pushing logs to Loki.
  • Logs in Loki carry a set of label names and values, and only the label pairs are indexed. This trade-off makes it cheaper to operate than full indexing, but content-based queries must be made separately through LogQL.
  • Compared with Fluentd, Promtail is purpose-built for Loki: it can do service discovery for Kubernetes Pods running on the same node and read logs from specified directories.
  • AWS also offers managed services for Grafana and Prometheus: Amazon Managed Service for Grafana (AMG) and Amazon Managed Service for Prometheus (AMP).
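
To make the label-index trade-off concrete: in the LogQL query below, the stream selector in braces is answered from Loki's label index, while the |= filter scans the content of the matching streams. A sketch using logcli; the label names are hypothetical:

    logcli query '{namespace="prod", app="checkout"} |= "error"'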
張 旭

Best practices for building Kubernetes Operators and stateful apps | Google Cloud Blog

  • use the StatefulSet workload controller to maintain identity for each of the pods, and to use Persistent Volumes to persist data so it can survive a service restart.
  • a way to extend Kubernetes functionality with application specific logic using custom resources and custom controllers.
  • An Operator can automate various features of an application, but it should be specific to a single application
  • Kubebuilder is a comprehensive development kit for building and publishing Kubernetes APIs and Controllers using CRDs
  • Design declarative APIs for operators, not imperative APIs. This aligns well with Kubernetes APIs that are declarative in nature.
  • With declarative APIs, users only need to express their desired cluster state, while letting the operator perform all necessary steps to achieve it.
  • scaling, backup, restore, and monitoring. An operator should be made up of multiple controllers that specifically handle each of those features.
  • the operator can have a main controller to spawn and manage application instances, a backup controller to handle backup operations, and a restore controller to handle restore operations.
  • each controller should correspond to a specific CRD so that the domain of each controller's responsibility is clear.
  • If you keep a log for every container, you will likely end up with an unmanageable amount of logs.
  • integrate application-specific details into the log messages, such as adding a prefix for the application name.
  • you may have to use external logging tools such as Google Stackdriver, Elasticsearch, Fluentd, or Kibana to perform the aggregations.
  • adding labels to metrics to facilitate aggregation and analysis by monitoring systems.
  • a more viable option is for application pods to expose a metrics HTTP endpoint for monitoring tools to scrape.
  • A good way to achieve this is to use open-source application-specific exporters for exposing Prometheus-style metrics.
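
A minimal sketch of the kind of declarative custom resource such an operator might reconcile (the group, kind, and fields are invented for illustration):

    apiVersion: example.com/v1
    kind: DatabaseCluster
    metadata:
      name: my-db
    spec:
      replicas: 3              # desired state only; the controllers perform the steps
      version: "14.2"
      backup:
        schedule: "0 3 * * *"  # handled by a dedicated backup controller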
張 旭

鳥哥的 Linux 私房菜 -- Chapter 0: Introduction to Computing

  • Because the CPU computes faster than every other device, and still has to match the FSB frequency, vendors added further acceleration inside the CPU itself; this is where the so-called external clock (base clock) and multiplier come from.
  • Central Processing Unit (CPU): the CPU is a chip with specific functions that contains a micro-instruction set. If you want the machine to perform some special function, you must check whether this CPU has the corresponding built-in micro-instructions.
  • The CPU itself divides into two main units: the arithmetic logic unit and the control unit.
  • All the data the CPU reads comes from main memory, and the data in main memory comes in from the input unit. Data the CPU has finished processing must also be written back to main memory first; only then is it sent from main memory to the output unit.
  • The key components are the CPU and main memory. Pay special attention to the direction of the solid lines: essentially, all data flows through main memory before being passed on.
  • The data the CPU actually processes comes entirely from main memory (programs and ordinary documents alike). This is a very important concept, and it is why system performance becomes so poor when you run short of memory!
  • The two main CPU architectures you will commonly see are the Reduced Instruction Set (RISC) and Complex Instruction Set (CISC) designs.
  • RISC micro-instruction sets are leaner: each instruction executes quickly and does something simple, so instruction throughput is good, but complex work must be composed from many instructions.
  • In CISC, each micro-instruction can perform several lower-level hardware operations; the instructions are numerous and complex, and their lengths vary. Because execution is more involved, each instruction takes longer, but a single instruction can accomplish richer work.
  • Multimedia instruction sets: MMX, SSE, SSE2, SSE3, SSE4, AMD-3DNow!; virtualization instruction sets: Intel-VT, AMD-SVM; power saving: Intel-SpeedStep, AMD-PowerNow!; 64/32-bit compatibility: AMD-AMD64, Intel-EM64T
  • Purely in terms of performance, today's personal computers are already fast enough, even faster than workstation-class machines. But workstations emphasize stability (no crashes) and completely correct computation, so workstation-class and higher machines are designed with different priorities than personal computers.
  • 1 Byte = 8 bits
  • File sizes use binary units, so a 1 GByte file is actually 1024x1024x1024 Bytes! Speed units, however, usually use decimal: 1 GHz means 1000x1000x1000 Hz.
  • CPU speed is usually given in MHz or GHz, where Hz is simply cycles per second.
  • Network transmission is measured in bits, so networks commonly use Mbps, Mbits per second, i.e. how many Mbits are transferred each second.
  • (1) Northbridge: connects the fast components, namely the CPU, main memory, and the graphics interface.
  • (2) Southbridge: connects the slower device interfaces, including disks, USB, network cards, and so on.
  • The CPU contains a micro-instruction set; different micro-instruction sets lead to different CPU efficiency.
  • The clock rate is the number of operations a CPU can perform per second, so a higher clock means the CPU can do more work per unit of time.
  • Early CPU architectures connected the most important parts of the system, the CPU, main memory, and graphics card, through the northbridge. Because every device had to connect through the northbridge, every device had to run at the same working frequency.
  • the front side bus (FSB)
  • The external clock (base clock) is the speed at which the CPU exchanges data with external components.
  • The multiplier is the factor the CPU uses internally to accelerate its own performance.
  • In newer CPU designs the memory controller is integrated into the CPU itself. For linking the CPU with memory and graphics, Intel uses QPI (Quick Path Interconnect) and DMI, while AMD uses HyperTransport. These technologies let the CPU talk directly to main memory, the graphics card, and other devices without going through an external bridge chip.
  • How do we know how much data main memory can supply? That is determined by the transfer speed between the memory controller inside the CPU and main memory: the front side bus speed (Front Side Bus, FSB).
  • Main memory has its own working clock too, and that limit is again set by the memory controller inside the CPU.
  • The amount of data a CPU can process at once is called the word size, which by design is either 32 or 64 bits. Calling a computer 32-bit or 64-bit derives from the word size its CPU can interpret.
  • In early 32-bit CPUs, because the CPU could only interpret a limited amount of data at a time, the volume of data coming from main memory was limited too. This is why a 32-bit CPU can address at most 4 GBytes of memory.
  • Inside each CPU the important registers are split into two groups, and processes are allowed to use the two register groups separately.
  • Two processes can "compete for the CPU's execution units at the same time", rather than being switched by the operating system's multitasking!
  • In most cases HT (Hyper-Threading) improves performance, but in some situations it can actually reduce it, because there really is only one execution unit.
  • The main component of a personal computer's main memory is Dynamic Random Access Memory (DRAM). RAM can only record and serve data while powered; the data vanishes when power is cut, so this kind of RAM is also called volatile memory.
  • To enable dual-channel memory you must install two (or four) modules, ideally identical down to the model number. With dual channel enabled, data is written to and read from the pair synchronously, which is what raises the overall bandwidth!
  • The level-2 cache (L2 cache) is integrated into the CPU, so this L2 memory must run at the same clock as the CPU. DRAM cannot reach that speed, which is where Static Random Access Memory (SRAM) comes in.
  • The BIOS (Basic Input Output System) is a program written permanently into a memory chip on the motherboard, a chip that retains its data even without power: Read Only Memory (ROM).
  • The BIOS is critically important to a personal computer, because it is the first small program the system reads when it boots.
  • Because disk platters are round and are read and written by an actuator arm, the platter must spin for the arm to access it, so data is naturally written in circles. The design carves each concentric ring of the platter into small blocks for the read/write head to access. Each small block is the disk's smallest physical storage unit, a sector; the ring of sectors on one concentric circle is a track; and since a disk may contain several platters, the same track across all platters forms a cylinder.
  • Disk sectors were originally designed at 512 bytes, but as disk capacities have grown, new high-capacity disks use a 4 Kbyte sector design to reduce the overhead of splitting data.
  • Flash memory is used to build high-capacity devices whose interfaces are likewise SATA or SAS, and whose enclosures look just like traditional disks.
  • The biggest advantage of a solid-state drive is that it has no motor and nothing to spin; it reads and writes directly through memory, so besides being fast with no seek latency, it is also power-efficient.
  • Hard disks access data by spinning the platters with a spindle motor, so rotational speed affects performance.
  • Shut down through the operating system's normal procedure to keep your disk healthy: it parks the actuator arm back in its home position!
  • An I/O address is a bit like each device's street address. Every device has its own, and in general two devices must not use the same I/O address, or the system would not know how to drive them.
  • An IRQ can be thought of as the dedicated route from each address to the mail center (the CPU). Devices use their IRQ interrupt channels to report their status to the CPU, making it easier for the CPU to distribute work.
  • The BIOS is a program written into flash or EEPROM on the motherboard. It runs at power-on, loads the parameters stored in CMOS, and attempts to call the boot program on a storage device, which in turn loads the operating system.
  • Computers record nothing but 0s and 1s; even the data they store is measured in units such as bytes and bits.
  • The common encoding table for English is ASCII, in which each symbol (letter, digit, or punctuation mark) occupies 1 byte, giving 2^8 = 256 possible values.
  • For Chinese characters, the most common early encoding table was Big5. Each Chinese character occupies 2 bytes, so in theory at most 2^16 = 65536, around 65 thousand characters, can be represented.
  • The international standards bodies ISO/IEC stepped in and defined the Unicode encoding system, the encoding we usually call UTF-8 or the "universal code".
  • The CPU really does have a micro-instruction set. When we want the CPU to do work, we must consult the micro-instruction set and write instruction code the CPU can read; only then will the CPU operate.
  • A "compiler" translates programming languages that humans can write into machine code that the machine can understand.
  • When you need to write working data into memory, you must allocate a memory block yourself for your data, which means you also have to understand how memory addresses are laid out. Ah, the tears flow again... why is writing programs this troublesome!
  • An operating system (Operating System, OS) is itself a set of programs, and the point of this set of programs is to manage all of the computer's activities and drive all of the hardware in the system.
  • The operating system's job is to let the CPU start performing logic and arithmetic, let main memory start loading and reading out data and program code, let the disk be accessed, let the network card start transmitting data, and let every peripheral start running.
  • Your computer can only do what the kernel provides! For example, if your kernel does not support the TCP/IP network protocol, then no matter which network card you buy, the kernel cannot provide networking!
  • The memory region the kernel occupies is protected, and the kernel stays resident in memory from boot onward.
  • An operating system usually provides a whole set of development interfaces for engineers to develop software against; as long as an engineer follows that interface, software is easy to develop!
  • System call interface
  • Process control
  • Memory management
  • Filesystem management
  • The kernel usually provides virtual memory, so that when physical memory is insufficient it can swap memory out to disk.
  • Device drivers
  • With the "loadable module" facility, drivers can be built as modules, so there is no need to recompile the kernel.
  • Drivers can be said to be a very important part of the operating system.
  • An operating system usually provides a development interface to hardware vendors so they can design "drivers" for their hardware against that interface. Once the user installs the driver, the graphics card naturally works on that operating system, and the user cannot even tell the capability was not built into the OS.
  •  
    "Because the CPU computes faster than every other device, and still has to match the FSB frequency, vendors added further acceleration inside the CPU itself; this is where the so-called external clock (base clock) and multiplier come from."
張 旭

Operating system - Wikipedia, the free encyclopedia

  • The operating system sits between the underlying hardware and the user; it is the bridge through which the two communicate.
  • Process management
  • Security
  • Memory management
  • Kernel: the innermost part of the operating system, usually running at the highest privilege level, responsible for providing fundamental, structural functionality.
  • Drivers: the lowest-level parts that directly control and monitor each kind of hardware; their job is to hide the hardware's concrete details and present an abstract, common interface to the rest of the system.
  • There is no single standard for classifying operating systems; by mode of operation they can be divided into batch, time-sharing, real-time, network, and distributed operating systems, among others.
  • Parkinson's law: "However much memory you give a program, the program will find a way to use it all."
  • Most modern computer memory architectures are hierarchical: the fastest and scarcest registers come first, followed by cache, main memory, and finally the slowest disk storage.
  • Virtual memory management greatly increases the memory space available to each process.
  • When computers were first built on the von Neumann architecture, each CPU could execute at most one process at a time.
  • A modern operating system, even with only one CPU, can use multitasking to run several processes at once. Process management refers to the operating system's coordination of multiple processes.
  • The operating system also shoulders harder problems such as inter-process communication (IPC), handling abnormal process termination, and detecting and resolving deadlocks.
  • A file system usually refers to the system that manages data on disk, storing it in the form of directories and files. Every file system has its own format and features, such as journaling or freedom from defragmentation.
  • Modern operating systems can all operate the mainstream network protocol suite TCP/IP. In other words, such a system can join the networked world and share resources such as files, printers, and scanners with other systems.
  • The operating system provides channels for the outside world to access several kinds of resources, directly or indirectly.
  • The operating system is able to authenticate requests for access to resources.
  • Usually the request comes from a running program. On some systems a program can do anything once it runs (e.g. viruses in the DOS era), but normally the operating system assigns each program an identity and, when the program issues a request, checks the relationship between that identity and the access rights for the requested resource.
  • A system with a high security level will also provide auditing options, allowing it to record how requests accessed resources (for example, "who has read this file?").
  • Most operating systems include a graphical user interface (GUI). Several older operating systems bound the GUI tightly to the kernel, for example the earliest Windows and Mac OS products.
  • A device driver is a class of computer software designed to interact with hardware. It is usually a well-designed interface for interacting with the device, using the computer bus or communications subsystem the hardware connects through to issue commands to, and receive information from, the device; ultimately it delivers that information to the operating system or application.
  • A driver is software written for specific hardware on a specific operating system. It usually runs under the operating system kernel as a kernel module, an application package, or an ordinary program, so that interaction with the hardware is seamless and smooth.
  • Once a suitable driver is installed, the corresponding new device runs flawlessly. The driver fits the device so neatly into the operating system that users never notice the capability was not part of the operating system to begin with.
  •  
    "The operating system sits between the underlying hardware and the user; it is the bridge through which the two communicate."
張 旭

Think Before you NodePort in Kubernetes - Oteemo

  • Two options are provided for Services intended for external use: a NodePort, or a LoadBalancer
  • no built-in cloud load balancers for Kubernetes in bare-metal environments
  • NodePort may not be your best choice.
  • NodePort, by design, bypasses almost all network security in Kubernetes.
  • NetworkPolicy resources can currently only control NodePorts by allowing or disallowing all traffic on them.
  • put a network filter in front of all the nodes
  • if a NodePort-ranged Service is advertised to the public, it may serve as an invitation to black-hats to scan and probe
  • When Kubernetes creates a NodePort service, it allocates a port from a range specified in the flags that define your Kubernetes cluster. (By default, these are ports ranging from 30000-32767.)
  • By design, Kubernetes NodePort cannot expose standard low-numbered ports like 80 and 443, or even 8080 and 8443.
  • A port in the NodePort range can be specified manually, but this would mean the creation of a list of non-standard ports, cross-referenced with the applications they map to
  • if you want the exposed application to be highly available, everything contacting the application has to know all of your node addresses, or at least more than one.
  • non-standard ports.
  • Ingress resources use an Ingress controller (the nginx one is common but not by any means the only choice) and an external load balancer or public IP to enable path-based routing of external requests to internal Services.
  • With a single point of entry to expose and secure
  • get simpler TLS management!
  • consider putting a real load balancer in front of your NodePort Services before opening them up to the world
  • Google very recently released an alpha-stage bare-metal load balancer that, once installed in your cluster, will load-balance using BGP
  • NodePort Services are easy to create but hard to secure, hard to manage, and not especially friendly to others
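
For reference, a NodePort Service sketch showing the constraints discussed above (the names and ports are hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 80           # cluster-internal Service port
          targetPort: 8080   # container port
          nodePort: 30080    # must fall in 30000-32767; 80 and 443 are not allowed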
張 旭

The differences between Docker, containerd, CRI-O and runc - Tutorial Works

  • Docker isn’t the only container contender on the block.
  • Container Runtime Interface (CRI), which defines an API between Kubernetes and the container runtime
  • Open Container Initiative (OCI) which publishes specifications for images and containers.
  • for a lot of people, the name “Docker” itself is synonymous with the word “container”.
  • Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
  • docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer, or DevOps person.
  • containerd: This is a daemon process that manages and runs containers.
  • runc: This is the low-level container runtime (the thing that actually creates and runs containers).
  • libcontainer, a native Go-based implementation for creating containers.
  • Kubernetes includes a component called dockershim, which allows it to support Docker.
  • Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
  • Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
  • Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images; they just do it without having to use the docker command or the Docker daemon.
  • Docker images, are actually images packaged in the Open Container Initiative (OCI) format.
  • CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
  • CRI makes it easier for Kubernetes to use different container runtimes
  • containerd is a high-level container runtime that came from Docker, and implements the CRI spec
  • containerd was separated out of the Docker project, to make Docker more modular.
  • CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
  • The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
  • runc is an OCI-compatible container runtime.
  • A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
  • runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
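
As an illustration of that layering, the classic runc flow runs an OCI bundle with no Docker daemon involved (a sketch; the paths and the busybox image are arbitrary):

    mkdir -p bundle/rootfs
    # populate the root filesystem from any Docker/OCI image
    docker export "$(docker create busybox)" | tar -C bundle/rootfs -xf -
    cd bundle
    runc spec                  # writes a default config.json for the bundle
    sudo runc run mycontainer  # runc alone creates and runs the container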