
Group items tagged: json


張 旭

Notes on issues related to CORS (Cross-Origin Resource Sharing) | Just for noting

  • Usually only a single domain is allowed.
  • Return an Access-Control-Allow-Origin HTTP header with a hard-coded domain; alternatively, make it configurable: if the request comes from an allowed domain, set Access-Control-Allow-Origin to that domain, and reject requests whose origin is not on the whitelist (a minimal sketch follows this list).
  • Google App Engine does not allow adding HTTP headers to handlers for anything other than static files.
  • JSONP to the rescue for cross-domain JSON API requests.
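
A minimal sketch of the whitelist approach described above, written as plain Python WSGI middleware; the ALLOWED_ORIGINS set and the demo app are illustrative assumptions, not taken from the bookmarked post.

```python
# Sketch: reflect Access-Control-Allow-Origin only for whitelisted origins,
# instead of hard-coding a single domain. Names below are illustrative.
ALLOWED_ORIGINS = {"https://example.com", "https://app.example.com"}

def cors_middleware(app):
    def wrapped(environ, start_response):
        origin = environ.get("HTTP_ORIGIN", "")

        def cors_start_response(status, headers, exc_info=None):
            if origin in ALLOWED_ORIGINS:
                # Request came from an allowed domain: echo it back.
                headers = list(headers) + [("Access-Control-Allow-Origin", origin)]
            return start_response(status, headers, exc_info)

        return app(environ, cors_start_response)
    return wrapped

def json_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "application/json")])
    return [b'{"ok": true}']

application = cors_middleware(json_app)
```
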
張 旭

MongoDB Performance Tuning: Everything You Need to Know - Stackify

  • db.serverStatus().globalLock (a sketch of reading these metrics programmatically follows this list)
  • db.serverStatus().locks
  • globalLock.currentQueue.total: This number can indicate a possible concurrency issue if it’s consistently high. This can happen if a lot of requests are waiting for a lock to be released.
  • globalLock.totalTime: If this is higher than the total database uptime, the database has been in a lock state for too long.
  • Unlike relational databases such as MySQL or PostgreSQL, MongoDB uses JSON-like documents for storing data.
  • Databases operate in an environment that consists of numerous reads, writes, and updates.
  • When a lock occurs, no other operation can read or modify the data until the operation that initiated the lock is finished.
  • locks.deadlockCount: Number of times the lock acquisitions have encountered deadlocks
  • Is the database frequently locking from queries? This might indicate issues with the schema design, query structure, or system architecture.
  • From version 3.2 on, WiredTiger is the default storage engine.
  • MMAPv1 locks whole collections, not individual documents.
  • WiredTiger performs locking at the document level.
  • When the MMAPv1 storage engine is in use, MongoDB will use memory-mapped files to store data.
  • All available memory will be allocated for this usage if the data set is large enough.
  • db.serverStatus().mem
  • mem.resident: Roughly equivalent to the amount of RAM in megabytes that the database process uses
  • If mem.resident exceeds the value of system memory and there’s a large amount of unmapped data on disk, we’ve most likely exceeded system capacity.
  • If the value of mem.mapped is greater than the amount of system memory, some operations will experience page faults.
  • The WiredTiger storage engine is a significant improvement over MMAPv1 in performance and concurrency.
  • By default, MongoDB will reserve 50 percent of the available memory for the WiredTiger data cache.
  • wiredTiger.cache.bytes currently in the cache – This is the size of the data currently in the cache.
  • wiredTiger.cache.tracked dirty bytes in the cache – This is the size of the dirty data in the cache.
  • we can look at the wiredTiger.cache.bytes read into cache value for read-heavy applications. If this value is consistently high, increasing the cache size may improve overall read performance.
  • check whether the application is read-heavy. If it is, increase the size of the replica set and distribute the read operations to secondary members of the set.
  • If the application is write-heavy, use sharding within a sharded cluster to distribute the load.
  • Replication is the propagation of data from one node to another
  • Replica sets handle this replication.
  • Sometimes, data isn’t replicated as quickly as we’d like.
  • a particularly thorny problem if the lag between a primary and secondary node is high and the secondary becomes the primary
  • use the db.printSlaveReplicationInfo() or the rs.printSlaveReplicationInfo() command to see the status of a replica set from the perspective of the secondary member of the set.
  • shows how far behind the secondary members are from the primary. This number should be as low as possible.
  • monitor this metric closely.
  • watch for any spikes in replication delay.
  • Always investigate these issues to understand the reasons for the lag.
  • One member of a replica set is primary. All others are secondary.
  • it’s not normal for nodes to change back and forth between primary and secondary.
  • use the profiler to gain a deeper understanding of the database’s behavior.
  • Enabling the profiler can affect system performance, due to the additional activity.
張 旭

Introduction to MongoDB - MongoDB Manual

  • MongoDB is a document database designed for ease of development and scaling
  • MongoDB offers both a Community and an Enterprise version
  • A record in MongoDB is a document, which is a data structure composed of field and value pairs.
  • MongoDB documents are similar to JSON objects.
  • The values of fields may include other documents, arrays, and arrays of documents (a sketch follows this list).
  • Embedded documents and arrays reduce the need for expensive joins.
  • MongoDB stores documents in collections.
  • Collections are analogous to tables in relational databases.
  • Read-only Views
  • Indexes support faster queries and can include keys from embedded documents and arrays.
  • MongoDB's replication facility is called a replica set.
  • A replica set is a group of MongoDB servers that maintain the same data set, providing redundancy and increasing data availability.
  • Sharding distributes data across a cluster of machines.
  • MongoDB supports creating zones of data based on the shard key.
  • MongoDB provides a pluggable storage engine API.
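
A short sketch of the document model described above, assuming pymongo and a made-up blog.posts collection; the field names are illustrative only.

```python
# Sketch: a JSON-like document with an embedded document, arrays, and an
# array of documents, plus indexes that include embedded/array keys.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
posts = client.blog.posts

posts.insert_one({
    "title": "Intro to MongoDB",
    "author": {"name": "Alice", "email": "alice@example.com"},  # embedded document
    "tags": ["mongodb", "json"],                                # array
    "comments": [{"user": "bob", "text": "Nice overview"}],     # array of documents
})

posts.create_index("author.name")  # key from an embedded document
posts.create_index("tags")         # key from an array (multikey index)

print(posts.find_one({"tags": "json"}, {"_id": 0, "title": 1}))
```
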
張 旭

Logging Architecture | Kubernetes

  • Application logs can help you understand what is happening inside your application
  • container engines are designed to support logging.
  • The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
  • In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called cluster-level logging.
  • Cluster-level logging architectures require a separate backend to store, analyze, and query logs
  • Kubernetes does not provide a native storage solution for log data.
  • use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
  • A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams
  • The Docker JSON logging driver treats each line as a separate message.
  • By default, if a container restarts, the kubelet keeps one terminated container with its logs.
  • An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node
  • You can also set up a container runtime to rotate an application's logs automatically.
  • The two kubelet flags container-log-max-size and container-log-max-files can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
  • The kubelet and container runtime do not run in containers.
  • On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory.
  • System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
  • Kubernetes does not provide a native solution for cluster-level logging
  • Use a node-level logging agent that runs on every node.
  • implement cluster-level logging by including a node-level logging agent on each node.
  • the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
  • the logging agent must run on every node, it is recommended to run the agent as a DaemonSet
  • Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
  • Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
  • Each sidecar container prints a log to its own stdout or stderr stream (a sketch of such a sidecar follows this list).
  • It is not recommended to write log entries with different formats to the same log stream
  • writing logs to a file and then streaming them to stdout can double disk usage.
  • If you have an application that writes to a single file, it's recommended to set /dev/stdout as the destination
  • it's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
  • Using a logging agent in a sidecar container can lead to significant resource consumption. Moreover, you won't be able to access those logs using kubectl logs because they are not controlled by the kubelet.
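
A rough sketch of the streaming-sidecar idea mentioned above: a companion container tails the log file written by the main container (shared through a volume) and re-emits each line on its own stdout so the kubelet can collect it. The log path and the polling loop are illustrative assumptions, not taken from the Kubernetes docs.

```python
# Sketch: follow an application log file and echo new lines to stdout.
import sys
import time

LOG_PATH = "/var/log/app/app.log"  # assumed path on a shared volume

def follow(path):
    with open(path, "r") as f:
        while True:
            line = f.readline()
            if line:
                sys.stdout.write(line)   # becomes this container's stdout log
                sys.stdout.flush()
            else:
                time.sleep(0.5)          # wait for the application to write more

if __name__ == "__main__":
    follow(LOG_PATH)
```
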
張 旭

Kubernetes Basic Concepts · Kubernetes指南 (Kubernetes Guide)

  • A container is a portable, lightweight form of operating-system-level virtualization. It uses namespaces to isolate different software runtime environments, and its image makes the runtime environment self-contained, so a container can easily be run anywhere.
  • Each application is packaged in a container, so managing container deployments is equivalent to managing application deployments.
  • A Pod is a group of closely related containers that share the PID, IPC, Network, and UTS namespaces; it is the basic unit of scheduling in Kubernetes.
  • Inter-process communication and file sharing.
  • In Kubernetes, every object is defined by a manifest (yaml or json); a minimal sketch follows this list.
  • A Node is the host on which Pods actually run; it can be a physical machine or a virtual machine.
  • Every Node must run at least a container runtime (such as docker or rkt), kubelet, and kube-proxy.
  • Common objects such as pods, services, replication controllers, and deployments belong to a namespace (default if not specified).
  • node, persistentVolumes, and the like, by contrast, do not belong to any namespace.
  • A Service is an abstraction over an application; it provides load balancing and service discovery for the application through labels.
  • The Pod IPs and ports that match the labels form the endpoints, and kube-proxy is responsible for load-balancing the service IP across these endpoints.
  • Every Service is automatically assigned a cluster IP (a virtual address reachable only inside the cluster) and a DNS name.
張 旭

chaifeng/ufw-docker: To fix the Docker and UFW security flaw without disabling iptables

  • It requires disabling Docker's iptables function first, but this also means giving up Docker's network management features.
  • As a result, containers will not be able to access the external network.
  • For example, -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE. But this only allows containers that belong to the network 172.17.0.0/16 to access the outside.
  • There is no need to disable Docker's iptables; let Docker manage its own network.
  • The public network cannot access ports published by Docker.
  • A very convenient way to allow/deny public access to container ports, without additional software or extra configuration.
  • Enable Docker's iptables feature: remove all changes such as --iptables=false, including those in the configuration file /etc/docker/daemon.json.
  • Modify the UFW configuration file /etc/ufw/after.rules
  • For reasons that are not fully understood, the UFW rules may not take effect after restarting UFW; in that case, reboot the server.
  • If we publish a port with the option -p 8080:80, we should use the container port 80, not the host port 8080.
  • Allow the private networks to visit each other.
  • The following rules block connection requests initiated by all public networks, but allow internal networks to access external networks.
  • Since the UDP protocol is stateless, it is not possible to block the handshake signal that initiates the connection request as TCP does.
  • On GNU/Linux we can find the local port range in the file /proc/sys/net/ipv4/ip_local_port_range. The default range is 32768–60999 (a small sketch follows this list).
  • It not only exposes ports of containers but also exposes ports of the host.
  • It cannot expose services running on the host and in containers at the same time with the same command.