
Larvata: Group items tagged net


張 旭

MongoDB Performance - MongoDB Manual - 0 views

  • MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock.
  • Performance limitations may result from inadequate or inappropriate indexing strategies, or from poor schema design patterns.
  • Performance issues may be temporary and related to abnormal traffic load.
  • ...9 more annotations...
  • Lock-related slowdowns can be intermittent.
  • If globalLock.currentQueue.total is consistently high, then there is a chance that a large number of requests are waiting for a lock.
  • If globalLock.totalTime is high relative to uptime, the database has existed in a lock state for a significant amount of time.
  • For write-heavy applications, deploy sharding and add one or more shards to a sharded cluster to distribute load among mongod instances.
  • Unless constrained by system-wide limits, the maximum number of incoming connections supported by MongoDB is configured with the maxIncomingConnections setting.
  • When logLevel is set to 0, MongoDB records slow operations to the diagnostic log at a rate determined by slowOpSampleRate.
  • At higher logLevel settings, all operations appear in the diagnostic log regardless of their latency with the following exception
  • Full Time Diagnostic Data Collection (FTDC) mechanism. FTDC data files are compressed, are not human-readable, and inherit the same file access permissions as the MongoDB data files.
  • mongod processes store FTDC data files in a diagnostic.data directory under the instance's storage.dbPath.
張 旭

Creating a cluster with kubeadm | Kubernetes - 0 views

  • (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify --control-plane-endpoint to set the shared endpoint for all control-plane nodes.
  • set the --pod-network-cidr to a provider-specific value.
  • kubeadm tries to detect the container runtime by using a list of well known endpoints.
  • ...12 more annotations...
  • kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init
  • Do not share the admin.conf file with anyone and instead grant users custom permissions by generating them a kubeconfig file using the kubeadm kubeconfig user command.
  • The token is used for mutual authentication between the control-plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster.
  • You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
  • Take care that your Pod network does not overlap with any of the host networks.
  • Make sure that your Pod network plugin supports RBAC, and so do any manifests that you use to deploy it.
  • You can install only one Pod network per cluster.
  • The cluster created here has a single control-plane node, with a single etcd database running on it.
  • The node-role.kubernetes.io/control-plane label is such a restricted label and kubeadm manually applies it using a privileged client after a node has been created.
  • By default, your cluster will not schedule Pods on the control plane nodes for security reasons.
  • kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
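
A condensed sketch of the bootstrap flow these annotations describe; the endpoint, addresses, Pod CIDR, and CNI manifest placeholder are illustrative, not recommendations:

    # Initialize the first control-plane node with a shared endpoint and a Pod CIDR
    sudo kubeadm init \
      --control-plane-endpoint=cluster-endpoint.example.com:6443 \
      --pod-network-cidr=10.244.0.0/16 \
      --apiserver-advertise-address=192.168.0.10
    # Admin credentials (do not hand admin.conf to regular users)
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # Install exactly one CNI Pod network add-on (manifest URL is CNI-specific)
    kubectl apply -f <your-cni-manifest.yaml>
    # Optional, single-node clusters only: allow scheduling on the control plane
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
    # Join workers with the token printed by kubeadm init (keep it secret)
    sudo kubeadm join cluster-endpoint.example.com:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>
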
張 旭

Installing kubeadm | Kubernetes - 0 views

  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
  • The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid
  • some virtual machines may have identical values.
  • ...6 more annotations...
  • Kubernetes uses these values to uniquely identify the nodes in the cluster.
  • Make sure that the br_netfilter module is loaded.
  • You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.
  • kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.
  • one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version.
  • Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
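
A sketch of the host-preparation steps these annotations refer to, following the usual pattern from the install guide; the file names under /etc are conventional choices rather than requirements:

    # Disable swap now and keep it disabled across reboots
    sudo swapoff -a
    sudo sed -i '/ swap / s/^/#/' /etc/fstab
    # Uniqueness checks: MAC address and product_uuid must differ on every node
    ip link show
    sudo cat /sys/class/dmi/id/product_uuid
    # Load br_netfilter and let bridged traffic pass through iptables
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    sudo modprobe br_netfilter
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system
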
張 旭

chaifeng/ufw-docker: To fix the Docker and UFW security flaw without disabling iptables - 0 views

  • It requires disabling Docker's iptables function first, but this also means giving up Docker's network management function.
  • This causes containers to be unable to access the external network.
  • such as -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE. But this only allows containers on the 172.17.0.0/16 network to access the outside.
  • ...13 more annotations...
  • There is no need to disable Docker's iptables; let Docker manage its own network.
  • The public network cannot access ports published by Docker.
  • Allow or deny public-network access to container ports in a very convenient way, without additional software or extra configuration.
  • Enable Docker's iptables feature: remove all changes such as --iptables=false, including from the configuration file /etc/docker/daemon.json.
  • Modify the UFW configuration file /etc/ufw/after.rules
  • For unknown reasons the UFW rules may sometimes not take effect after restarting UFW; if that happens, reboot the server.
  • If we publish a port with the option -p 8080:80, we should use the container port 80, not the host port 8080.
  • Allow the private networks to visit each other.
  • The following rules block connection requests initiated by all public networks, but allow internal networks to access external networks.
  • Since the UDP protocol is stateless, it is not possible to block the handshake signal that initiates the connection request as TCP does.
  • For GNU/Linux we can find the local port range in the file /proc/sys/net/ipv4/ip_local_port_range. The default range is 32768 60999
  • It not only exposes ports of containers but also exposes ports of the host.
  • The same command cannot expose services running on the host and in containers at the same time.
張 旭

Some Principles I Follow in System Architecture | 酷 壳 - CoolShell - 0 views

  • If you cannot state the benefit, it is just technology for technology's sake, and it means nothing.
  • Have corresponding solutions for both planned and unplanned downtime.
  • Constant, recurring human error.
  • ...35 more annotations...
  • Operations further splits into infrastructure operations and application operations, and development splits into core infrastructure development and business development.
  • Infrastructure operations and development engineers mostly care about resource utilization and performance, while application operations and business development care more about the application and service layer.
  • Some systems can no longer be cleanly classified as infrastructure or application layer. Service governance, for example, involves low-level infrastructure technology but also needs cooperation from the business side; the same goes for k8s, which contains low-level technology such as networking alongside things that require business cooperation, such as readiness and liveness health checks and the ConfigMaps that business applications need ……
  • Think of optimizing a city's traffic: once the city reaches a certain scale, you cannot improve overall performance by optimizing a few roads or blocks; only by planning the functional areas of the whole city can you raise overall efficiency.
  • As systems grow more and more complex, cases keep appearing of users migrating their PHP, Python, .NET, or Node.js architectures entirely to a Java + Go architecture.
  • More industrial-grade technology.
  • Use a more mature, more industrial-grade technology stack, rather than the stack you personally happen to know best.
  • Do not reinvent the wheel, and even less should you hack-modify (heavily fork) existing software.
  • It is completely unnecessary. Not reinventing the wheel and not hack-modifying is not because you lack the technical ability, but because the world has long since left the era in which you do everything yourself.
  • Quite a few companies' architectures are held hostage to the personal preferences, strengths, and experience of the technical lead, instead of technology being selected from an objective standpoint.
  • All of China's e-commerce platforms, several hundred banks, the three major telecom carriers, all the insurance companies, the brokerages' systems, hospital systems, e-government systems, and so on are basically built in Java; Java is also the mainstream language at AWS.
  • NoSQL databases all perform poorly at joins.
  • To avoid joins, people start duplicating data, then fail to manage the data-consistency problems that the redundancy brings, which leads to all kinds of data corruption and loss.
  • Always use a relational database with full ACID support.
  • Performance problems always have a solution, and more techniques exist for them than for anything else; compared with the completeness and extensibility of the architecture, they really are not worth worrying about too much.
  • Many companies' systems neither follow industry standards nor form standards of their own; they feel like a disorganized mob.
  • The most typical example is HTTP status codes. The industry standard gives you 200 for success, 3xx for redirects, 4xx for caller errors, and 5xx for server errors; I really cannot understand why, success or failure, everyone likes to return 200 and then indicate in the body whether there is an error.
  • RESTful API conventions. I think they are very important; here are the two references I consider the best written: PayPal and Microsoft.
  • A monitoring system should rather die itself than interfere with the actual application.
  • A company should review its software versions at least once a year and then converge on unified, consistent versions.
  • Architecture and software are not done once written; they need continuous modification and maintenance, and 80% of software cost goes to maintenance.
  • Use service discovery or a service gateway to reduce the operational complexity that service dependencies bring.
  • Always apply established software design principles, for example SOLID (see "Some Principles of Software Design"), IoC/DIP, architectural best practices such as SOA or Spring Cloud (see the Service Interface rules in "Steve Yegge's rant about the Amazon and Google platforms"), and practices for distributed system architecture (see "Transaction Processing in Distributed Systems", or Microsoft's "Cloud Design Patterns"), and so on.
  • No automated tests, no good software documentation, no quality code, no standards or conventions.
  • Technical debt owed from the past all has to be repaid: foundations that were never laid properly must be re-laid, and supporting facilities that were never built must be built. If this infrastructure is not established in a correct, scientific way, you cannot have a good system.
  • Rather than spending great effort accommodating technical debt, pay the debt off directly.
  • Build a "new district" that carries no technical debt, and use an "anti-corruption layer" architectural model to keep technical debt from invading the "new district".
  • If one day you start making technical decisions purely from your past experience, you can no longer grow.
  • Before making any decision, it is best to spend a little time searching online for relevant material (technical blogs, articles, papers) and to look at how various companies or open-source projects do it; then compare the pros/cons of several options and form your own decision.
  • The X-Y problem: to solve problem X, a user decides Y will solve it and asks me how to do Y; in the end it turns out that for the original problem X, the best solution is not Y but Z.
  • I like to keep asking "why"; this questioning pulls the client into rethinking the problem together.
  • Being aggressive does not mean acting recklessly, nor adopting every new technology you see; it means actively embracing the new technologies that will change the future.
  • It is not that I skip learning what I dislike: I study blockchain and Rust all the same, and I know the strengths of these technologies, but I will not use them at scale.
  • Progress always comes from exploration; exploration has a price, but the payoff is greater.
  • Not daring to take risks is the biggest risk; not daring to make mistakes is the biggest mistake; fear of losing will make you lose even more.
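
On the status-code point above, a small illustration of the anti-pattern versus the standard convention; the endpoint and responses are hypothetical:

    # Anti-pattern: the status line says success, the body hides the failure
    curl -i https://api.example.com/v1/users/42
    #   HTTP/1.1 200 OK
    #   {"error": true, "message": "user not found"}
    # Convention: let the status line carry the outcome (4xx caller error, 5xx server error)
    #   HTTP/1.1 404 Not Found
    #   {"message": "user not found"}
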
張 旭

GoJS - 0 views

shared by 張 旭 on 20 May 22