Larvata / Group items tagged: azure


張 旭

Outbound connections in Azure | Microsoft Docs - 0 views

  • When an instance initiates an outbound flow to a destination in the public IP address space, Azure dynamically maps the private IP address to a public IP address.
  • After this mapping is created, return traffic for this outbound originated flow can also reach the private IP address where the flow originated.
  • Azure uses source network address translation (SNAT) to perform this function
  • ...22 more annotations...
  • When multiple private IP addresses are masquerading behind a single public IP address, Azure uses port address translation (PAT) to masquerade private IP addresses.
  • If you want outbound connectivity when working with Standard SKUs, you must explicitly define it either with Standard Public IP addresses or Standard public Load Balancer.
  • the VM is part of a public Load Balancer backend pool. The VM does not have a public IP address assigned to it.
  • The Load Balancer resource must be configured with a load balancer rule to create a link between the public IP frontend with the backend pool.
  • VM has an Instance Level Public IP (ILPIP) assigned to it. As far as outbound connections are concerned, it doesn't matter whether the VM is load balanced or not.
  • When an ILPIP is used, the VM uses the ILPIP for all outbound flows.
  • A public IP assigned to a VM is a 1:1 relationship (rather than 1:many) and is implemented as a stateless 1:1 NAT.
  • Port masquerading (PAT) is not used, and the VM has all ephemeral ports available for use.
  • When the load-balanced VM creates an outbound flow, Azure translates the private source IP address of the outbound flow to the public IP address of the public Load Balancer frontend.
  • Azure uses SNAT to perform this function. Azure also uses PAT to masquerade multiple private IP addresses behind a public IP address.
  • Ephemeral ports of the load balancer's public IP address frontend are used to distinguish individual flows originated by the VM.
  • When multiple public IP addresses are associated with Load Balancer Basic, any of these public IP addresses are a candidate for outbound flows, and one is selected at random.
  • the VM is not part of a public Load Balancer pool (and not part of an internal Standard Load Balancer pool) and does not have an ILPIP address assigned to it.
  • The public IP address used for this outbound flow is not configurable and does not count against the subscription's public IP resource limit.
  • Do not use this scenario for whitelisting IP addresses.
  • This public IP address does not belong to you and cannot be reserved.
  • Standard Load Balancer uses all candidates for outbound flows at the same time when multiple (public) IP frontends are present.
  • Load Balancer Basic chooses a single frontend to be used for outbound flows when multiple (public) IP frontends are candidates for outbound flows.
  • the disableOutboundSnat option defaults to false and signifies that this rule programs outbound SNAT for the associated VMs in the backend pool of the load balancing rule.
  • Port masquerading SNAT (PAT)
  • Ephemeral port preallocation for port masquerading SNAT (PAT)
  • determine the public source IP address of an outbound connection.
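
For the load-balanced scenario above, outbound SNAT is programmed by the load-balancing rule itself. A minimal az CLI sketch, assuming a Standard Load Balancer named myLB in resource group myRG with an existing frontend and backend pool (all resource names are placeholders, and flag spellings can vary slightly between az versions):

    az network lb rule create \
      --resource-group myRG \
      --lb-name myLB \
      --name httpRule \
      --protocol Tcp \
      --frontend-port 80 \
      --backend-port 80 \
      --frontend-ip-name myFrontend \
      --backend-pool-name myBackendPool \
      --disable-outbound-snat false   # default: the rule also programs outbound SNAT/PAT for the pool

Setting --disable-outbound-snat to true (typically together with an explicit outbound configuration) separates inbound load balancing from outbound SNAT so the frontend's ephemeral ports are not shared between the two.
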
張 旭

Azure 101: Networking Part 1 - Cloud Solution Architect - 0 views

  • Virtual Private Gateways: it is this combined set of services that allows you to provide traffic flow to/from your Virtual Network and any external network, such as your on-prem datacenter.
  • No matter which version of the gateway you plan on implementing, there are three resources within Azure that you will need to implement and then connect to one of your Virtual Networks.
  • "Gateway Subnet". This is a specialized Subnet within your Virtual Network that can only be used for connecting Virtual Private Gateways to a VPN connection of some kind.
  • ...2 more annotations...
  • The Local Gateway is where you define the configuration of your external network's VPN access point with the most important piece being the external IP of that device so that Azure knows exactly how to establish the VPN connection.
  • The VPN Gateway is the Azure resource that you tie into your Gateway Subnet within your Virtual Network.
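
A hedged az CLI sketch of the three resources described above (the Gateway Subnet, the Local Gateway, and the VPN Gateway) plus the connection that ties them to an on-prem device; every name, address range, and the pre-shared key are placeholders:

    # 1. The specialized GatewaySubnet inside the existing virtual network
    az network vnet subnet create \
      --resource-group myRG --vnet-name myVNet \
      --name GatewaySubnet --address-prefixes 10.0.255.0/27

    # 2. The Local Gateway: describes the on-prem VPN device, most importantly its public IP
    az network local-gateway create \
      --resource-group myRG --name onprem-gateway \
      --gateway-ip-address 203.0.113.10 \
      --local-address-prefixes 192.168.0.0/16

    # 3. The VPN Gateway itself, tied into the GatewaySubnet via its virtual network
    az network public-ip create --resource-group myRG --name vpngw-pip \
      --sku Standard --allocation-method Static
    az network vnet-gateway create \
      --resource-group myRG --name myVpnGateway \
      --vnet myVNet --public-ip-address vpngw-pip \
      --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

    # Finally, the site-to-site connection between the two gateways
    az network vpn-connection create \
      --resource-group myRG --name to-onprem \
      --vnet-gateway1 myVpnGateway --local-gateway2 onprem-gateway \
      --shared-key 'REPLACE_WITH_PRESHARED_KEY'
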
張 旭

Some Principles I Follow in System Architecture (我做系统架构的一些原则) | 酷壳 - CoolShell - 0 views

  • If you cannot state the benefit and are doing technology purely for technology's sake, it is meaningless.
  • Have corresponding solutions ready for both planned and unplanned downtime.
  • Constant, recurring human error.
  • ...35 more annotations...
  • Operations further splits into infrastructure operations and application operations, while development splits into core infrastructure development and business development.
  • Infrastructure operations and development folks mostly care about resource utilization and performance, while application operations and business development care more about what happens at the application and service level.
  • Some systems can no longer be cleanly labelled as infrastructure or application layer. Service governance, for instance, involves both low-level infrastructure technology and cooperation from the business side; the same goes for k8s, which contains low-level pieces such as networking alongside things that need business involvement, like readiness and liveness health checks, or the application's use of a ConfigMap, and so on ...
  • Think of optimizing city traffic: once a city reaches a certain scale, you cannot improve overall performance by tuning a few roads or blocks; you have to plan the functional layout of the whole city to raise overall efficiency.
  • As systems grow more and more complex, cases keep appearing of users migrating their PHP, Python, .NET, or Node.js architectures entirely to a Java + Go architecture.
  • More industrial-grade technology.
  • Use a more mature, more industrial-grade technology stack, not merely the stack you happen to be familiar with.
  • Do not reinvent the wheel, and certainly do not hack apart and "customize" existing wheels.
  • It is completely unnecessary. Not reinventing the wheel and not hacking things apart is not because you lack the skill, but because this is no longer an era in which you do everything yourself.
  • Quite a few companies' architectures are held hostage by the personal preferences, strengths, and experience of the technical lead, instead of technology choices being made from an objective standpoint.
  • Essentially all of China's e-commerce platforms, hundreds of banks, the three major telecom carriers, all the insurance companies, brokerage systems, hospital systems, e-government systems, and so on are built with Java; Java is also the mainstream language at AWS.
  • NoSQL databases all perform poorly at joins.
  • To avoid joins, people start duplicating data, then fail to manage the data-consistency problems the duplication brings, ending up with all kinds of corrupted and lost data.
  • Always use a relational database with full ACID support.
  • Performance problems always have a solution, and the most tools exist for them; compared with the completeness and extensibility of the architecture, they really are not worth worrying about too much.
  • Many companies' systems neither follow industry standards nor form standards of their own; they feel like a disorganized mob.
  • The most typical example is the HTTP status code. The industry standard gives you 200 for success, 3xx for redirects, 4xx for client-side errors, and 5xx for server-side errors; I really do not understand why, success or failure, everyone likes to return 200 and then indicate the error in the body (see the curl sketch after this list).
  • RESTful API conventions. I think they are extremely important; here are the two references I consider best written: PayPal's and Microsoft's.
  • A monitoring system should rather die itself than interfere with the actual application.
  • A company should hold a software version upgrade review at least once a year, then converge on unified, consistent software versions.
  • Architecture and software are not done once written; they need continuous modification and maintenance, and 80% of software cost is spent on maintenance.
  • Use service discovery or a service gateway to reduce the operational complexity brought by service dependencies.
  • Always apply established software design principles, for example: principles like SOLID (see "Some Principles of Software Design"), IoC/DIP, best practices for SOA or Spring Cloud style architectures (see the Service Interface rules in "Steve Yegge's rant about the Amazon and Google platforms"), and practices for distributed system architecture (see "Transaction Processing in Distributed Systems", or Microsoft's "Cloud Design Patterns"), and so on.
  • No automated tests, no good software documentation, no quality code, no standards or conventions.
  • Technical debt incurred in the past all has to be repaid: foundations that were poorly laid must be re-laid, and supporting facilities that were never built must be built. If this infrastructure is not established in a correct, disciplined way, you cannot have a good system.
  • Rather than spending great effort accommodating technical debt, pay the debt off directly.
  • Build a "new district" free of technical debt, and use an "anti-corruption layer" architecture pattern to keep technical debt from leaking into the new district.
  • The day you start making technical decisions purely from your past experience is the day you stop growing.
  • Before making any decision, it is best to spend a little time searching for relevant material online: technical blogs, articles, papers, and so on; also look at how various companies and open-source projects do it; then compare the pros and cons of several options and form your own decision.
  • Beware the X-Y problem: a user wants to solve problem X, believes Y will solve it, and asks me how to do Y; only at the very end do we discover that the best solution to the original problem X is not Y but Z.
  • I like to keep asking "why"; that questioning pulls the client into rethinking the problem together with me.
  • Being aggressive does not mean messing around, nor adopting every new technology in sight; it means actively embracing the new technologies that will change the future.
  • It is not that I skip what I do not like: I study blockchain and Rust just the same, and I know the strengths of these technologies, but I will not use them at scale.
  • Progress always comes from exploration; exploration has a price, but the payoff is greater.
  • Not daring to take risks is the greatest risk, not daring to make mistakes is the greatest mistake, and fear of losing will make you lose far more.
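
Regarding the HTTP status code point above, a small hedged shell sketch (the https://api.example.com/orders/42 endpoint is hypothetical) of why proper status codes matter: callers can branch on the code alone instead of parsing an ad-hoc error field out of a 200 response:

    # Capture the body and the status code separately, then branch on the code.
    status=$(curl -s -o /tmp/resp.json -w '%{http_code}' https://api.example.com/orders/42)
    case "$status" in
      2??) echo "success"; cat /tmp/resp.json ;;
      4??) echo "client error ($status): check the request" >&2 ;;
      5??) echo "server error ($status): safe to alert or retry" >&2 ;;
    esac
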
張 旭

Deploy a registry server | Docker Documentation - 0 views

  • By default, secrets are mounted into a service at /run/secrets/<secret-name>
  • docker secret create
  • If you use a distributed storage driver, such as Amazon S3, you can use a fully replicated service. Each worker can write to the storage back-end without causing write conflicts.
  • ...10 more annotations...
  • You can access the service on port 443 of any swarm node. Docker sends the requests to the node which is running the service.
  • --publish published=443,target=443
  • The most important aspect is that a load balanced cluster of registries must share the same resources
  • S3 or Azure, they should be accessing the same resource and share an identical configuration.
  • you must make sure you are properly sending the X-Forwarded-Proto, X-Forwarded-For, and Host headers to their “client-side” values. Failure to do so usually makes the registry issue redirects to internal hostnames or downgrade from https to http.
  • A properly secured registry should return 401 when the “/v2/” endpoint is hit without credentials
  • registries should always implement access restrictions.
  • REGISTRY_AUTH=htpasswd
  • REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
  • The registry also supports delegated authentication which redirects users to a specific trusted token server. This approach is more complicated to set up, and only makes sense if you need to fully configure ACLs and need more control over the registry’s integration into your global authorization and authentication systems.
張 旭

MetalLB, bare metal load-balancer for Kubernetes - 0 views

  • Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters
  • If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
  • Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services.
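
As a concrete illustration, a minimal layer 2 address pool for MetalLB, assuming a release that is still configured through the config ConfigMap (newer releases use IPAddressPool custom resources instead); the address range is a placeholder for addresses your bare-metal network can actually route:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.240-192.168.1.250
    EOF

With a pool like this in place, Services of type LoadBalancer get an external IP from the range instead of staying "pending".
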
張 旭

Cluster Networking - Kubernetes - 0 views

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • ...57 more annotations...
  • this is the primary focus of this document
    • 張 旭
       
      What Cluster Networking is concerned with: Pod-to-Pod connectivity
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost
  • containers within a Pod must coordinate port usage
  • The “IP-per-pod” model (see the Pod sketch at the end of this list).
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and using the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
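
The Pod sketch referenced above: a hedged illustration of the “IP-per-pod” model, where two containers share one network namespace and reach each other on localhost with no host port mapping (names and image tags are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-netns-demo
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80   # informational only; no host port is requested
      - name: sidecar
        image: curlimages/curl:8.8.0
        # The sidecar reaches the web container over localhost because both
        # containers share the Pod's network namespace and IP address.
        command: ["sh", "-c", "sleep 5; curl -s http://localhost:80; sleep 3600"]
    EOF
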