Larvata / Group items tagged best-practices

What I consider to be RESTful API best practices (我所认为的RESTful API最佳实践) · ScienJus's Blog

  • It is best to leave filtering, pagination, and sorting entirely up to the client, including filter conditions, page number or cursor, page size, sort field, and ascending/descending order; this makes the API more flexible.
  • Agonizing over how strictly to follow the spec only creates needless trouble.
  • The API version number has nothing to do with the client app's version number.
  • Apart from POST, the other three request methods are idempotent (repeating the request has the same effect).
  • PUT can also be used for creation, as long as the resource's id can be determined before it is created.
  • Always use the shortest URL path that can identify the resource.
  • Verbs are discouraged in RESTful URLs.
  • Here I chose PUT rather than POST, because I think an action like "liking" should be idempotent: performing it multiple times should produce the same result.
  • Keeping a token alive is generally implemented in one of two ways: either every user action extends or resets the token's lifetime (similar to how a cache works), or the token's lifetime is fixed but a refresh token is returned along with it, so that an expired token can be refreshed instead of forcing the user to log in again.
  • Typically the client concatenates the request parameters, signs (encrypts) the result, and sends it to the server as a Sign; even if the traffic is intercepted, an attacker who only tampers with the parameters cannot produce the matching Sign and will be caught by the server.
  • Use HTTP status codes wherever possible.
  • data is the payload that actually needs to be returned, and it exists only when the request succeeds; msg is used only in the development environment and only so developers can identify what happened. Client logic may only branch on code, and must never display the contents of msg directly to the user (see the response sketch after this list).
  • JSON is more readable than XML and uses less bandwidth, so avoid XML where possible.
  • After a successful create or update, return the complete representation of the resource.
  • Even if a single client screen needs to display several resources, do not return them all from one endpoint; have the client request the separate endpoints individually.
  • Cursor-based pagination is recommended.
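
A minimal sketch of the response envelope and cursor pagination described above; the resource name, the fields other than code/data/msg, and the query parameters are illustrative, not taken from the article:

GET /articles?cursor=100&count=20&sort=created_at&order=desc

{
  "code": 0,
  "msg": "ok",
  "data": {
    "articles": [
      { "id": 99, "title": "first article" },
      { "id": 98, "title": "second article" }
    ],
    "next_cursor": 98
  }
}

Here data appears only because the request succeeded and carries the cursor for the next page, msg exists only for developers, and the client branches solely on code.
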
Choosing an Executor Type - CircleCI

  • Containers are an instance of the Docker Image you specify and the first image listed in your configuration is the primary container image in which all steps run.
  • In this example, all steps run in the container created by the first image listed under the build job
  • If you experience increases in your run times due to installing additional tools during execution, it is best practice to use the Building Custom Docker Images Documentation to create a custom image with tools that are pre-loaded in the container to meet the job requirements.
  • The machine option runs your jobs in a dedicated, ephemeral VM
  • Using the machine executor gives your application full access to OS resources and provides you with full control over the job environment.
  • Using machine may require additional fees in a future pricing update.
  • Using the macos executor allows you to run your job in a macOS environment on a VM.
  • In a multi-image configuration job, all steps are executed in the container created by the first image listed.
  • All containers run in a common network and every exposed port will be available on localhost from a primary container.
  • If you want to work with private images/registries, please refer to Using Private Images.
  • Docker also has built-in image caching and enables you to build, run, and publish Docker images via Remote Docker.
  • If you require low-level access to the network or need to mount external volumes, consider using machine (see the config sketch after this list).
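
A minimal config sketch of the two executor types, assuming CircleCI 2.x syntax; the image tags and job names are illustrative:

version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.6        # primary container: all steps run here
      - image: circleci/postgres:9.6    # secondary service container, reachable on localhost
    steps:
      - checkout
      - run: bundle exec rake test
  provision:
    machine: true                       # dedicated, ephemeral VM with full access to OS resources
    steps:
      - checkout
      - run: docker build -t myapp .
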
Makefiles - Best practices and suggestions | MDN

  • hardcoded values - avoid them like the plague
  • For classes of hardware (unix/windows) place your makefile in a subdirectory, unix/Makefile.in
  • Initial make call should always be the workhorse: build, generate, deploy, install, etc.
  • All subsequent make calls must become a NOP unless sources or dependencies change or have been removed.
    • 張 旭: NOP is short for "No Operation" (or "No Operation Performed"), i.e. the call does nothing.
  • Do not use directories as a dependency for generated targets, ever.
  • Parallel make: add an explicit timestamp dependency (.done) that make can synchronize threaded calls on to avoid a race condition.
  • Maintain clean targets - makefiles should be able to remove all content that is generated so "make clean" will return the sandbox/directory back to a clean state.
  • Wrap check/unit tests in an ENABLE_TESTS conditional (see the sketch after this list).
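
A minimal sketch of these patterns with a hypothetical code-generation step (the codegen and run-tests commands and schema.json are made up): the workhorse all target, a .done timestamp file instead of a directory dependency so parallel and repeated runs stay safe, an ENABLE_TESTS conditional, and a clean target that restores the directory. Recipe lines must start with a tab.

GEN_SRC := schema.json
GEN_OUT := generated/parser.c

all: generated/.done          # workhorse target; a NOP unless sources changed

# Depend on a timestamp file, never on the generated/ directory itself,
# so parallel make invocations synchronize on a single rule.
generated/.done: $(GEN_SRC)
	mkdir -p generated
	./codegen $(GEN_SRC) -o $(GEN_OUT)
	touch $@

ifdef ENABLE_TESTS
check: generated/.done
	./run-tests
endif

clean:                        # return the directory to a clean state
	rm -rf generated

.PHONY: all check clean
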
Rails API Testing Best Practices

  • Writing an API is almost a given with modern web applications
  • A properly designed API should return two things: an HTTP response status-code and the response body.
  • Testing the status-code is necessary
  • testing the response body should just verify that the application is sending the right content.
  • Unauthorized
  • Forbidden
  • Your test should also ensure that any desired business logic gets completed as expected.
  • Request specs provide a thin wrapper around Rails’ integration tests, and are designed to drive behavior through the full stack
  • we’ll be doing json = JSON.parse(response.body) a lot. This should be a helper method (see the spec sketch after this list).
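
A sketch of what such a request spec and the shared json helper might look like; the model, route, and module names are illustrative, and authentication is left out:

# spec/support/json_helpers.rb
module JsonHelpers
  # Parse the response body once per example instead of repeating JSON.parse everywhere.
  def json
    @json ||= JSON.parse(response.body)
  end
end

RSpec.configure { |config| config.include JsonHelpers, type: :request }

# spec/requests/articles_spec.rb
require "rails_helper"

RSpec.describe "Articles API", type: :request do
  describe "GET /articles/:id" do
    let(:article) { Article.create!(title: "Hello") }

    it "returns the article with a 200 status" do
      get "/articles/#{article.id}", headers: { "ACCEPT" => "application/json" }

      expect(response).to have_http_status(:ok)   # status-code check
      expect(json["title"]).to eq("Hello")        # response-body content check
    end
  end
end
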
Pre-Built CircleCI Docker Images - CircleCI

  • typically extensions of official Docker images and include tools especially useful for CI/CD.
  • Convenience images are based on the most recently built versions of upstream images, so it is best practice to use the most specific image possible.
  • add -jessie or -stretch to the end of each of those containers to ensure you’re only using that version of the Debian base OS.
  • language images
  • service images
  • All images add a circleci user as a system user
  • A language image should be listed first under the docker key in your configuration, making it the primary container during execution (see the config sketch after this list).
  • For example, if you want to add browsers to the circleci/golang:1.9 image, use the circleci/golang:1.9-browsers image.
  • Service images are convenience images for services like databases
  • should be listed after language images so they become secondary service containers.
  • To speed up builds using RAM volume, add the -ram suffix to the end of a service image tag
  • All convenience images have been extended with additional tools.
  • all images include the following packages, installed via apt-get
  • Most CircleCI convenience images are Debian Jessie- or Stretch-based images, however some extend Ubuntu-based images.
  • The following packages are installed via curl
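
A config sketch of the ordering and tag conventions above; the exact image tags are illustrative:

version: 2
jobs:
  build:
    docker:
      # Language (primary) image listed first; the -browsers variant adds browser tooling.
      - image: circleci/golang:1.9-browsers
      # Service (secondary) image listed after it; the -ram suffix keeps its data in RAM for speed.
      - image: circleci/postgres:9.6-ram
    steps:
      - checkout
      - run: go test ./...
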
phusion/passenger-docker: Docker base images for Ruby, Python, Node.js and Meteor web apps

  • Ubuntu 20.04 LTS as base system
  • Ruby 2.7.5 is configured as the default.
  • Python 3.8
  • A build system, git, and development headers for many popular libraries, so that the most popular Ruby, Python and Node.js native extensions can be compiled without problems.
  • Nginx 1.18. Disabled by default
  • production-grade features, such as process monitoring, administration and status inspection.
  • Redis 5.0. Not installed by default.
  • The image has an app user with UID 9999 and home directory /home/app. Your application is supposed to run as this user.
  • running applications without root privileges is good security practice.
  • Your application should be placed inside /home/app (see the Dockerfile sketch after this list).
  • COPY --chown=app:app
  • Passenger works like a mod_ruby, mod_nodejs, etc. It changes Nginx into an application server and runs your app from Nginx.
  • placing a .conf file in the directory /etc/nginx/sites-enabled
  • The best way to configure Nginx is by adding .conf files to /etc/nginx/main.d and /etc/nginx/conf.d
  • files in conf.d are included in the Nginx configuration's http context.
  • any environment variables you set with docker run -e, Docker linking and /etc/container_environment, won't reach Nginx.
  • To preserve these variables, place an Nginx config file ending with *.conf in the directory /etc/nginx/main.d, in which you tell Nginx to preserve these variables.
  • By default, Phusion Passenger sets all of the following environment variables to the value production
  • Setting these environment variables yourself (e.g. using docker run -e RAILS_ENV=...) will not have any effect, because Phusion Passenger overrides all of these environment variables.
  • PASSENGER_APP_ENV environment variable
  • passenger-docker autogenerates an Nginx configuration file (/etc/nginx/conf.d/00_app_env.conf) during container boot.
  • The configuration file is in /etc/redis/redis.conf. Modify it as you see fit, but make sure daemonize no is set.
  • You can add additional daemons to the image by creating runit entries.
  • The shell script must be called run, must be executable
  • the shell script must run the daemon without letting it daemonize/fork it.
  • We use RVM to install and to manage Ruby interpreters.
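
Pulling these pieces together, a Dockerfile sketch on top of this base image might look like the following; the image tag and the copied files (webapp.conf, env.conf, my-daemon.sh) are illustrative:

FROM phusion/passenger-ruby27:2.0.0

# Nginx and Passenger are disabled by default; removing the "down" file enables the service.
RUN rm -f /etc/service/nginx/down

# Replace the default vhost with our own site config under sites-enabled.
RUN rm /etc/nginx/sites-enabled/default
COPY webapp.conf /etc/nginx/sites-enabled/webapp.conf

# Tell Nginx to preserve selected environment variables (main.d is included at the main level,
# conf.d inside the http context).
COPY env.conf /etc/nginx/main.d/env.conf

# The app runs as the unprivileged app user (UID 9999) from /home/app.
COPY --chown=app:app . /home/app/webapp

# Optional extra daemon as a runit entry: the script must be named run, be executable,
# and must not let the daemon fork into the background.
COPY my-daemon.sh /etc/service/my-daemon/run
RUN chmod +x /etc/service/my-daemon/run

CMD ["/sbin/my_init"]
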
Cluster Networking - Kubernetes

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • this is the primary focus of this document
    • 張 旭: what cluster networking is concerned with here is Pod-to-Pod connectivity.
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost (see the Pod sketch after this list)
  • containers within a Pod must coordinate port usage
  • “IP-per-pod” model.
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and uses the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
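
A small illustration of the IP-per-pod model; the image names are illustrative. The two containers below share one network namespace (and therefore one IP), so the sidecar reaches the web server on localhost, and other Pods reach it via the Pod's own IP without NAT:

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
    - name: web
      image: nginx:1.21              # listens on port 80 inside the Pod's network namespace
      ports:
        - containerPort: 80
    - name: sidecar
      image: curlimages/curl:7.85.0
      # Same Pod, so same IP and network namespace: localhost reaches the web container.
      command: ["sh", "-c", "while true; do curl -s http://localhost:80/ >/dev/null; sleep 10; done"]
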
Considerations for large clusters | Kubernetes

  • A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane.
  • Kubernetes v1.23 supports clusters with up to 5000 nodes.
  • criteria: no more than 110 pods per node, no more than 5000 nodes, no more than 150000 total pods, and no more than 300000 total containers
  • In-use IP addresses
  • run one or two control plane instances per failure zone, scaling those instances vertically first and then scaling horizontally after reaching the point of diminishing returns on (vertical) scaling.
  • Kubernetes nodes do not automatically steer traffic towards control-plane endpoints that are in the same failure zone
  • store Event objects in a separate dedicated etcd instance.
  • start and configure additional etcd instance
  • Kubernetes resource limits help to minimize the impact of memory leaks and other ways that pods and containers can impact on other components.
  • Addons' default limits are typically based on data collected from experience running each addon on small or medium Kubernetes clusters.
  • When running on large clusters, addons often consume more of some resources than their default limits.
  • Many addons scale horizontally - you add capacity by running more pods
  • The VerticalPodAutoscaler can run in recommender mode to provide suggested figures for requests and limits (see the sketch after this list).
  • Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator.
  • VerticalPodAutoscaler is a custom resource that you can deploy into your cluster to help you manage resource requests and limits for pods.
  • The cluster autoscaler integrates with a number of cloud providers to help you run the right number of nodes for the level of resource demand in your cluster.
  • The addon resizer helps you in resizing the addons automatically as your cluster's scale changes.
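
A sketch of running the VerticalPodAutoscaler in recommender mode against a hypothetical addon Deployment, so that it only suggests requests and limits without evicting or resizing pods; the names are illustrative:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: metrics-agent-vpa
  namespace: kube-system
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: metrics-agent              # hypothetical addon deployment
  updatePolicy:
    updateMode: "Off"                # recommendations only; nothing is applied automatically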