
Larvata / Group items tagged: application


張 旭

Auto DevOps | GitLab - 0 views

  • Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, deploy, and monitor your applications
  • Just push your code and GitLab takes care of everything else.
  • Auto DevOps will be automatically disabled on the first pipeline failure.
  • Your project will continue to use an alternative CI/CD configuration file if one is found
  • Auto DevOps works with any Kubernetes cluster;
  • using the Docker or Kubernetes executor, with privileged mode enabled.
  • Base domain (needed for Auto Review Apps and Auto Deploy)
  • Kubernetes (needed for Auto Review Apps, Auto Deploy, and Auto Monitoring)
  • Prometheus (needed for Auto Monitoring)
  • scrape your Kubernetes cluster.
  • project level as a variable: KUBE_INGRESS_BASE_DOMAIN
  • A wildcard DNS A record matching the base domain(s) is required
  • Once set up, all requests will hit the load balancer, which in turn will route them to the Kubernetes pods that run your application(s).
  • review/ (every environment starting with review/)
  • staging
  • production
  • need to define a separate KUBE_INGRESS_BASE_DOMAIN variable for all the above based on the environment.
  • Continuous deployment to production: Enables Auto Deploy with master branch directly deployed to production.
  • Continuous deployment to production using timed incremental rollout
  • Automatic deployment to staging, manual deployment to production
  • Auto Build creates a build of the application using an existing Dockerfile or Heroku buildpacks.
  • If a project’s repository contains a Dockerfile, Auto Build will use docker build to create a Docker image.
  • Each buildpack requires certain files to be in your project’s repository for Auto Build to successfully build your application.
  • Auto Test automatically runs the appropriate tests for your application using Herokuish and Heroku buildpacks by analyzing your project to detect the language and framework.
  • Auto Code Quality uses the Code Quality image to run static analysis and other code checks on the current code.
  • Static Application Security Testing (SAST) uses the SAST Docker image to run static analysis on the current code and checks for potential security issues.
  • Dependency Scanning uses the Dependency Scanning Docker image to run analysis on the project dependencies and checks for potential security issues.
  • License Management uses the License Management Docker image to search the project dependencies for their license.
  • Vulnerability Static Analysis for containers uses Clair to run static analysis on a Docker image and checks for potential security issues.
  • Review Apps are temporary application environments based on the branch’s code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster is available, no deployment will occur.
  • The Review App will have a unique URL based on the project ID, the branch or tag name, and a unique number, combined with the Auto DevOps base domain.
  • Review apps are deployed using the auto-deploy-app chart with Helm, which can be customized.
  • Your apps should not be manipulated outside of Helm (using Kubernetes directly).
  • Dynamic Application Security Testing (DAST) uses the popular open source tool OWASP ZAProxy to perform an analysis on the current code and checks for potential security issues.
  • Auto Browser Performance Testing utilizes the Sitespeed.io container to measure the performance of a web page.
  • add the paths to a file named .gitlab-urls.txt in the root directory, one per line.
  • After a branch or merge request is merged into the project’s default branch (usually master), Auto Deploy deploys the application to a production environment in the Kubernetes cluster, with a namespace based on the project name and unique project ID
  • Auto Deploy doesn’t include deployments to staging or canary by default, but the Auto DevOps template contains job definitions for these tasks if you want to enable them.
  • Apps are deployed using the auto-deploy-app chart with Helm.
  • For internal and private projects, a GitLab Deploy Token will be automatically created when Auto DevOps is enabled and the Auto DevOps settings are saved.
  • If the GitLab Deploy Token cannot be found, CI_REGISTRY_PASSWORD is used. Note that CI_REGISTRY_PASSWORD is only valid during deployment.
  • If present, DB_INITIALIZE will be run as a shell command within an application pod as a helm post-install hook.
  • a post-install hook means that if any deploy succeeds, DB_INITIALIZE will not be processed thereafter.
  • DB_MIGRATE will be run as a shell command within an application pod as a helm pre-upgrade hook.
    • 張 旭
       
      If the project type is different, you have to check how the buildpacks invoke that command (for example, Laravel's migrations).
    • 張 旭
       
      If the image is built from your own Dockerfile, it looks like you don't need to care about how the buildpacks do it.
  • Once your application is deployed, Auto Monitoring makes it possible to monitor your application’s server and response metrics right out of the box.
  • annotate the NGINX Ingress deployment to be scraped by Prometheus using prometheus.io/scrape: "true" and prometheus.io/port: "10254"
  • If you are also using Auto Review Apps and Auto Deploy and choose to provide your own Dockerfile, make sure you expose your application to port 5000 as this is the port assumed by the default Helm chart.
  • While Auto DevOps provides great defaults to get you started, you can customize almost everything to fit your needs; from custom buildpacks, to Dockerfiles, Helm charts, or even copying the complete CI/CD configuration into your project to enable staging and canary deployments, and more.
  • If your project has a Dockerfile in the root of the project repo, Auto DevOps will build a Docker image based on the Dockerfile rather than using buildpacks.
  • Auto DevOps uses Helm to deploy your application to Kubernetes.
  • Bundled chart - If your project has a ./chart directory with a Chart.yaml file in it, Auto DevOps will detect the chart and use it instead of the default one.
  • Create a project variable AUTO_DEVOPS_CHART with the URL of a custom chart to use or create two project variables AUTO_DEVOPS_CHART_REPOSITORY with the URL of a custom chart repository and AUTO_DEVOPS_CHART with the path to the chart.
  • make use of the HELM_UPGRADE_EXTRA_ARGS environment variable to override the default values in the values.yaml file in the default Helm chart.
  • specify the use of a custom Helm chart per environment by scoping the environment variable to the desired environment.
    • 張 旭
       
      Auto DevOps is basically a ready-made .gitlab-ci.yml that has already been written for you.
  • Your additions will be merged with the Auto DevOps template using the behaviour described for include
  • copy and paste the contents of the Auto DevOps template into your project and edit this as needed.
  • In order to support applications that require a database, PostgreSQL is provisioned by default.
  • Set up the replica variables using a project variable and scale your application by just redeploying it!
  • You should not scale your application using Kubernetes directly.
  • Some applications need to define secret variables that are accessible by the deployed application.
  • Auto DevOps detects variables where the key starts with K8S_SECRET_ and makes these prefixed variables available to the deployed application as environment variables.
  • Auto DevOps pipelines will take your application secret variables to populate a Kubernetes secret.
  • Environment variables are generally considered immutable in a Kubernetes pod.
  • if you update an application secret without changing any code and then manually create a new pipeline, you will find that any running application pods will not have the updated secrets.
  • Variables with multiline values are not currently supported
  • The normal behavior of Auto DevOps is to use Continuous Deployment, pushing automatically to the production environment every time a new pipeline is run on the default branch.
  • If STAGING_ENABLED is defined in your project (e.g., set STAGING_ENABLED to 1 as a CI/CD variable), then the application will be automatically deployed to a staging environment, and a production_manual job will be created for you when you’re ready to manually deploy to production.
  • If CANARY_ENABLED is defined in your project (e.g., set CANARY_ENABLED to 1 as a CI/CD variable) then two manual jobs will be created: canary which will deploy the application to the canary environment production_manual which is to be used by you when you’re ready to manually deploy to production.
  • If INCREMENTAL_ROLLOUT_MODE is set to manual in your project, then instead of the standard production job, 4 different manual jobs will be created: rollout 10% rollout 25% rollout 50% rollout 100%
  • The percentage is based on the REPLICAS variable and defines the number of pods you want to have for your deployment.
  • To start a job, click on the play icon next to the job’s name.
  • Once you get to 100%, you cannot scale down, and you’d have to roll back by redeploying the old version using the rollback button in the environment page.
  • With INCREMENTAL_ROLLOUT_MODE set to manual and with STAGING_ENABLED
  • not all buildpacks support Auto Test yet
  • When a project has been marked as private, GitLab’s Container Registry requires authentication when downloading containers.
  • Authentication credentials will be valid while the pipeline is running, allowing for a successful initial deployment.
  • After the pipeline completes, Kubernetes will no longer be able to access the Container Registry.
  • We strongly advise using GitLab Container Registry with Auto DevOps in order to simplify configuration and prevent any unforeseen issues.
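
  A minimal, hypothetical .gitlab-ci.yml that reuses the Auto DevOps template and sets a few of the variables highlighted above; the domain, commands, and secret name are illustrative assumptions, and real secrets would normally be stored as project-level CI/CD variables rather than committed:

      include:
        - template: Auto-DevOps.gitlab-ci.yml   # pulls in the pre-defined Auto DevOps pipeline

      variables:
        KUBE_INGRESS_BASE_DOMAIN: "apps.example.com"   # wildcard DNS A record must point at the cluster's load balancer
        STAGING_ENABLED: "1"                           # deploy to staging automatically; production becomes a manual job
        DB_INITIALIZE: "bundle exec rake db:setup"     # run once as a helm post-install hook
        DB_MIGRATE: "bundle exec rake db:migrate"      # run as a helm pre-upgrade hook on every deploy
        K8S_SECRET_SMTP_PASSWORD: "change-me"          # exposed to the deployed app as the SMTP_PASSWORD env var
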
張 旭

The Asset Pipeline - Ruby on Rails Guides - 0 views

  • provides a framework to concatenate and minify or compress JavaScript and CSS assets
  • adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB
  • invalidate the cache by altering this fingerprint
  • Rails 4 automatically adds the sass-rails, coffee-rails and uglifier gems to your Gemfile
  • reduce the number of requests that a browser makes to render a web page
  • Starting with version 3.1, Rails defaults to concatenating all JavaScript files into one master .js file and all CSS files into one master .css file
  • In production, Rails inserts an MD5 fingerprint into each filename so that the file is cached by the web browser
  • The technique sprockets uses for fingerprinting is to insert a hash of the content into the name, usually at the end.
  • asset minification or compression
  • The sass-rails gem is automatically used for CSS compression if included in Gemfile and no config.assets.css_compressor option is set.
  • Supported languages include Sass for CSS, CoffeeScript for JavaScript, and ERB for both by default.
  • When a filename is unique and based on its content, HTTP headers can be set to encourage caches everywhere (whether at CDNs, at ISPs, in networking equipment, or in web browsers) to keep their own copy of the content
  • asset pipeline is technically no longer a core feature of Rails 4
  • Rails uses for fingerprinting is to insert a hash of the content into the name, usually at the end
  • With the asset pipeline, the preferred location for these assets is now the app/assets directory.
  • Fingerprinting is enabled by default for production and disabled for all other environments
  • The files in app/assets are never served directly in production.
  • Paths are traversed in the order that they occur in the search path
  • You should use app/assets for files that must undergo some pre-processing before they are served.
  • By default .coffee and .scss files will not be precompiled on their own
  • app/assets is for assets that are owned by the application, such as custom images, JavaScript files or stylesheets.
  • lib/assets is for your own libraries' code that doesn't really fit into the scope of the application or those libraries which are shared across applications.
  • vendor/assets is for assets that are owned by outside entities, such as code for JavaScript plugins and CSS frameworks.
  • Any path under assets/* will be searched
  • By default these files will be ready to use by your application immediately using the require_tree directive.
  • By default, this means the files in app/assets take precedence, and will mask corresponding paths in lib and vendor
  • Sprockets uses files named index (with the relevant extensions) for a special purpose
  • Rails.application.config.assets.paths
  • causes turbolinks to check if an asset has been updated and if so loads it into the page
  • if you add an erb extension to a CSS asset (for example, application.css.erb), then helpers like asset_path are available in your CSS rules
  • If you add an erb extension to a JavaScript asset, making it something such as application.js.erb, then you can use the asset_path helper in your JavaScript code
  • The asset pipeline automatically evaluates ERB
  • data URI — a method of embedding the image data directly into the CSS file — you can use the asset_data_uri helper.
  • Sprockets will also look through the paths specified in config.assets.paths, which includes the standard application paths and any paths added by Rails engines.
  • image_tag
  • the closing tag cannot be of the style -%>
  • asset_data_uri
  • app/assets/javascripts/application.js
  • sass-rails provides -url and -path helpers (hyphenated in Sass, underscored in Ruby) for the following asset classes: image, font, video, audio, JavaScript and stylesheet.
  • Rails.application.config.assets.compress
  • In JavaScript files, the directives begin with //=
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory into the output.
  • manifest files contain directives — instructions that tell Sprockets which files to require in order to build a single CSS or JavaScript file.
  • You should not rely on any particular order among those
  • Sprockets uses manifest files to determine which assets to include and serve.
  • the family of require directives prevents files from being included twice in the output
  • which files to require in order to build a single CSS or JavaScript file
  • Directives are processed top to bottom, but the order in which files are included by require_tree is unspecified.
  • In JavaScript files, Sprockets directives begin with //=
  • If require_self is called more than once, only the last call is respected.
  • require directive is used to tell Sprockets the files you wish to require.
  • You need not supply the extensions explicitly. Sprockets assumes you are requiring a .js file when done from within a .js file
  • paths must be specified relative to the manifest file
  • require_directory
  • Rails 4 creates both app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option is used when creating a new rails application.
  • The file extensions used on an asset determine what preprocessing is applied.
  • app/assets/stylesheets/application.css
  • Additional layers of preprocessing can be requested by adding other extensions, where each extension is processed in a right-to-left manner
  • require_self
  • use the Sass @import rule instead of these Sprockets directives.
  • Keep in mind that the order of these preprocessors is important
  • In development mode, assets are served as separate files in the order they are specified in the manifest file.
  • when these files are requested they are processed by the processors provided by the coffee-script and sass gems and then sent back to the browser as JavaScript and CSS respectively.
  • css.scss.erb
  • js.coffee.erb
  • Keep in mind the order of these preprocessors is important.
  • By default Rails assumes that assets have been precompiled and will be served as static assets by your web server
  • with the Asset Pipeline the :cache and :concat options aren't used anymore
  • Assets are compiled and cached on the first request after the server is started
  • RAILS_ENV=production bundle exec rake assets:precompile
  • Debug mode can also be enabled in Rails helper methods
  • If you set config.assets.initialize_on_precompile to false, be sure to test rake assets:precompile locally before deploying
  • By default Rails assumes assets have been precompiled and will be served as static assets by your web server.
  • a rake task to compile the asset manifests and other files in the pipeline
  • RAILS_ENV=production bin/rake assets:precompile
  • a recipe to handle this in deployment
  • links the folder specified in config.assets.prefix to shared/assets
  • config/initializers/assets.rb
  • The initialize_on_precompile change tells the precompile task to run without invoking Rails
  • The X-Sendfile header is a directive to the web server to ignore the response from the application, and instead serve a specified file from disk
  • the jquery-rails gem which comes with Rails as the standard JavaScript library gem.
  • Possible options for JavaScript compression are :closure, :uglifier and :yui
  • concatenate assets
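
  The production workflow in the highlights above comes down to running the precompile task before the application is served; a hypothetical CI job sketch doing exactly that (job name, Ruby image, and cache paths are assumptions):

      precompile_assets:
        image: ruby:2.6
        stage: build
        cache:
          paths:
            - vendor/bundle
        script:
          - bundle install --deployment --path vendor/bundle
          - RAILS_ENV=production bundle exec rake assets:precompile
        artifacts:
          paths:
            - public/assets   # fingerprinted files written by Sprockets
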
張 旭

Serverless Architectures - 0 views

  • Serverless was first used to describe applications that significantly or fully depend on 3rd party applications / services (‘in the cloud’) to manage server-side logic and state.
  • ‘rich client’ applications (think single page web apps, or mobile apps) that use the vast ecosystem of cloud accessible databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito), etc.
  • ‘(Mobile) Backend as a Service’
  • Serverless can also mean applications where some amount of server-side logic is still written by the application developer but unlike traditional architectures is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party.
  • ‘Functions as a Service’
  • AWS Lambda is one of the most popular implementations of FaaS at present,
  • A good example is Auth0 - they started initially with BaaS ‘Authentication as a Service’, but with Auth0 Webtask they are entering the FaaS space.
  • a typical ecommerce app
  • a backend data-processing service
  • with zero administration.
  • FaaS offerings do not require coding to a specific framework or library.
  • Horizontal scaling is completely automatic, elastic, and managed by the provider
  • Functions in FaaS are triggered by event types defined by the provider.
  • a FaaS-supported message broker
  • from a deployment-unit point of view FaaS functions are stateless.
  • allowed the client direct access to a subset of our database
  • deleted the authentication logic in the original application and have replaced it with a third party BaaS service
  • The client is in fact well on its way to becoming a Single Page Application.
  • implement a FaaS function that responds to http requests via an API Gateway
  • port the search code from the Pet Store server to the Pet Store Search function
  • replaced a long lived consumer application with a FaaS function that runs within the event driven context
  • server applications - is a key difference when comparing with other modern architectural trends like containers and PaaS
  • the only code that needs to change when moving to FaaS is the ‘main method / startup’ code, in that it is deleted, and likely the specific code that is the top-level message handler (the ‘message listener interface’ implementation), but this might only be a change in method signature
  • With FaaS you need to write the function ahead of time to assume parallelism
  • Most providers also allow functions to be triggered as a response to inbound http requests, typically in some kind of API gateway
  • you should assume that for any given invocation of a function none of the in-process or host state that you create will be available to any subsequent invocation.
  • FaaS functions are either naturally stateless
  • store state across requests or for further input to handle a request.
  • certain classes of long lived task are not suited to FaaS functions without re-architecture
  • if you were writing a low-latency trading application you probably wouldn’t want to use FaaS systems at this time
  • An API Gateway is an HTTP server where routes / endpoints are defined in configuration and each route is associated with a FaaS function.
  • API Gateway will allow mapping from http request parameters to inputs arguments for the FaaS function
  • API Gateways may also perform authentication, input validation, response code mapping, etc.
  • the Serverless Framework makes working with API Gateway + Lambda significantly easier than using the first principles provided by AWS.
  • Apex - a project to ‘Build, deploy, and manage AWS Lambda functions with ease.'
  • 'Serverless' to mean the union of a couple of other ideas - 'Backend as a Service' and 'Functions as a Service'.
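
  A sketch of how a single FaaS function behind an API Gateway route might be declared with the Serverless Framework mentioned above; the service name, runtime, and handler path are assumptions:

      # serverless.yml (hypothetical)
      service: pet-store

      provider:
        name: aws
        runtime: nodejs18.x

      functions:
        search:
          handler: handler.search      # e.g. the search code ported from the Pet Store server
          events:
            - http:                    # API Gateway maps this route to the function
                path: search
                method: get
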
張 旭

Flynn: first preview release | Hacker News - 0 views

  • Etcd and Zookeeper provide essentially the same functionality. They are both strongly consistent key/value stores that support notifications to clients of changes. These two projects are limited to service discovery
  • So lets say you had a client application that would talk to a node application that could be on any number of servers. What you could do is hard code that list into your application and randomly select one, in order to "fake" load balancing. However every time a machine went up or down you would have to update that list.
  • What Consul provides is you just tell your app to connect to "mynodeapp.consul" and then consul will give you the proper address of one of your node apps.
  • Consul and Skydock are both applications that build on top of a tool like Zookeeper and Etcd.
  • What a developer ideally wants to do is just push code and not have to worry about what servers are running what, and worry about failover and the like
  • What Flynn provides (if I get it) is a DIY, Heroku-like platform
  • Raft is a consensus algorithm for keeping a set of distributed state machines in a consistent state.
  • a self hosted Heroku
  • Google Omega is Google's answer to Apache Mesos
  • Omega would need a service like Raft to understand what services are currently available
  • Another project that I believe may be similar to Flynn is Apache Mesos.
  • I want to use Docker, but it has no easy way to say "take this file that contains instructions and make everything". You can write Dockerfiles, but you can only use one part of the stack in them, otherwise you run into trouble.
  •  
    " So lets say you had a client application that would talk to a node application that could be on any number of servers. What you could do is hard code that list into your application and randomly select one, in order to "fake" load balancing. However every time a machine went up or down you would have to update that list. What Consul provides is you just tell your app to connect to "mynodeapp.consul" and then consul will give you the proper address of one of your node apps."
張 旭

Asset Pipeline - Ruby on Rails 指南 - 0 views

  • Manifest files or helper methods
    • 張 旭
       
      The "manifest files" here are application.css and application.js
  • Sprockets searches the paths in the order they appear in the search path. By default, this means assets under app/assets take precedence and will mask corresponding files under lib and vendor.
  • If an asset is not referenced from a manifest file, it has to be added to the precompile list; otherwise it will not be accessible in production.
  • If the application uses a jQuery library with many modules, all stored under lib/assets/javascripts/library_name, then lib/assets/javascripts/library_name/index.js acts as the manifest for that library. The manifest can list the required files in order, or simply use the require_tree directive.
  • If you use Turbolinks (enabled by default in Rails 4), adding the data-turbolinks-track option makes Turbolinks check whether an asset has been updated and, if so, load it into the page.
  • config.assets.paths contains the standard paths plus any paths added by other Rails engines.
  • Linking to an asset that does not exist (including linking to an empty string) raises an exception on the calling page.
  • The closing tag cannot use the -%> form.
  • Sprockets uses the manifest files to decide which assets to include and serve.
  • In JavaScript files, Sprockets directives begin with //=. The file above uses the require and require_tree directives.
  • app/assets/javascripts/application.js
  • The require_tree directive tells Sprockets to recursively include all JavaScript files in the specified directory. The directory path must be relative to the manifest file. You can also use the require_directory directive to load all JavaScript files in a given directory without recursing.
  • Sprockets processes directives from top to bottom, but the order of files included by require_tree is unspecified; do not assume you will get any particular order.
  • app/assets/stylesheets/application.css
  • Rails 4 generates app/assets/javascripts/application.js and app/assets/stylesheets/application.css regardless of whether the --skip-sprockets option was passed when the application was created.
  • If require_self is called more than once, only the last call takes effect.
  • If you want to use multiple Sass files, use the Sass @import rule rather than Sprockets directives.
  • There can be more than one manifest file.
  • With the default gems, generating a controller or scaffold produces CoffeeScript and SCSS files instead of plain JavaScript and CSS files.
  • In development, or when the Asset Pipeline is disabled, these files are processed by the preprocessors provided by the coffee-script and sass gems and then sent to the browser.
  • When the Asset Pipeline is enabled, these files are preprocessed first, saved into the public/assets directory, and then served by the Rails application or the web server.
  • Adding extra extensions adds further rounds of preprocessing; the preprocessors handle the file in right-to-left order of the extensions, so the order of the extensions must match the order of processing.
  • The order in which the preprocessors run is important.
  • Compression can also be enabled in development to check that everything still works; just disable it again when you need to debug.
  • By default, Rails assumes assets have been precompiled and will be served directly by the web server.
  • In general, do not change the default value of config.assets.digest.
  • Assets can be compiled at deployment time.
  • It is important to share this directory across deployments, so that as long as a cached page is usable, the compiled assets it references keep working.
  • The files compiled by default include application.js, application.css, and all non-JS/CSS files under the app/assets directories of gems (all images are picked up automatically).
  • To compile other manifests, or individual stylesheets and JavaScript files, add them to the precompile option in config/application.rb.
  • Set it to compile all assets.
  • A manifest-md5hash.json file lists all assets and their corresponding fingerprints.
  • Set the Expires header far into the future.
  • After precompiling locally, the compiled files can be checked into version control and deployed in the usual way.
  • Live compilation uses more memory and performs worse than the default compilation approach, so it is not recommended.
  • If you distribute assets through a CDN, make sure the files are not cached, because caching causes problems here. If config.action_controller.perform_caching = true is set, Rack::Cache will use Rails.cache to store static files and the cache space will quickly run out.
  • The public path Sprockets uses by default is /assets.
  • The X-Sendfile header tells the web server to ignore the application's response and instead serve the specified file directly from disk.
  • The jquery-rails gem, which provides the standard JavaScript library for Rails, is a good example. This gem contains an engine class that inherits from Rails::Engine. With this inheritance in place, Rails knows the gem may contain asset files, and it adds the engine's app/assets, lib/assets, and vendor/assets directories to Sprockets' search path.
張 旭

Best practices for building Kubernetes Operators and stateful apps | Google Cloud Blog - 0 views

  • use the StatefulSet workload controller to maintain identity for each of the pods, and to use Persistent Volumes to persist data so it can survive a service restart.
  • a way to extend Kubernetes functionality with application specific logic using custom resources and custom controllers.
  • An Operator can automate various features of an application, but it should be specific to a single application
  • Kubebuilder is a comprehensive development kit for building and publishing Kubernetes APIs and Controllers using CRDs
  • Design declarative APIs for operators, not imperative APIs. This aligns well with Kubernetes APIs that are declarative in nature.
  • With declarative APIs, users only need to express their desired cluster state, while letting the operator perform all necessary steps to achieve it.
  • scaling, backup, restore, and monitoring. An operator should be made up of multiple controllers that specifically handle each of those features.
  • the operator can have a main controller to spawn and manage application instances, a backup controller to handle backup operations, and a restore controller to handle restore operations.
  • each controller should correspond to a specific CRD so that the domain of each controller's responsibility is clear.
  • If you keep a log for every container, you will likely end up with an unmanageable amount of logs.
  • integrate application-specific details into the log messages, such as adding a prefix for the application name.
  • you may have to use external logging tools such as Google Stackdriver, Elasticsearch, Fluentd, or Kibana to perform the aggregations.
  • adding labels to metrics to facilitate aggregation and analysis by monitoring systems.
  • a more viable option is for application pods to expose a metrics HTTP endpoint for monitoring tools to scrape.
  • A good way to achieve this is to use open-source application-specific exporters for exposing Prometheus-style metrics.
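
  A hypothetical CustomResourceDefinition sketch for the backup controller described above; the group, kind, and spec fields are illustrative, the point being one declarative CRD per controller responsibility:

      apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      metadata:
        name: backups.myapp.example.com
      spec:
        group: myapp.example.com
        scope: Namespaced
        names:
          kind: Backup
          singular: backup
          plural: backups
        versions:
          - name: v1alpha1
            served: true
            storage: true
            schema:
              openAPIV3Schema:
                type: object
                properties:
                  spec:
                    type: object
                    properties:
                      schedule:
                        type: string      # desired state only: when to back up, not how
                      destination:
                        type: string      # e.g. an object-storage bucket URI
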
張 旭

Intro to deployment strategies: blue-green, canary, and more - DEV Community - 0 views

  • using a service-oriented architecture and microservices approach, developers can design a code base to be modular.
  • Modern applications are often distributed and cloud-based
  • different release cycles for different components
  • the abstraction of the infrastructure layer, which is now considered code. Deployment of a new application may require the deployment of new infrastructure code as well.
  • "big bang" deployments update whole or large parts of an application in one fell swoop.
  • Big bang deployments required the business to conduct extensive development and testing before release, often associated with the "waterfall model" of large sequential releases.
  • Rollbacks are often costly, time-consuming, or even impossible.
  • In a rolling deployment, an application’s new version gradually replaces the old one.
  • new and old versions will coexist without affecting functionality or user experience.
  • Each container is modified to download the latest image from the app vendor’s site.
  • two identical production environments work in parallel.
  • Once the testing results are successful, application traffic is routed from blue to green.
  • In a blue-green deployment, both systems use the same persistence layer or database back end.
  • The blue environment can use the primary database for write operations while green uses the secondary for read operations.
  • Blue-green deployments rely on traffic routing.
  • long TTL values can delay these changes.
  • The main challenge of canary deployment is to devise a way to route some users to the new application.
  • Using application logic to unlock new features for specific users and groups.
  • With CD, the CI-built code artifact is packaged and always ready to be deployed in one or more environments.
  • Use Build Automation tools to automate environment builds
  • Use configuration management tools
  • Enable automated rollbacks for deployments
  • An application performance monitoring (APM) tool can help your team monitor critical performance metrics including server response times after deployments.
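
  As a concrete illustration of the rolling deployment described above (the article itself shows no manifests), a Kubernetes Deployment can replace pods gradually; the names, counts, and image are assumptions:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web
      spec:
        replicas: 4
        selector:
          matchLabels:
            app: web
        strategy:
          type: RollingUpdate
          rollingUpdate:
            maxSurge: 1          # at most one extra pod running the new version at a time
            maxUnavailable: 0    # old and new versions coexist without reducing capacity
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
              - name: web
                image: registry.example.com/web:2.0.0   # the new version being rolled out
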
張 旭

LXC vs Docker: Why Docker is Better | UpGuard - 0 views

  • LXC (LinuX Containers) is an OS-level virtualization technology that allows creation and running of multiple isolated Linux virtual environments (VE) on a single control host.
  • Docker, previously called dotCloud, was started as a side project and only open-sourced in 2013. It is really an extension of LXC’s capabilities.
  • run processes in isolation.
  • Docker is developed in the Go language and utilizes LXC, cgroups, and the Linux kernel itself. Since it’s based on LXC, a Docker container does not include a separate operating system; instead it relies on the operating system’s own functionality as provided by the underlying infrastructure.
  • Docker acts as a portable container engine, packaging the application and all its dependencies in a virtual container that can run on any Linux server.
  • a VE there is no preloaded emulation manager software as in a VM.
  • In a VE, the application (or OS) is spawned in a container and runs with no added overhead, except for a usually minuscule VE initialization process.
  • LXC will boast bare metal performance characteristics because it only packages the needed applications.
  • the OS is also just another application that can be packaged too.
  • a VM, which packages the entire OS and machine setup, including hard drive, virtual processors and network interfaces. The resulting bloated mass usually takes a long time to boot and consumes a lot of CPU and RAM.
  • don’t offer some other neat features of VM’s such as IaaS setups and live migration.
  • LXC as supercharged chroot on Linux. It allows you to not only isolate applications, but even the entire OS.
  • Libvirt, which allows the use of containers through the LXC driver by connecting to 'lxc:///'.
  • 'LXC' is not compatible with libvirt, but is more flexible, with more userspace tools.
  • Portable deployment across machines
  • Versioning: Docker includes git-like capabilities for tracking successive versions of a container
  • Component reuse: Docker allows building or stacking of already created packages.
  • Shared libraries: There is already a public registry (http://index.docker.io/ ) where thousands have already uploaded the useful containers they have created.
  • Docker has been taking the devops world by storm since its launch back in 2013.
  • LXC, while older, has not been as popular with developers as Docker has proven to be
  • LXC has a focus on sysadmins similar to that of solutions like the Solaris operating system with its Solaris Zones, Linux OpenVZ, and FreeBSD with its BSD Jails virtualization system
  • Although it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment called libcontainer.
  • Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine
  • LXC tooling sticks close to what system administrators running bare metal servers are used to
  • The LXC command line provides essential commands that cover routine management tasks, including the creation, launch, and deletion of LXC containers.
  • Docker containers aim to be even lighter weight in order to support the fast, highly scalable, deployment of applications with microservice architecture.
  • With backing from Canonical, LXC and LXD have an ecosystem tightly bound to the rest of the open source Linux community.
  • Docker Swarm
  • Docker Trusted Registry
  • Docker Compose
  • Docker Machine
  • Kubernetes facilitates the deployment of containers in your data center by representing a cluster of servers as a single system.
  • Swarm is Docker’s clustering, scheduling and orchestration tool for managing a cluster of Docker hosts. 
  • rkt is a security minded container engine that uses KVM for VM-based isolation and packs other enhanced security features. 
  • Apache Mesos can run different kinds of distributed jobs, including containers. 
  • Elastic Container Service is Amazon’s service for running and orchestrating containerized applications on AWS
  • LXC offers the advantages of a VE on Linux, mainly the ability to isolate your own private workloads from one another. It is a cheaper and faster solution to implement than a VM, but doing so requires a bit of extra learning and expertise.
  • Docker is a significant improvement of LXC’s capabilities.
張 旭

What is Kubernetes Ingress? | IBM - 0 views

  • expose an application to the outside of your Kubernetes cluster,
  • ClusterIP, NodePort, LoadBalancer, and Ingress.
  • A service is essentially a frontend for your application that automatically reroutes traffic to available pods in an evenly distributed way.
  • Services are an abstract way of exposing an application running on a set of pods as a network service.
  • Pods are immutable, which means that when they die, they are not resurrected. The Kubernetes cluster creates new pods in the same node or in a new node once a pod dies. 
  • A service provides a single point of access from outside the Kubernetes cluster and allows you to dynamically access a group of replica pods. 
  • For internal application access within a Kubernetes cluster, ClusterIP is the preferred method
  • To expose a service to external network requests, NodePort, LoadBalancer, and Ingress are possible options.
  • Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP.
  • content-based routing, support for multiple protocols, and authentication.
  • Ingress is made up of an Ingress API object and the Ingress Controller.
  • Kubernetes Ingress is an API object that describes the desired state for exposing services to the outside of the Kubernetes cluster.
  • An Ingress Controller reads and processes the Ingress Resource information and usually runs as pods within the Kubernetes cluster.  
  • If Kubernetes Ingress is the API object that provides routing rules to manage external access to services, Ingress Controller is the actual implementation of the Ingress API.
  • The Ingress Controller is usually a load balancer for routing external traffic to your Kubernetes cluster and is responsible for L4-L7 Network Services. 
  • Layer 7 (L7) refers to the application level of the OSI stack—external connections load-balanced across pods, based on requests.
  • if Kubernetes Ingress is a computer, then Ingress Controller is a programmer using the computer and taking action.
  • Ingress Rules are a set of rules for processing inbound HTTP traffic. An Ingress with no rules sends all traffic to a single default backend service. 
  • the Ingress Controller is an application that runs in a Kubernetes cluster and configures an HTTP load balancer according to Ingress Resources.
  • The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally.
  • ClusterIP is the preferred option for internal service access and uses an internal IP address to access the service
  • A NodePort exposes a service on a static port number on each node (VM) in the cluster.
  • a NodePort would be used to expose a single service (with no load-balancing requirements for multiple services).
  • Ingress enables you to consolidate the traffic-routing rules into a single resource and runs as part of a Kubernetes cluster.
  • An application is accessed from the Internet via Port 80 (HTTP) or Port 443 (HTTPS), and Ingress is an object that allows access to your Kubernetes services from outside the Kubernetes cluster. 
  • To implement Ingress, you need to configure an Ingress Controller in your cluster—it is responsible for processing Ingress Resource information and allowing traffic based on the Ingress Rules.
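
  A minimal Ingress resource matching the description above; the host, service name, and port are assumptions, and an Ingress Controller running in the cluster turns this object into load-balancer configuration:

      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: app-ingress
      spec:
        rules:
          - host: app.example.com          # content-based routing rule
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: app-service    # ClusterIP service fronting the application pods
                      port:
                        number: 80
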
crazylion lee

PouchDB, the JavaScript Database that Syncs! - 0 views

  •  
    " PouchDB is an open-source JavaScript database inspired by Apache CouchDB that is designed to run well within the browser. PouchDB was created to help web developers build applications that work as well offline as they do online. It enables applications to store data locally while offline, then synchronize it with CouchDB and compatible servers when the application is back online, keeping the user's data in sync no matter where they next login."
crazylion lee

journeyapps/zxing-android-embedded: Port of the ZXing Android application as an Android... - 0 views

  •  
    "Port of the ZXing Android application as an Android library project, for embedding in an Android application."
張 旭

How To Install and Use Docker: Getting Started | DigitalOcean - 0 views

  • docker as a project offers you the complete set of higher-level tools to carry everything that forms an application across systems and machines - virtual or physical - and brings along loads more of great benefits with it
  • docker daemon: used to manage docker (LXC) containers on the host it runs
  • docker CLI: used to command and communicate with the docker daemon
  • containers: directories containing everything-your-application
  • images: snapshots of containers or base OS (e.g. Ubuntu) images
  • Dockerfiles: scripts automating the building process of images
  • Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts).
  • Linux Containers can be defined as a combination of various kernel-level features (i.e. things that the Linux kernel can do) which allow management of applications (and resources they use) contained within their own environment
  • Each container is layered like an onion and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one.
  • Each docker container starts from a docker image which forms the base for other applications and layers to come.
  • Docker images constitute the base of docker containers from which everything starts to form
  • a solid, consistent and dependable base with everything that is needed to run the applications
  • As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes.
  • a Dockerfile for automated image building
  • Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image.
  • As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to “commit”.
  • Please remember to “commit” all your changes.
  • When you "run" any process using an image, in return, you will have a container.
  • When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via rm command.
  • To create a new container, you need to use a base image and specify a command to run.
  • you cannot change the command you run after having created a container (hence specifying one during "creation")
  • If you would like to save the progress and changes you made with a container, you can use “commit”
  • turns your container to an image
crazylion lee

AppImage | Linux apps that run anywhere - 1 views

  •  
    Builds executables like those on Windows or macOS. ""As a user, I want to download an application from the original author, and run it on my Linux desktop system just like I would do with a Windows or Mac application." "As an application author, I want to provide packages for Linux desktop systems, without the need to get it 'into' a distribution and without having to build for gazillions of different distributions.""
張 旭

Pods - Kubernetes - 0 views

  • Pods are the smallest deployable units of computing
  • A Pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers. (A container is a lightweight and portable executable image that contains software and all of its dependencies.)
  • A Pod’s contents are always co-located and co-scheduled, and run in a shared context.
  • A Pod models an application-specific “logical host”
  • application containers which are relatively tightly coupled
  • being executed on the same physical or virtual machine would mean being executed on the same logical host.
  • The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation
  • Containers within a Pod share an IP address and port space, and can find each other via localhost
  • Containers in different Pods have distinct IP addresses and can not communicate by IPC without special configuration. These containers usually communicate with each other via Pod IP addresses.
  • Applications within a Pod also have access to shared volumes (a volume is a directory containing data, accessible to the containers in a pod), which are defined as part of a Pod and are made available to be mounted into each application’s filesystem.
  • a Pod is modelled as a group of Docker containers with shared namespaces and shared filesystem volumes
    • 張 旭
       
      Similar to the group of containers declared together in a docker-compose file?
  • Pods are considered to be relatively ephemeral (rather than durable) entities.
  • Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion.
  • it can be replaced by an identical Pod
  • When something is said to have the same lifetime as a Pod, such as a volume, that means that it exists as long as that Pod (with that UID) exists.
  • uses a persistent volume for shared storage between the containers
  • Pods serve as unit of deployment, horizontal scaling, and replication
  • The applications in a Pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost
  • flat shared networking space
  • Containers within the Pod see the system hostname as being the same as the configured name for the Pod.
  • Volumes enable data to survive container restarts and to be shared among the applications within the Pod.
  • Individual Pods are not intended to run multiple instances of the same application
  • The individual containers may be versioned, rebuilt and redeployed independently.
  • Pods aren’t intended to be treated as durable entities.
  • Controllers like StatefulSet can also provide support to stateful Pods.
  • When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.
  • Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server.
  • grace period
  • Pod is removed from endpoints list for service, and are no longer considered part of the set of running Pods for replication controllers.
  • When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
  • By default, all deletes are graceful within 30 seconds.
  • You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.
  • Force deletion of a Pod is defined as deletion of a Pod from the cluster state and etcd immediately.
  • StatefulSet Pods
  • Processes within the container get almost the same privileges that are available to processes outside a container.
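
  A small Pod sketch with two tightly coupled containers that share a volume and the localhost network namespace, as described above; the images, paths, and names are illustrative:

      apiVersion: v1
      kind: Pod
      metadata:
        name: web-with-helper
      spec:
        volumes:
          - name: shared-data
            emptyDir: {}                   # exists as long as this Pod (this UID) exists
        containers:
          - name: web
            image: nginx:1.25
            ports:
              - containerPort: 80
            volumeMounts:
              - name: shared-data
                mountPath: /usr/share/nginx/html
          - name: content-refresher
            image: busybox:1.36
            command: ["sh", "-c", "while true; do date > /work/index.html; sleep 10; done"]
            volumeMounts:
              - name: shared-data
                mountPath: /work           # same volume, mounted at a different path
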
張 旭

Logging Architecture | Kubernetes - 0 views

  • Application logs can help you understand what is happening inside your application
  • container engines are designed to support logging.
  • The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
  • In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called cluster-level logging.
  • Cluster-level logging architectures require a separate backend to store, analyze, and query logs
  • Kubernetes does not provide a native storage solution for log data.
  • use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
  • A container engine handles and redirects any output generated to a containerized application's stdout and stderr streams
  • The Docker JSON logging driver treats each line as a separate message.
  • By default, if a container restarts, the kubelet keeps one terminated container with its logs.
  • An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node
  • You can also set up a container runtime to rotate an application's logs automatically.
  • The two kubelet flags container-log-max-size and container-log-max-files can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
  • The kubelet and container runtime do not run in containers.
  • On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory.
  • System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
  • Kubernetes does not provide a native solution for cluster-level logging
  • Use a node-level logging agent that runs on every node.
  • implement cluster-level logging by including a node-level logging agent on each node.
  • the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
  • the logging agent must run on every node, it is recommended to run the agent as a DaemonSet
  • Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
  • Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
  • Each sidecar container prints a log to its own stdout or stderr stream.
  • It is not recommended to write log entries with different formats to the same log stream
  • writing logs to a file and then streaming them to stdout can double disk usage.
  • If you have an application that writes to a single file, it's recommended to set /dev/stdout as the destination
  • it's recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.
  • Using a logging agent in a sidecar container can lead to significant resource consumption. Moreover, you won't be able to access those logs using kubectl logs because they are not controlled by the kubelet.
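
  A sketch of the streaming-sidecar pattern from the highlights above: the application writes to a file on a shared volume and a sidecar tails that file to its own stdout so the kubelet can collect it; images and paths are assumptions:

      apiVersion: v1
      kind: Pod
      metadata:
        name: app-with-log-sidecar
      spec:
        volumes:
          - name: varlog
            emptyDir: {}
        containers:
          - name: app
            image: busybox:1.36
            command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 1; done"]
            volumeMounts:
              - name: varlog
                mountPath: /var/log
          - name: log-streamer               # sidecar: re-publishes the file to its own stdout
            image: busybox:1.36
            command: ["sh", "-c", "tail -n+1 -F /var/log/app.log"]
            volumeMounts:
              - name: varlog
                mountPath: /var/log
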
張 旭

Cluster Networking - Kubernetes - 0 views

  • Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work
  • Highly-coupled container-to-container communications
  • Pod-to-Pod communications
  • this is the primary focus of this document
    • 張 旭
       
      Cluster networking is mainly concerned with Pod-to-Pod connectivity
  • Pod-to-Service communications
  • External-to-Service communications
  • Kubernetes is all about sharing machines between applications.
  • sharing machines requires ensuring that two applications do not try to use the same ports.
  • Dynamic port allocation brings a lot of complications to the system
  • Every Pod gets its own IP address
  • do not need to explicitly create links between Pods
  • almost never need to deal with mapping container ports to host ports.
  • Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
  • pods in the host network of a node can communicate with all pods on all nodes without NAT
  • If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.
  • containers within a Pod share their network namespaces - including their IP address
  • containers within a Pod can all reach each other’s ports on localhost
  • containers within a Pod must coordinate port usage
  • “IP-per-pod” model.
  • request ports on the Node itself which forward to your Pod (called host ports), but this is a very niche operation
  • The Pod itself is blind to the existence or non-existence of host ports.
  • AOS is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform.
  • Cisco Application Centric Infrastructure offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
  • AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems.
  • The AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
  • users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters.
  • Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
  • The CNI allocates AWS Elastic Networking Interfaces (ENIs) to each Kubernetes node and using the secondary IP range from each ENI for pods on the node.
  • Big Cloud Fabric is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments.
  • Cilium is L7/HTTP aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
  • CNI-Genie is a CNI plugin that enables Kubernetes to simultaneously have access to different implementations of the Kubernetes network model in runtime.
  • CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.
  • cni-ipvlan-vpc-k8s contains a set of CNI and IPAM plugins to provide a simple, host-local, low latency, high throughput, and compliant networking stack for Kubernetes within Amazon Virtual Private Cloud (VPC) environments by making use of Amazon Elastic Network Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel’s IPvlan driver in L2 mode.
  • to be straightforward to configure and deploy within a VPC
  • Contiv provides configurable networking
  • Contrail, based on Tungsten Fabric, is a truly open, multi-cloud network virtualization and policy management platform.
  • DANM is a networking solution for telco workloads running in a Kubernetes cluster.
  • Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
  • Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric.
  • sysctl net.ipv4.ip_forward=1
  • Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.
  • Knitter is a network solution which supports multiple networking in Kubernetes.
  • Kube-OVN is an OVN-based kubernetes network fabric for enterprises.
  • Kube-router provides a Linux LVS/IPVS-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
  • If you have a “dumb” L2 network, such as a simple switch in a “bare-metal” environment, you should be able to do something similar to the above GCE setup.
  • Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
  • NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks.
  • NSX-T Container Plug-in (NCP) provides integration between NSX-T and container orchestrators such as Kubernetes
  • Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
  • OpenVSwitch is a somewhat more mature but also complicated way to build an overlay network
  • OVN is an opensource network virtualization solution developed by the Open vSwitch community.
  • Project Calico is an open source container networking provider and network policy engine.
  • Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet
  • Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking.
  • Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka canal, or native GCE, AWS or Azure networking.
  • Romana is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network
  • Weave Net runs as a CNI plug-in or stand-alone. In either version, it doesn’t require any configuration or extra code to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
  • The network model is implemented by the container runtime on each node.
張 旭

Rails Environment Variables · RailsApps - 1 views

  • You can pass local configuration settings to an application using environment variables.
  • Operating systems (Linux, Mac OS X, Windows) provide mechanisms to set local environment variables, as does Heroku and other deployment platforms.
  • In general, you shouldn’t save email account credentials or private API keys to a shared git repository.
  • You could “hardcode” your Gmail username and password into the file but that would expose it to everyone who has access to your git repository.
  • It’s important to learn to use the Unix shell if you’re committed to improving your skills as a developer.
  • The gem reads a config/application.yml file and sets environment variables before anything else is configured in the Rails application.
  • make sure this file is listed in the .gitignore file so it isn’t checked into the git repository
  • Rails provides a config.before_configuration
  • YAML.load(File.open(env_file)).each do |key, value| ENV[key.to_s] = value end if File.exists?(env_file) (a runnable version of this snippet is sketched after this list)
  • Heroku is a popular choice for low cost, easily configured Rails application hosting.
  • heroku config:add
  • the dotenv Ruby gem
  • Foreman is a tool for starting and configuring multiple processes in a complex application
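A minimal sketch of the pattern described in the notes above, assuming a flat key/value config/application.yml (the keys shown are illustrative, and this is not the gem's actual implementation): the YAML file is read inside a config.before_configuration block so the variables exist before anything else is configured, and File.exist? replaces the deprecated File.exists? used in the clipped snippet.

    # config/application.rb, inside the Application class definition
    config.before_configuration do
      env_file = Rails.root.join('config', 'application.yml')
      if File.exist?(env_file)
        YAML.load_file(env_file).each do |key, value|
          ENV[key.to_s] ||= value.to_s  # don't overwrite variables already set by the platform
        end
      end
    end

    # config/application.yml -- listed in .gitignore so credentials never reach the repository
    GMAIL_USERNAME: example@gmail.com
    GMAIL_PASSWORD: change-me

On Heroku the same values would instead be set with heroku config:add, so no local file is needed on the server.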
張 旭

bbatsov/rails-style-guide: A community-driven Ruby on Rails 4 style guide - 0 views

  • custom initialization code in config/initializers. The code in initializers executes on application startup
  • Keep initialization code for each gem in a separate file with the same name as the gem
  • Mark additional assets for precompilation
  • ...90 more annotations...
  • config/environments/production.rb
  • Create an additional staging environment that closely resembles the production one
  • Keep any additional configuration in YAML files under the config/ directory
  • Rails::Application.config_for(:yaml_file)
  • Use nested routes to express better the relationship between ActiveRecord models
  • nest routes more than 1 level deep then use the shallow: true option
  • namespaced routes to group related actions
  • Don't use match to define any routes unless there is a need to map multiple request types among [:get, :post, :patch, :put, :delete] to a single action using the :via option.
  • Keep the controllers skinny
  • all the business logic should naturally reside in the model
  • Share no more than two instance variables between a controller and a view.
  • using a template
  • Prefer render plain: over render text
  • Prefer corresponding symbols to numeric HTTP status codes
  • without abbreviations
  • Keep your models for business logic and data-persistence only
  • Avoid altering ActiveRecord defaults (table names, primary key, etc)
  • Group macro-style methods (has_many, validates, etc) in the beginning of the class definition
  • Prefer has_many :through to has_and_belongs_to_many
  • self[:attribute]
  • self[:attribute] = value
  • validates
  • Keep custom validators under app/validators
  • Consider extracting custom validators to a shared gem
  • preferable to make a class method instead which serves the same purpose as the named scope
  • returns an ActiveRecord::Relation object
  • .update_attributes
  • Override the to_param method of the model
  • Use the friendly_id gem. It allows creation of human-readable URLs by using some descriptive attribute of the model instead of its id
  • find_each to iterate over a collection of AR objects
  • .find_each
  • .find_each
  • Looping through a collection of records from the database (using the all method, for example) is very inefficient since it will try to instantiate all the objects at once
  • always call before_destroy callbacks that perform validation with prepend: true
  • Define the dependent option to the has_many and has_one associations
  • always use the exception raising bang! method or handle the method return value.
  • When persisting AR objects
  • Avoid string interpolation in queries
  • param will be properly escaped
  • Consider using named placeholders instead of positional placeholders (see the query sketch after this list)
  • use of find over where when you need to retrieve a single record by id
  • use of find_by over where and find_by_attribute
  • use of where.not over SQL
  • use heredocs with squish
  • Keep the schema.rb (or structure.sql) under version control.
  • Use rake db:schema:load instead of rake db:migrate to initialize an empty database
  • Enforce default values in the migrations themselves instead of in the application layer
  • change_column_default
  • imposing data integrity from the Rails app is impossible
  • use the change method instead of up and down methods.
  • constructive migrations
  • If you have to use models in migrations, make sure you define them so that you don't end up with broken migrations in the future
  • Don't use non-reversible migration commands in the change method.
  • In this case, the block will be used by create_table on rollback
  • Never call the model layer directly from a view
  • Never make complex formatting in the views, export the formatting to a method in the view helper or the model.
  • When the labels of an ActiveRecord model need to be translated, use the activerecord scope
  • Separate the texts used in the views from translations of ActiveRecord attributes
  • Place the locale files for the models in a folder locales/models
  • the texts used in the views in folder locales/views
  • In config/application.rb: config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}')]
  • I18n.t
  • I18n.l
  • Use "lazy" lookup for the texts used in views.
  • Use the dot-separated keys in the controllers and models
  • Reserve app/assets for custom stylesheets, javascripts, or images
  • Third party code such as jQuery or bootstrap should be placed in vendor/assets
  • Provide both HTML and plain-text view templates
  • config.action_mailer.raise_delivery_errors = true
  • Use a local SMTP server like MailCatcher in the development environment
  • Provide default settings for the host name
  • The _url methods include the host name and the _path methods don't
  • _url
  • Format the from and to addresses properly
  • default from:
  • When sending HTML emails, all styles should be inline
  • Sending emails while generating the page response should be avoided. It causes delays in loading the page, and the request can time out if multiple emails are sent.
  • .start_with?
  • .end_with?
  • &.
  • Configure your time zone accordingly in application.rb (see the time-handling sketch after this list)
  • config.active_record.default_timezone = :local
  • it can be only :utc or :local
  • Don't use Time.parse
  • Time.zone.parse
  • Don't use Time.now
  • Time.zone.now
  • Put gems used only for development or testing in the appropriate group in the Gemfile
  • Add all OS X-specific gems to a darwin group in the Gemfile, and all Linux-specific gems to a linux group
  • Do not remove the Gemfile.lock from version control.
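Two short sketches of the guidelines above. First, query construction with a hypothetical Client model (the model and column names are illustrative): named placeholders instead of string interpolation, find / find_by for single records, where.not instead of hand-written SQL, and a heredoc combined with squish.

    # Named placeholders keep params properly escaped and readable.
    Client.where('created_at >= :start_date AND created_at <= :end_date',
                 start_date: params[:start_date], end_date: params[:end_date])

    Client.find(42)                       # single record by id, instead of where(id: 42).take
    Client.find_by(first_name: 'Ryan')    # instead of where(first_name: 'Ryan').first
    Client.where.not(status: 'archived')  # instead of where("status != 'archived'")

    # Multi-line SQL stays readable with a squished heredoc.
    Client.find_by_sql(<<~SQL.squish)
      SELECT last_name FROM clients
      WHERE id IN (1, 2, 3)
      ORDER BY last_name
    SQL

Second, the time-handling advice, assuming the application's zone is set in config/application.rb ('Taipei' is only an example value):

    # config/application.rb
    config.time_zone = 'Taipei'
    config.active_record.default_timezone = :local  # only :utc or :local are valid

    # Zone-aware methods instead of the bare Time class.
    Time.zone.now                           # instead of Time.now
    Time.zone.parse('2024-03-02 19:05:37')  # instead of Time.parse(...)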
crazylion lee

GitHub - SigNoz/signoz: SigNoz helps developers monitor their applications & troublesho... - 0 views

  • “Monitor your applications and troubleshoot problems in your deployed applications, an open-source alternative to DataDog, New Relic, etc.”
張 旭

Using NGINX Logging for Application Performance Monitoring - 0 views

  • One use that takes advantage of the flexibility of NGINX access logging is application performance monitoring (APM).
  • it’s simple to get detailed visibility into the performance of your applications by adding timing values to your code and passing them as response headers for inclusion in the NGINX access log.
  • $request_time – Full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body
  • ...3 more annotations...
  • $upstream_response_time – Time between establishing a connection to an upstream server and receiving the last byte of the response body
  • capture timings in the application itself and include them as response headers, which NGINX then captures in its access log (see the middleware sketch after this list)
  • $upstream_header_time – Time between establishing a connection to an upstream server and receiving the first byte of the response header
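As a sketch of the header-based approach described in the notes above (the middleware class and header name are illustrative, not taken from the article), a small Rack middleware can time the application's work and expose it as a response header; NGINX can then record that header in a custom log_format through its $sent_http_* variables, alongside $request_time and $upstream_response_time.

    # A minimal Rack middleware: measures total in-application time and reports it
    # in a response header, in seconds like $request_time.
    class ApplicationTiming
      def initialize(app)
        @app = app
      end

      def call(env)
        started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        status, headers, body = @app.call(env)
        elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
        # Lowercase header name keeps the Rack 3 spec happy; HTTP header names are case-insensitive.
        headers['x-application-time'] = format('%.3f', elapsed)
        [status, headers, body]
      end
    end

    # In a Rails app this would be registered in config/application.rb:
    #   config.middleware.use ApplicationTiming

In the NGINX access log the value would appear as $sent_http_x_application_time once that variable is added to the log_format directive.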