Group items tagged: building

張 旭

Introduction to CI/CD with GitLab | GitLab - 0 views

  • deploying code changes at every small iteration, reducing the chance of developing new code based on bugged or failed previous versions
  • based on automating the execution of scripts to minimize the chance of introducing errors while developing applications.
  • For every push to the repository, you can create a set of scripts to build and test your application automatically, decreasing the chance of introducing errors to your app.
  • checked automatically but requires human intervention to manually and strategically trigger the deployment of the changes.
  • instead of deploying your application manually, you set it to be deployed automatically.
  • .gitlab-ci.yml, located in the root path of your repository
  • all the scripts you add to the configuration file are the same as the commands you run in a terminal on your computer.
  • GitLab will detect it and run your scripts with the tool called GitLab Runner, which works similarly to your terminal.
張 旭

1. Introduction · swooletw/laravel-swoole Wiki - 0 views

  • every time you run a PHP script, PHP needs to initialize modules and launch the Zend Engine for your running environment, and your script has to be compiled to OpCodes before the Zend Engine can finally execute them.
  • the traditional PHP lifecycle wastes a lot of time building and destroying resources for every script execution.
  • have a built-in server on top of Swoole, and all the scripts can be kept in memory after the first load
張 旭

phusion/baseimage-docker - 1 views

    • 張 旭
       
      Plain Docker, when executing a command, makes the given COMMAND the PID 1 process by default; once that command finishes, the container exits, and no other daemons run. Baseimage solves this problem.
    • crazylion lee
       
      Great stuff!
  • docker exec
  • Through SSH
  • docker exec -t -i YOUR-CONTAINER-ID bash -l
  • Login to the container
  • Baseimage-docker only advocates running multiple OS processes inside a single container.
  • Password and challenge-response authentication are disabled by default. Only key authentication is allowed.
  • A tool for running a command as another user
  • The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.
  • All syslog messages are forwarded to "docker logs".
  • Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.
  • Baseimage-docker provides tools to encourage running processes as different users
  • sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't.
  • Splitting your logical service into multiple OS processes also makes sense from a security standpoint.
  • using environment variables to pass parameters to containers is very much the "Docker way"
  • Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems
  • the shell script must run the daemon without letting it daemonize/fork.
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • variables will also be passed to all child processes
  • Environment variables on Unix are inherited on a per-process basis
  • there is no good central place for defining environment variables for all applications and services
  • centrally defining environment variables
  • One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box.
  • a one-shot command in a new container
  • immediately exit after the command exits,
  • However the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.
  • add additional daemons (e.g. your own app) to the image by creating runit entries.
  • Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy
  • Ubuntu is not designed to be run inside Docker
  • According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them
  • Syslog-ng seems to be much more stable
  • cron daemon
  • Rotates and compresses logs
  • /sbin/setuser
  • A tool for installing apt packages that automatically cleans up after itself.
  • a single logical service inside a single container
  • A daemon is a program which runs in the background of its system, such as a web server.
  • The shell script must be called run, must be executable, and is to be placed in the directory /etc/service/<NAME>. runsv will switch to the directory and invoke ./run after your container starts.
  • If any script exits with a non-zero exit code, the booting will fail.
  • If your process is started with a shell script, make sure you exec the actual process, otherwise the shell will receive the signal and not your process.
  • any environment variables set with docker run --env or with the ENV command in the Dockerfile, will be picked up by my_init
  • not possible for a child process to change the environment variables of other processes
  • they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them will break multi-user containers.
  • my_init imports environment variables from the directory /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format.
  • modify the environment variables in my_init (and therefore the environment variables in all child processes that are spawned after that point in time), by altering the files in /etc/container_environment
  • my_init only activates changes in /etc/container_environment when running startup scripts
  • environment variables don't contain sensitive data, then you can also relax the permissions
  • Syslog messages are forwarded to the console
  • syslog-ng is started separately before the runit supervisor process, and shutdown after runit exits.
  • RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"
  • /sbin/my_init --skip-startup-files --quiet --
  • By default, no keys are installed, so nobody can login
  • provide a pregenerated, insecure key (PuTTY format)
  • RUN /usr/sbin/enable_insecure_key
  • docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key
  • RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub
  • The default baseimage-docker installs syslog-ng, cron and sshd services during the build process
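Putting several of the bullets above together, here is a hypothetical Dockerfile and runit entry for adding your own daemon to a baseimage-docker image; the service name, tag placeholder, and command are illustrative assumptions:

```dockerfile
FROM phusion/baseimage:<VERSION>   # substitute a real tag
CMD ["/sbin/my_init"]              # my_init runs as PID 1 and reaps orphans

# runit entry: an executable script named "run" in /etc/service/<NAME>
RUN mkdir /etc/service/myapp
COPY myapp-run.sh /etc/service/myapp/run
RUN chmod +x /etc/service/myapp/run
```

```sh
#!/bin/sh
# myapp-run.sh — run the daemon in the foreground; exec it so the
# daemon (not this shell) receives signals from runit.
exec /usr/local/bin/myapp --no-daemonize
```

The --no-daemonize flag stands in for however your particular daemon is told to stay in the foreground.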
張 旭

Template Designer Documentation - Jinja2 Documentation (2.10) - 0 views

  • A Jinja template doesn’t need to have a specific extension
  • A Jinja template is simply a text file
  • tags, which control the logic of the template
  • {% ... %} for Statements
  • {{ ... }} for Expressions to print to the template output
  • use a dot (.) to access attributes of a variable
  • the outer double-curly braces are not part of the variable, but the print statement.
  • If you access variables inside tags don’t put the braces around them.
  • If a variable or attribute does not exist, you will get back an undefined value.
  • the default behavior is to evaluate to an empty string if printed or iterated over, and to fail for every other operation.
  • this distinction matters if an object has an item and an attribute with the same name. Additionally, the attr() filter only looks up attributes.
  • Variables can be modified by filters. Filters are separated from the variable by a pipe symbol (|) and may have optional arguments in parentheses.
  • Multiple filters can be chained
  • Tests can be used to test a variable against a common expression.
  • to test a variable, add is plus the name of the test after the variable.
  • to find out if a variable is defined, you can do name is defined, which will then return true or false depending on whether name is defined in the current template context.
  • strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed
  • do not add whitespace between the tag and the minus sign
  • mark a block raw
  • Template inheritance allows you to build a base “skeleton” template that contains all the common elements of your site and defines blocks that child templates can override.
  • The {% extends %} tag is the key here. It tells the template engine that this template “extends” another template.
  • access templates in subdirectories with a slash
  • can’t define multiple {% block %} tags with the same name in the same template
  • use the special self variable and call the block with that name
  • self.title()
  • super()
  • put the name of the block after the end tag for better readability
  • if the block is replaced by a child template, a variable would appear that was not defined in the block or passed to the context.
  • setting the block to “scoped” by adding the scoped modifier to a block declaration
  • If you have a variable that may include any of the following chars (>, <, &, or ") you SHOULD escape it unless the variable contains well-formed and trusted HTML.
  • Jinja2 functions (macros, super, self.BLOCKNAME) always return template data that is marked as safe.
  • With the default syntax, control structures appear inside {% ... %} blocks.
  • the dictsort filter
  • loop.cycle
  • Unlike in Python, it’s not possible to break or continue in a loop
  • use loops recursively
  • add the recursive modifier to the loop definition and call the loop variable with the new iterable where you want to recurse.
  • The loop variable always refers to the closest (innermost) loop.
  • whether the value changed at all,
  • use it to test if a variable is defined, not empty and not false
  • Macros are comparable with functions in regular programming languages.
  • If a macro name starts with an underscore, it’s not exported and can’t be imported.
  • pass a macro to another macro
  • caller()
  • a single trailing newline is stripped if present
  • other whitespace (spaces, tabs, newlines etc.) is returned unchanged
  • a block tag works in “both” directions. That is, a block tag doesn’t just provide a placeholder to fill - it also defines the content that fills the placeholder in the parent.
  • Python dicts are not ordered
  • caller(user)
  • call(user)
  • This is a simple dialog rendered by using a macro and a call block.
  • Filter sections allow you to apply regular Jinja2 filters on a block of template data.
  • Assignments at top level (outside of blocks, macros or loops) are exported from the template like top level macros and can be imported by other templates.
  • using namespace objects which allow propagating of changes across scopes
  • use block assignments to capture the contents of a block into a variable name.
  • The extends tag can be used to extend one template from another.
  • Blocks are used for inheritance and act as both placeholders and replacements at the same time.
  • The include statement is useful to include a template and return the rendered contents of that file into the current namespace
  • Included templates have access to the variables of the active context by default.
  • putting often used code into macros
  • imports are cached and imported templates don’t have access to the current template variables, just the globals by default.
  • Macros and variables starting with one or more underscores are private and cannot be imported.
  • By default, included templates are passed the current context and imported templates are not.
  • imports are often used just as a module that holds macros.
  • Integers and floating point numbers are created by just writing the number down
  • Everything between two brackets is a list.
  • Tuples are like lists that cannot be modified (“immutable”).
  • A dict in Python is a structure that combines keys and values.
  • // Divide two numbers and return the truncated integer result
  • The special constants true, false, and none are indeed lowercase
  • all Jinja identifiers are lowercase
  • (expr) group an expression.
  • The is and in operators support negation using an infix notation
  • in Perform a sequence / mapping containment test.
  • | Applies a filter.
  • ~ Converts all operands into strings and concatenates them.
  • use inline if expressions.
  • always an attribute is returned and items are not looked up.
  • default(value, default_value=u'', boolean=False): If the value is undefined it will return the passed default value, otherwise the value of the variable
  • dictsort(value, case_sensitive=False, by='key', reverse=False): Sort a dict and yield (key, value) pairs.
  • format(value, *args, **kwargs): Apply Python string formatting to an object
  • groupby(value, attribute): Group a sequence of objects by a common attribute.
  • the value grouped by is stored in the grouper attribute, and the list contains all the objects that have this grouper in common.
  • indent(s, width=4, first=False, blank=False, indentfirst=None): Return a copy of the string with each line indented by 4 spaces. The first line and blank lines are not indented by default.
  • join(value, d=u'', attribute=None): Return a string which is the concatenation of the strings in the sequence.
  • map(): Applies a filter on a sequence of objects or looks up an attribute.
  • pprint(value, verbose=False): Pretty print a variable. Useful for debugging.
  • reject(): Filters a sequence of objects by applying a test to each object, and rejecting the objects for which the test succeeds.
  • replace(s, old, new, count=None): Return a copy of the value with all occurrences of a substring replaced with a new one.
  • round(value, precision=0, method='common'): Round the number to a given precision
  • even if rounded to 0 precision, a float is returned.
  • select(): Filters a sequence of objects by applying a test to each object, and only selecting the objects for which the test succeeds.
  • sort(value, reverse=False, case_sensitive=False, attribute=None): Sort an iterable. By default it sorts ascending; if you pass it true as the first argument it will reverse the sorting.
  • striptags(value): Strip SGML/XML tags and replace adjacent whitespace with one space.
  • tojson(value, indent=None): Dumps a structure to JSON so that it’s safe to use in <script> tags.
  • trim(value): Strip leading and trailing whitespace.
  • unique(value, case_sensitive=False, attribute=None): Returns a list of unique items from the given iterable
  • urlize(value, trim_url_limit=None, nofollow=False, target=None, rel=None): Converts URLs in plain text into clickable links.
  • defined(value): Return true if the variable is defined
  • in(value, seq): Check if value is in seq.
  • mapping(value): Return true if the object is a mapping (dict etc.).
  • number(value): Return true if the variable is a number.
  • sameas(value, other): Check if an object points to the same memory address as another object
  • undefined(value): Like defined() but the other way round.
  • A joiner is passed a string and will return that string every time it’s called, except the first time (in which case it returns an empty string).
  • namespace(...): Creates a new container that allows attribute assignment using the {% set %} tag
  • The with statement makes it possible to create a new inner scope. Variables set within this scope are not visible outside of the scope.
  • activate and deactivate the autoescaping from within the templates
  • With both trim_blocks and lstrip_blocks enabled, you can put block tags on their own lines, and the entire block line will be removed when rendered, preserving the whitespace of the contents
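A small sketch tying several of these notes together — inheritance with extends/block/super(), whitespace control with the minus sign, a filter chain, and an is defined test; file names and variables are illustrative:

```jinja
{# base.html #}
<title>{% block title %}My Site{% endblock title %}</title>
<body>{% block content %}{% endblock content %}</body>

{# child.html #}
{% extends "base.html" %}
{% block title %}{{ super() }} / Users{% endblock title %}
{% block content %}
  {%- if users is defined %}
    {%- for user in users %}
      {{ loop.index }}. {{ user.name | trim | title }}
    {%- endfor %}
  {%- endif %}
{% endblock content %}
```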
chiehting

Top 5 Kubernetes Best Practices From Sandeep Dinesh (Google) - DZone Cloud - 0 views

  • Best Practices for Kubernetes
  • #1: Building Containers
  • Don’t Trust Arbitrary Base Images!
  • There’s a lot wrong with this: you could be using the wrong version of code that has exploits, has a bug in it, or worse, it could have malware bundled in on purpose—you just don’t know.
  • Keep Base Images Small
  • Node.js, for example, includes an extra 600MB of libraries you don’t need.
  • Use the Builder Pattern
  • #2: Container Internals
  • Use a Non-Root User Inside the Container
  • Make the File System Read-Only
  • One Process per Container
  • Don’t Restart on Failure. Crash Cleanly Instead.
  • Log Everything to stdout and stderr
  • #3: Deployments
  • Use the “Record” Option for Easier Rollbacks
  • Use Weave Cloud
  • Use Sidecars for Proxies, Watchers, Etc.
  • Don’t Use Sidecars for Bootstrapping!
  • Don’t Use :latest or No Tag
  • Readiness and Liveness Probes are Your Friend
  • #4: Services
  • Don’t Use type: LoadBalancer
  • type: NodePort Can Be “Good Enough”
  • Use Static IPs, They Are Free!
  • Map External Services to Internal Ones
  • #5: Application Architecture
  • Use Helm Charts
  • All Downstream Dependencies Are Unreliable
  • Use Plenty of Descriptive Labels
  • Make Sure Your Microservices Aren’t Too Micro
  • Use Namespaces to Split Up Your Cluster
  • Role-Based Access Control
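Several of the container-building tips above (small pinned base image, builder pattern, non-root user, no :latest) fit into one hypothetical multi-stage Dockerfile; the image tags and paths are illustrative:

```dockerfile
# Build stage: full toolchain, discarded from the final image.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: small, pinned base image — never a bare :latest.
FROM gcr.io/distroless/static:nonroot
COPY --from=build /app /app
USER nonroot                      # don't run as root inside the container
ENTRYPOINT ["/app"]
```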
張 旭

The Twelve-Factor App - 0 views

  • Libraries installed through a packaging system can be installed system-wide (known as “site packages”) or scoped into the directory containing the app (known as “vendoring” or “bundling”).
  • A twelve-factor app never relies on implicit existence of system-wide packages.
  • declares all dependencies, completely and exactly, via a dependency declaration manifest.
  • The full and explicit dependency specification is applied uniformly to both production and development.
  • Bundler for Ruby offers the Gemfile manifest format for dependency declaration and bundle exec for dependency isolation.
  • Pip is used for declaration and Virtualenv for isolation.
  • No matter what the toolchain, dependency declaration and isolation must always be used together
  • requiring only the language runtime and dependency manager installed as prerequisites.
  • set up everything needed to run the app’s code with a deterministic build command.
  • If the app needs to shell out to a system tool, that tool should be vendored into the app.
  • do not rely on the implicit existence of any system tools
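For example, with Bundler the declaration-plus-isolation pair described above looks roughly like this sketch (gem names and versions are illustrative):

```ruby
# Gemfile — complete, exact dependency declaration
source "https://rubygems.org"

ruby "3.2.2"

gem "rails", "~> 7.1"
gem "pg", "~> 1.5"
```

Commands are then run through the isolation tool so only declared gems are visible: `bundle install` to set up, `bundle exec rails server` to run.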
張 旭

Reusing Config - CircleCI - 0 views

  • Change the version key to 2.1 in your .circleci/config.yml file and commit the changes to test your build.
  • Reusable commands are invoked with specific parameters as steps inside a job.
  • Commands can use other commands in the scope of execution
  • Executors define the environment in which the steps of a job will be run.
  • Executor declarations in config outside of jobs can be used by all jobs in the scope of that declaration, allowing you to reuse a single executor definition across multiple jobs.
  • It is also possible to allow an orb to define the executor used by all of its commands.
  • When invoking an executor in a job, any keys in the job itself will override those of the executor invoked.
  • Steps are used when you have a job or command that needs to mix predefined and user-defined steps.
  • Use the enum parameter type when you want to enforce that the value must be one from a specific set of string values.
  • Use an executor parameter type to allow the invoker of a job to decide what executor it will run on
  • invoke the same job more than once in the workflows stanza of config.yml, passing any necessary parameters as subkeys to the job.
  • If a job is declared inside an orb it can use commands in that orb or the global commands.
  • To use parameters in executors, define the parameters under the given executor.
  • Parameters are in-scope only within the job or command that defined them.
  • A single configuration may invoke a job multiple times.
  • Every job invocation may optionally accept two special arguments: pre-steps and post-steps.
  • Pre and post steps allow you to execute steps in a given job without modifying the job.
  • conditions are checked before a workflow is actually run
  • you cannot use a condition to check an environment variable.
  • Conditional steps may be located anywhere a regular step could and may only use parameter values as inputs.
  • A conditional step consists of a step with the key when or unless. Under this conditional key are the subkeys steps and condition
  • A condition is a single value that evaluates to true or false at the time the config is processed, so you cannot use environment variables as conditions
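A condensed, hypothetical 2.1 config showing a reusable executor, a parameterized command, and a job invoking both; the names, image tag, and steps are illustrative assumptions:

```yaml
version: 2.1

executors:
  node-exec:                      # reusable environment definition
    docker:
      - image: cimg/node:18.17

commands:
  install-and-test:               # reusable steps with a parameter
    parameters:
      test-command:
        type: string
        default: npm test
    steps:
      - checkout
      - run: npm ci
      - run: << parameters.test-command >>

jobs:
  test:
    executor: node-exec
    steps:
      - install-and-test:
          test-command: npm run test:ci

workflows:
  main:
    jobs:
      - test
```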
張 旭

Pre-Built CircleCI Docker Images - CircleCI - 0 views

  • typically extensions of official Docker images and include tools especially useful for CI/CD.
  • Convenience images are based on the most recently built versions of upstream images, so it is best practice to use the most specific image possible.
  • add -jessie or -stretch to the end of each of those containers to ensure you’re only using that version of the Debian base OS.
  • language images
  • service images
  • All images add a circleci user as a system user
  • A language image should be listed first under the docker key in your configuration, making it the primary container during execution.
  • For example, if you want to add browsers to the circleci/golang:1.9 image, use the circleci/golang:1.9-browsers image.
  • Service images are convenience images for services like databases
  • should be listed after language images so they become secondary service containers.
  • To speed up builds using RAM volume, add the -ram suffix to the end of a service image tag
  • All convenience images have been extended with additional tools.
  • all images include the following packages, installed via apt-get
  • Most CircleCI convenience images are Debian Jessie- or Stretch-based images; however, some extend Ubuntu-based images.
  • The following packages are installed via curl
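For instance, a job combining a language image (primary container) with a service image (secondary container) might look like this hypothetical snippet, using tags mentioned above:

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/golang:1.9-browsers   # language image, listed first
      - image: circleci/postgres:9.6-ram      # service image on a RAM volume
    steps:
      - checkout
      - run: go test ./...                    # illustrative build step
```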
crazylion lee

fastlane - iOS and Android Automation for Continuous Delivery - 1 views

  • "fastlane is the tool to release your iOS and Android app"
張 旭

elabs/pundit: Minimal authorization through OO design and pure Ruby classes - 0 views

  • The class implements some kind of query method
  • Pundit will call the current_user method to retrieve what to send into this argument
  • put these classes in app/policies
  • in leveraging regular Ruby classes and object oriented design patterns to build a simple, robust and scalable authorization system
  • map to the name of a particular controller action
  • In the generated ApplicationPolicy, the model object is called record.
  • record
  • authorize
  • authorize would have done something like this: raise "not authorized" unless PostPolicy.new(current_user, @post).update?
  • pass a second argument to authorize if the name of the permission you want to check doesn't match the action name.
  • you can chain it
  • authorize returns the object passed to it
  • the policy method in both the view and controller.
  • have some kind of view listing records which a particular user has access to
  • ActiveRecord::Relation
  • Instances of this class respond to the method resolve, which should return some kind of result which can be iterated over.
  • scope.where(published: true)
    • 張 旭
       
      I think the gist is: an admin can see all posts, while non-admins only see posts with published = true.
  • use this class from your controller via the policy_scope method:
  • PostPolicy::Scope.new(current_user, Post).resolve
  • policy_scope(@user.posts).each
  • This method will raise an exception if authorize has not yet been called.
  • verify_policy_scoped to your controller. This will raise an exception in the vein of verify_authorized. However, it tracks if policy_scope is used instead of authorize
  • need to conditionally bypass verification, you can use skip_authorization
  • skip_policy_scope
  • Having a mechanism that ensures authorization happens allows developers to thoroughly test authorization scenarios as units on the policy objects themselves.
  • Pundit doesn't do anything you couldn't have easily done yourself. It's a very small library, it just provides a few neat helpers.
  • all of the policy and scope classes are just plain Ruby classes
  • rails g pundit:policy post
  • define a filter that redirects unauthenticated users to the login page
  • fail more gracefully
  • raise Pundit::NotAuthorizedError, "must be logged in" unless user
  • having Rails handle them as a 403 error and serve a 403 error page.
  • config.action_dispatch.rescue_responses["Pundit::NotAuthorizedError"] = :forbidden
  • with I18n to generate error messages
  • retrieve a policy for a record outside the controller or view
  • define a method in your controller called pundit_user
  • Pundit strongly encourages you to model your application in such a way that the only context you need for authorization is a user object and a domain model that you want to check authorization for.
  • Pundit does not allow you to pass additional arguments to policies
  • authorization is dependent on IP address in addition to the authenticated user
  • create a special class which wraps up both user and IP and passes it to the policy.
  • set up a permitted_attributes method in your policy
  • policy(@post).permitted_attributes
  • permitted_attributes(@post)
  • Pundit provides a convenient helper method
  • permit different attributes based on the current action,
  • If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
  • If you don't have an instance for the first argument to authorize, then you can pass the class
  • restart the Rails server
  • Given there is a policy without a corresponding model / ruby class, you can retrieve it by passing a symbol
  • after_action :verify_authorized
  • It is not some kind of failsafe mechanism or authorization mechanism.
  • Pundit will work just fine without using verify_authorized and verify_policy_scoped
張 旭

Helm | - 0 views

  • Helm is a tool for managing Kubernetes packages called charts
  • Install and uninstall charts into an existing Kubernetes cluster
  • The chart is a bundle of information necessary to create an instance of a Kubernetes application.
  • The config contains configuration information that can be merged into a packaged chart to create a releasable object.
  • A release is a running instance of a chart, combined with a specific config.
  • The Helm Client is a command-line client for end users.
  • Interacting with the Tiller server
  • The Tiller Server is an in-cluster server that interacts with the Helm client, and interfaces with the Kubernetes API server.
  • Combining a chart and configuration to build a release
  • Installing charts into Kubernetes, and then tracking the subsequent release
  • the client is responsible for managing charts, and the server is responsible for managing releases.
  • The Helm client is written in the Go programming language, and uses the gRPC protocol suite to interact with the Tiller server.
  • The Tiller server is also written in Go. It provides a gRPC server to connect with the client, and it uses the Kubernetes client library to communicate with Kubernetes.
  • The Tiller server stores information in ConfigMaps located inside of Kubernetes.
  • Configuration files are, when possible, written in YAML.
張 旭

An App's Brief Journey from Source to Image · Cloud Native Buildpack Document... - 0 views

  • A buildpack’s job is to gather everything your app needs to build and run, and it often does this job quickly and quietly.
  • a platform sequentially tests groups of buildpacks against your app’s source code.
  • transforming your source code into a runnable app image.
  • Detection criteria are specific to each buildpack – for instance, an NPM buildpack might look for a package.json, and a Go buildpack might look for Go source files.
  • A builder is essentially an image containing buildpacks.
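With the pack CLI, the detect-then-build flow might look like this sketch; the builder image and app name are assumptions:

```sh
# Each buildpack in the builder tests ("detects") the source, then the
# selected group transforms it into a runnable app image.
pack build my-app --builder cnbs/sample-builder:bionic --path .
docker run --rm -p 8080:8080 my-app
```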
張 旭

Running Terraform in Automation | Terraform - HashiCorp Learn - 0 views

  • In default usage, terraform init downloads and installs the plugins for any providers used in the configuration automatically, placing them in a subdirectory of the .terraform directory.
  • allows each configuration to potentially use different versions of plugins.
  • In automation environments, it can be desirable to disable this behavior and instead provide a fixed set of plugins already installed on the system where Terraform is running. This then avoids the overhead of re-downloading the plugins on each execution
  • the desire for an interactive approval step between plan and apply.
  • terraform init -input=false to initialize the working directory.
  • terraform plan -out=tfplan -input=false to create a plan and save it to the local file tfplan.
  • terraform apply -input=false tfplan to apply the plan stored in the file tfplan.
  • when the environment variable TF_IN_AUTOMATION is set to any non-empty value, Terraform makes some minor adjustments to its output to de-emphasize specific commands to run.
  • it can be difficult or impossible to ensure that the plan and apply subcommands are run on the same machine, in the same directory, with all of the same files present.
  • to allow only one plan to be outstanding at a time.
  • forcing plans to be approved (or dismissed) in sequence
  • -auto-approve
  • The -auto-approve option tells Terraform not to require interactive approval of the plan before applying it.
  • obtain the archive created in the previous step and extract it at the same absolute path. This re-creates everything that was present after plan, avoiding strange issues where local files were created during the plan step.
  • a "build artifact"
張 旭

plataformatec/devise: Flexible authentication solution for Rails with Warden. - 0 views

  • we advise you to start a simple authentication system from scratch
  • If you are building your first Rails application, we recommend you do not use Devise. Devise requires a good understanding of the Rails Framework
  • The generator will install an initializer which describes ALL of Devise's configuration options
  • Replace MODEL with the class name used for the application’s users (it’s frequently User but could also be Admin)
  • If you add an option, be sure to inspect the migration file (created by the generator if your ORM supports them) and uncomment the appropriate section
  • set up the default URL options for the Devise mailer in each environment
  • should restart your application after changing Devise's configuration options
  • set up a controller with user authentication, just add this before_action
  • when using a :user resource, the user_root_path will be used if it exists; otherwise, the default root_path will be used
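A compressed sketch of the setup steps described above (MODEL is your user class, frequently User; the host/port values are illustrative):

```sh
rails generate devise:install   # installs the initializer with ALL options
rails generate devise User      # replace "User" with your MODEL
rails db:migrate                # inspect the migration first if you added options
```

```ruby
# config/environments/development.rb — mailer URL options per environment
config.action_mailer.default_url_options = { host: "localhost", port: 3000 }

# Any controller that requires authentication:
class PostsController < ApplicationController
  before_action :authenticate_user!   # for a :user resource
end
```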
張 旭

The Exhaustive Guide to Rails Time Zones - Alexander Danilenko - 0 views

  • you can use "wrong" methods in development and fairly often get valid results. But then you'll face with unexpected problems on production.
  • Ruby provides two classes to manage time: Time and DateTime
  • that's in Ruby! When it comes to Rails things get a bit more complicated
  • Rails gives you the ability to configure the application time zone.
  • we have 3 (!) different time zones in our application: system time, application time and database time.
  • DateTime.now and Time.now both give you the time in system time zone
  • Ruby standard library methods that know nothing about Rails time zone configuration
  • It's not Rails itself that is responsible for adding time zone support, but ActiveSupport
  • switch from Time.now to Time.zone.now
  • Time.zone.now
  • no need to use it explicitly, as there is a shorter and clearer option.
  • Time.zone.today
  • Time.zone.local
  • Time.zone.at
  • Time.zone.parse
  • DateTime.strptime(str, "%Y-%m-%d %H:%M %Z").in_time_zone
  • always keep in mind that when you build a time or date object you should respect the current time zone.
  • use Time.zone instead of Time, Date or DateTime
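Side by side, the zone-unaware and zone-aware calls mentioned above (assuming config.time_zone is set in the app config; the timestamps are illustrative):

```ruby
Time.now          # system time zone — plain Ruby, ignores Rails config
DateTime.now      # system time zone as well

Time.zone.now     # application time zone, via ActiveSupport
Time.zone.today
Time.zone.local(2019, 3, 1, 9, 30)
Time.zone.at(1551430800)
Time.zone.parse("2019-03-01 09:30")
DateTime.strptime("2019-03-01 09:30 UTC", "%Y-%m-%d %H:%M %Z").in_time_zone
```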
張 旭

Custom Resources | Kubernetes - 0 views

  • Custom resources are extensions of the Kubernetes API
  • A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind
  • Custom resources can appear and disappear in a running cluster through dynamic registration
  • Once a custom resource is installed, users can create and access its objects using kubectl
  • When you combine a custom resource with a custom controller, custom resources provide a true declarative API.
  • A declarative API allows you to declare or specify the desired state of your resource and tries to keep the current state of Kubernetes objects in sync with the desired state.
  • Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources.
  • The Operator pattern combines custom resources and custom controllers.
  • the API represents a desired state, not an exact state.
  • define configuration of applications or infrastructure.
  • The main operations on the objects are CRUD-y (creating, reading, updating and deleting).
  • The client says "do this", and then gets an operation ID back, and has to check a separate Operation object to determine completion of the request.
  • The natural operations on the objects are not CRUD-y.
  • High bandwidth access (10s of requests per second sustained) needed.
  • Use a ConfigMap if any of the following apply
  • You want to put the entire config file into one key of a configMap.
  • You want to perform rolling updates via Deployment, etc., when the file is updated.
  • Use a secret for sensitive data, which is similar to a configMap but more secure.
  • You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa.
  • You want the object to be an abstraction over a collection of controlled resources, or a summarization of other resources.
  • CRDs are simple and can be created without any programming.
  • Aggregated APIs are subordinate API servers that sit behind the primary API server
  • CRDs allow users to create new types of resources without adding another API server
  • Defining a CRD object creates a new custom resource with a name and schema that you specify.
  • The name of a CRD object must be a valid DNS subdomain name
  • each resource in the Kubernetes API requires code that handles REST requests and manages persistent storage of objects.
  • The main API server delegates requests to you for the custom resources that you handle, making them available to all of its clients.
  • The new endpoints support CRUD basic operations via HTTP and kubectl
  • Custom resources consume storage space in the same way that ConfigMaps do.
  • Aggregated API servers may use the same storage as the main API server
  • CRDs always use the same authentication, authorization, and audit logging as the built-in resources of your API server.
  • most RBAC roles will not grant access to the new resources (except the cluster-admin role or any role created with wildcard rules).
  • CRDs and Aggregated APIs often come bundled with new role definitions for the types they add.
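A minimal sketch of a CRD manifest; the group, names, and schema fields are illustrative. Once applied, the API server serves the new endpoint and kubectl can create, read, update, and delete objects of kind CronTab:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com   # must be a valid DNS subdomain: <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec: { type: string }
                replicas: { type: integer }
```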
張 旭

LXC vs Docker: Why Docker is Better | UpGuard - 0 views

  • LXC (LinuX Containers) is an OS-level virtualization technology that allows creation and running of multiple isolated Linux virtual environments (VE) on a single control host.
  • Docker, previously called dotCloud, was started as a side project and only open-sourced in 2013. It is really an extension of LXC’s capabilities.
  • run processes in isolation.
  • Docker is developed in the Go language and utilizes LXC, cgroups, and the Linux kernel itself. Since it’s based on LXC, a Docker container does not include a separate operating system; instead it relies on the operating system’s own functionality as provided by the underlying infrastructure.
  • Docker acts as a portable container engine, packaging the application and all its dependencies in a virtual container that can run on any Linux server.
  • In a VE there is no preloaded emulation manager software as in a VM.
  • In a VE, the application (or OS) is spawned in a container and runs with no added overhead, except for a usually minuscule VE initialization process.
  • LXC will boast bare metal performance characteristics because it only packages the needed applications.
  • the OS is also just another application that can be packaged too.
  • a VM, which packages the entire OS and machine setup, including hard drive, virtual processors and network interfaces. The resulting bloated mass usually takes a long time to boot and consumes a lot of CPU and RAM.
  • VEs don’t offer some other neat features of VMs, such as IaaS setups and live migration.
  • Think of LXC as a supercharged chroot on Linux. It allows you to not only isolate applications, but even the entire OS.
  • Libvirt, which allows the use of containers through the LXC driver by connecting to 'lxc:///'.
  • The userspace tool 'LXC' is not compatible with libvirt, but is more flexible with more userspace tools.
  • Portable deployment across machines
  • Versioning: Docker includes git-like capabilities for tracking successive versions of a container
  • Component reuse: Docker allows building or stacking of already created packages.
  • Shared libraries: There is already a public registry (http://index.docker.io/ ) where thousands have already uploaded the useful containers they have created.
  • Docker has been taking the devops world by storm since its launch back in 2013.
  • LXC, while older, has not been as popular with developers as Docker has proven to be
  • LXC has a focus on sysadmins similar to solutions like the Solaris operating system with its Solaris Zones, Linux OpenVZ, and FreeBSD with its BSD Jails virtualization system
  • Although it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment called libcontainer.
  • Unlike LXC, which launches an operating system init for each container, Docker provides one OS environment, supplied by the Docker Engine
  • LXC tooling sticks close to what system administrators running bare metal servers are used to
  • The LXC command line provides essential commands that cover routine management tasks, including the creation, launch, and deletion of LXC containers.
  • Docker containers aim to be even lighter weight in order to support the fast, highly scalable, deployment of applications with microservice architecture.
  • With backing from Canonical, LXC and LXD have an ecosystem tightly bound to the rest of the open source Linux community.
  • Docker Swarm
  • Docker Trusted Registry
  • Docker Compose
  • Docker Machine
  • Kubernetes facilitates the deployment of containers in your data center by representing a cluster of servers as a single system.
  • Swarm is Docker’s clustering, scheduling and orchestration tool for managing a cluster of Docker hosts. 
  • rkt is a security minded container engine that uses KVM for VM-based isolation and packs other enhanced security features. 
  • Apache Mesos can run different kinds of distributed jobs, including containers. 
  • Elastic Container Service is Amazon’s service for running and orchestrating containerized applications on AWS
  • LXC offers the advantages of a VE on Linux, mainly the ability to isolate your own private workloads from one another. It is a cheaper and faster solution to implement than a VM, but doing so requires a bit of extra learning and expertise.
  • Docker is a significant improvement of LXC’s capabilities.
張 旭

The differences between Docker, containerd, CRI-O and runc - Tutorial Works - 0 views

  • Docker isn’t the only container contender on the block.
  • Container Runtime Interface (CRI), which defines an API between Kubernetes and the container runtime
  • Open Container Initiative (OCI) which publishes specifications for images and containers.
  • for a lot of people, the name “Docker” itself is synonymous with the word “container”.
  • Docker created a very ergonomic (nice-to-use) tool for working with containers – also called docker.
  • docker is designed to be installed on a workstation or server and comes with a bunch of tools to make it easy to build and run containers as a developer, or DevOps person.
  • containerd: This is a daemon process that manages and runs containers.
  • runc: This is the low-level container runtime (the thing that actually creates and runs containers).
  • libcontainer, a native Go-based implementation for creating containers.
  • Kubernetes includes a component called dockershim, which allows it to support Docker.
  • Kubernetes prefers to run containers through any container runtime which supports its Container Runtime Interface (CRI).
  • Kubernetes will remove support for Docker directly, and prefer to use only container runtimes that implement its Container Runtime Interface.
  • Both containerd and CRI-O can run Docker-formatted (actually OCI-formatted) images, they just do it without having to use the docker command or the Docker daemon.
  • Docker images, are actually images packaged in the Open Container Initiative (OCI) format.
  • CRI is the API that Kubernetes uses to control the different runtimes that create and manage containers.
  • CRI makes it easier for Kubernetes to use different container runtimes
  • containerd is a high-level container runtime that came from Docker, and implements the CRI spec
  • containerd was separated out of the Docker project, to make Docker more modular.
  • CRI-O is another high-level container runtime which implements the Container Runtime Interface (CRI).
  • The idea behind the OCI is that you can choose between different runtimes which conform to the spec.
  • runc is an OCI-compatible container runtime.
  • A reference implementation is a piece of software that has implemented all the requirements of a specification or standard.
  • runc provides all of the low-level functionality for containers, interacting with existing low-level Linux features, like namespaces and control groups.
張 旭

Kubernetes Deployments: The Ultimate Guide - Semaphore - 1 views

  • Continuous integration gives you confidence in your code. To extend that confidence to the release process, your deployment operations need to come with a safety belt.
  • these Kubernetes objects ensure that you can progressively deploy, roll back and scale your applications without downtime.
  • A pod is just a group of containers (it can be a group of one container) that run on the same machine, and share a few things together.
  • the containers within a pod can communicate with each other over localhost
  • From a network perspective, all the processes in these containers are local.
  • we can never create a standalone container: the closest we can do is create a pod, with a single container in it.
  • Kubernetes is a declarative system (as opposed to imperative systems).
  • All we can do, is describe what we want to have, and wait for Kubernetes to take action to reconcile what we have, with what we want to have.
  • In other words, we can say, “I would like a 40-feet long blue container with yellow doors“, and Kubernetes will find such a container for us. If it doesn’t exist, it will build it; if there is already one but it’s green with red doors, it will paint it for us; if there is already a container of the right size and color, Kubernetes will do nothing, since what we have already matches what we want.
  • The specification of a replica set looks very much like the specification of a pod, except that it carries a number, indicating how many replicas
  • What happens if we change that definition? Suddenly, there are zero pods matching the new specification.
  • the creation of new pods could happen in a more gradual manner.
  • the specification for a deployment looks very much like the one for a replica set: it features a pod specification, and a number of replicas.
  • Deployments, however, don’t create or delete pods directly.
  • When we update a deployment and adjust the number of replicas, it passes that update down to the replica set.
  • When we update the pod specification, the deployment creates a new replica set with the updated pod specification. That replica set has an initial size of zero. Then, the size of that replica set is progressively increased, while decreasing the size of the other replica set.
  • we are going to fade in (turn up the volume) on the new replica set, while we fade out (turn down the volume) on the old one.
  • During the whole process, requests are sent to pods of both the old and new replica sets, without any downtime for our users.
  • A readiness probe is a test that we add to a container specification.
  • Kubernetes supports three ways of implementing readiness probes: running a command inside a container; making an HTTP(S) request against a container; or opening a TCP socket against a container.
  • When we roll out a new version, Kubernetes will wait for the new pod to mark itself as “ready” before moving on to the next one.
  • If there is no readiness probe, then the container is considered as ready, as long as it could be started.
  • MaxSurge indicates how many extra pods we are willing to run during a rolling update, while MaxUnavailable indicates how many pods we can lose during the rolling update.
  • Setting MaxUnavailable to 0 means, “do not shutdown any old pod before a new one is up and ready to serve traffic“.
  • Setting MaxSurge to 100% means, “immediately start all the new pods“, implying that we have enough spare capacity on our cluster, and that we want to go as fast as possible.
  • kubectl rollout undo deployment web
  • the replica set doesn’t look at the pods’ specifications, but only at their labels.
  • A replica set contains a selector, which is a logical expression that “selects” (just like a SELECT query in SQL) a number of pods.
  • it is absolutely possible to manually create pods with these labels, but running a different image (or with different settings), and fool our replica set.
  • Selectors are also used by services, which act as the load balancers for Kubernetes traffic, internal and external.
  • internal IP address (denoted by the name ClusterIP)
  • during a rollout, the deployment doesn’t reconfigure or inform the load balancer that pods are started and stopped. It happens automatically through the selector of the service associated to the load balancer.
  • a pod is added as a valid endpoint for a service only if all its containers pass their readiness check. In other words, a pod starts receiving traffic only once it’s actually ready for it.
  • In blue/green deployment, we want to instantly switch over all the traffic from the old version to the new, instead of doing it progressively
  • We can achieve blue/green deployment by creating multiple deployments (in the Kubernetes sense), and then switching from one to another by changing the selector of our service
  • kubectl label pods -l app=blue,version=v1.5 status=enabled
  • kubectl label pods -l app=blue,version=v1.4 status-