
Group items tagged: programming


張 旭

What's the difference between Prometheus and Zabbix? - Stack Overflow - 0 views

  • Zabbix has a core written in C and a web UI based on PHP.
  • Zabbix stores data in an RDBMS (MySQL, PostgreSQL, Oracle, SQLite) of the user's choice.
  • Prometheus uses its own database embedded into the backend process.
  • Zabbix by default uses a "pull" model, where the server connects to agents on each monitored machine; the agents periodically gather the info and send it to the server.
  • Prometheus also prefers a "pull" model, where the server gathers info from the client machines.
  • Prometheus requires an application to be instrumented with a Prometheus client library (available in different programming languages) to prepare metrics.
  • expose metrics for Prometheus (similar to "agents" for Zabbix)
  • Zabbix uses its own TCP-based communication protocol between agents and a server.
  • Prometheus uses HTTP with protocol buffers (+ text format for ease of use with curl).
  • Prometheus offers a basic tool for exploring the gathered data and visualizing it in simple graphs on its native server, and also offers a minimal dashboard builder, PromDash. But Prometheus is designed to be supported by modern visualization tools like Grafana.
  • Prometheus offers a solution for alerting that is separated from its core into the Alertmanager application.
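A quick way to see the pull model and the text exposition format from a shell (a minimal sketch; it assumes an exporter such as node_exporter is already listening on its default port 9100, and the output lines shown are illustrative):

    # Ask the exporter for its metrics exactly as a Prometheus server would.
    $ curl -s http://localhost:9100/metrics | head -n 5
    # HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
    # TYPE node_cpu_seconds_total counter
    node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67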
張 旭

How To Use Bash's Job Control to Manage Foreground and Background Processes | DigitalOcean - 0 views

  • Most processes that you start on a Linux machine will run in the foreground. The command will begin execution, blocking use of the shell for the duration of the process.
  • By default, processes are started in the foreground. Until the program exits or changes state, you will not be able to interact with the shell.
  • stop the process by sending it a signal
  • Linux terminals are usually configured to send the "SIGINT" signal (typically signal number 2) to the current foreground process when the CTRL-C key combination is pressed.
  • Another signal that we can send is the "SIGTSTP" signal (typically signal number 20).
  • A background process is associated with the specific terminal that started it, but does not block access to the shell
  • start a background process by appending an ampersand character ("&") to the end of your commands.
  • type commands at the same time.
  • The [1] represents the command's "job spec" or job number. We can reference this with other job and process control commands, like kill, fg, and bg by preceding the job number with a percentage sign. In this case, we'd reference this job as %1.
  • Once the process is stopped, we can use the bg command to start it again in the background
  • By default, the bg command operates on the most recently stopped process.
  • Whether a process is in the background or in the foreground, it is rather tightly tied with the terminal instance that started it
  • When a terminal closes, it typically sends a SIGHUP signal to all of the processes (foreground, background, or stopped) that are tied to the terminal.
  • a terminal multiplexer
  • start it using the nohup command
  • appending output to ‘nohup.out’
  • pgrep -a
  • The disown command, in its default configuration, removes a job from the jobs queue of a terminal.
  • You can pass the -h flag to the disown command instead in order to mark the process to ignore SIGHUP signals, but otherwise continue on as a regular job.
  • The huponexit shell option controls whether bash will send its child processes the SIGHUP signal when it exits.
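Put together, the job-control commands above form a short interactive session; a sketch (long_task.sh is a hypothetical script, and the job numbers depend on what else is running in your shell):

    $ sleep 300 &            # start a background job; bash prints its job spec and PID, e.g. [1] 12345
    $ jobs                   # list this shell's jobs
    $ fg %1                  # bring job 1 back to the foreground
    # press CTRL-Z here to send SIGTSTP and stop the foreground process
    $ bg %1                  # resume the stopped job in the background
    $ kill %1                # terminate job 1 by job spec
    $ nohup long_task.sh &   # ignore SIGHUP; output is appended to nohup.out
    $ disown -h %1           # keep the job listed but mark it to ignore SIGHUP (use the spec 'jobs' reports)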
張 旭

How to Implement Categories in Django | DjangoPy - 0 views

  • Categories may have subcategories, and those subcategories may in turn have their own subcategories, and so on.
  • You can create categories with the Django Admin Panel and then associate them with content like an article or post.
張 旭

Outbound connections in Azure | Microsoft Docs - 0 views

  • When an instance initiates an outbound flow to a destination in the public IP address space, Azure dynamically maps the private IP address to a public IP address.
  • After this mapping is created, return traffic for this outbound originated flow can also reach the private IP address where the flow originated.
  • Azure uses source network address translation (SNAT) to perform this function
  • When multiple private IP addresses are masquerading behind a single public IP address, Azure uses port address translation (PAT) to masquerade private IP addresses.
  • If you want outbound connectivity when working with Standard SKUs, you must explicitly define it, either with Standard Public IP addresses or a Standard public Load Balancer.
  • the VM is part of a public Load Balancer backend pool. The VM does not have a public IP address assigned to it.
  • The Load Balancer resource must be configured with a load balancer rule to create a link between the public IP frontend with the backend pool.
  • VM has an Instance Level Public IP (ILPIP) assigned to it. As far as outbound connections are concerned, it doesn't matter whether the VM is load balanced or not.
  • When an ILPIP is used, the VM uses the ILPIP for all outbound flows.
  • A public IP assigned to a VM is a 1:1 relationship (rather than 1:many) and is implemented as a stateless 1:1 NAT.
  • Port masquerading (PAT) is not used, and the VM has all ephemeral ports available for use.
  • When the load-balanced VM creates an outbound flow, Azure translates the private source IP address of the outbound flow to the public IP address of the public Load Balancer frontend.
  • Azure uses SNAT to perform this function. Azure also uses PAT to masquerade multiple private IP addresses behind a public IP address.
  • Ephemeral ports of the load balancer's public IP address frontend are used to distinguish individual flows originated by the VM.
  • When multiple public IP addresses are associated with Load Balancer Basic, any of these public IP addresses is a candidate for outbound flows, and one is selected at random.
  • the VM is not part of a public Load Balancer pool (and not part of an internal Standard Load Balancer pool) and does not have an ILPIP address assigned to it.
  • The public IP address used for this outbound flow is not configurable and does not count against the subscription's public IP resource limit.
  • Do not use this scenario for whitelisting IP addresses.
  • This public IP address does not belong to you and cannot be reserved.
  • Standard Load Balancer uses all candidates for outbound flows at the same time when multiple (public) IP frontends are present.
  • Load Balancer Basic chooses a single frontend to be used for outbound flows when multiple (public) IP frontends are candidates for outbound flows.
  • the disableOutboundSnat option defaults to false and signifies that this rule programs outbound SNAT for the associated VMs in the backend pool of the load balancing rule.
  • Port masquerading SNAT (PAT)
  • Ephemeral port preallocation for port masquerading SNAT (PAT)
  • determine the public source IP address of an outbound connection.
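To check which public source IP address a VM's outbound flows actually use (the ILPIP, the Load Balancer frontend, or the default SNAT address), a simple sketch from inside the VM is to ask an external echo service; api.ipify.org and OpenDNS are used here purely as examples:

    # Run inside the Azure VM: prints the public IP the outside world sees.
    $ curl -s https://api.ipify.org
    $ dig +short myip.opendns.com @resolver1.opendns.com   # DNS-based alternative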
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • Git allows a wide variety of branching strategies and workflows.
  • not integrated with issue tracking systems
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • you can deploy to production every time you merge a feature branch.
  • deploy a new version by merging master into the production branch.
  • you can have your deployment script create a tag on each deployment.
  • to have an environment that is automatically updated to the master branch
  • Commits only flow downstream, which ensures that everything is tested in all environments.
  • first merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • “merge request” since the final action is to merge the feature branch.
  • “pull request” since the first manual action is to pull the feature branch
  • it is common to protect the long-lived branches
  • After you merge a feature branch, you should remove it from the source control software
  • When you are ready to code, create a branch for the issue from the master branch. This branch is the place for any work related to this change.
  • A merge request is an online place to discuss the change and review the code.
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with “[WIP]” or “WIP:” to prevent it from being merged before it’s ready.
  • To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.” GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • With Git, you can use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • always use the “no fast-forward” (--no-ff) strategy when you merge manually.
  • you should try to avoid merge commits in feature branches
  • people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch. Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • you should never rebase commits you have pushed to a remote server
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • not frequently merge master into the feature branch.
  • utilizing new code,
  • resolving merge conflicts
  • updating long-running branches.
  • just cherry-picking a commit.
  • If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
  • keep your feature branches short-lived.
  • split your features into smaller units of work
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • Splitting up work into individual commits provides context for developers looking at your code later.
  • push your feature branch frequently, even when it is not yet ready for review.
  • Commit often and push frequently
  • A commit message should reflect your intention, not just the contents of the commit.
  • Testing before merging
  • When using GitLab flow, developers create their branches from this master branch, so it is essential that it never breaks. Therefore, each merge request must be tested before it is accepted.
  • When creating a feature branch, always branch from an up-to-date master
  •  
    "Git allows a wide variety of branching strategies and workflows."
張 旭

Helm | - 0 views

  • Helm is a tool for managing Kubernetes packages called charts
  • Install and uninstall charts into an existing Kubernetes cluster
  • The chart is a bundle of information necessary to create an instance of a Kubernetes application.
  • The config contains configuration information that can be merged into a packaged chart to create a releasable object.
  • A release is a running instance of a chart, combined with a specific config.
  • The Helm Client is a command-line client for end users.
  • Interacting with the Tiller server
  • The Tiller Server is an in-cluster server that interacts with the Helm client, and interfaces with the Kubernetes API server.
  • Combining a chart and configuration to build a release
  • Installing charts into Kubernetes, and then tracking the subsequent release
  • the client is responsible for managing charts, and the server is responsible for managing releases.
  • The Helm client is written in the Go programming language, and uses the gRPC protocol suite to interact with the Tiller server.
  • The Tiller server is also written in Go. It provides a gRPC server to connect with the client, and it uses the Kubernetes client library to communicate with Kubernetes.
  • The Tiller server stores information in ConfigMaps located inside of Kubernetes.
  • Configuration files are, when possible, written in YAML.
  •  
    "Helm is a tool for managing Kubernetes packages called charts"
張 旭

Helm | - 0 views

  • Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
  • kubectl cluster-info
  • Role-Based Access Control (RBAC) enabled
  • initialize the local CLI
  • install Tiller into your Kubernetes cluster
  • helm install
  • helm init --upgrade
  • By default, when Tiller is installed, it does not have authentication enabled.
  • helm repo update
  • Without a max history set, the history is kept indefinitely, leaving a large number of records for helm and tiller to maintain.
  • helm init --upgrade
  • Whenever you install a chart, a new release is created.
  • one chart can be installed multiple times into the same cluster. And each can be independently managed and upgraded.
  • The helm list command will show you a list of all deployed releases.
  • helm delete
  • helm status
  • you can audit a cluster’s history, and even undelete a release (with helm rollback).
  • the Helm server (Tiller).
  • The Helm client (helm)
  • brew install kubernetes-helm
  • Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
  • it can also be run locally, and configured to talk to a remote Kubernetes cluster.
  • Role-Based Access Control - RBAC for short
  • create a service account for Tiller with the right roles and permissions to access resources.
  • run Tiller in an RBAC-enabled Kubernetes cluster.
  • run kubectl get pods --namespace kube-system and see Tiller running.
  • helm inspect
  • Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
  • For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
  • even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
  • helm version should show you both the client and server version.
  • Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data.
  • helm reset
  • The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
  • --override allows you to specify properties of Tiller’s deployment manifest.
  • helm init --override manipulates the specified properties of the final manifest (there is no “values” file).
  • The --output flag allows us skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format.
  • By default, tiller stores release information in ConfigMaps in the namespace where it is running.
  • If you switch from the default backend to the secrets backend, you'll have to do the migration on your own.
  • a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
  • Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.
  • Helm requires that kubelet have access to a copy of the socat program to proxy connections to the Tiller API.
  • A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster.
  • helm init --client-only
  • helm init --dry-run --debug
  • A panic in Tiller is almost always the result of a failure to negotiate with the Kubernetes API server
  • Tiller and Helm have to negotiate a common version to make sure that they can safely communicate without breaking API assumptions
  • helm delete --purge
  • Helm stores some files in $HELM_HOME, which is located by default in ~/.helm
  • A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
  • Think of it like the Kubernetes equivalent of a Homebrew formula, an Apt dpkg, or a Yum RPM file.
  • A Repository is the place where charts can be collected and shared.
  • Set the $HELM_HOME environment variable
  • each time it is installed, a new release is created.
  • Helm installs charts into Kubernetes, creating a new release for each installation. And to find new charts, you can search Helm chart repositories.
  • chart repository is named stable by default
  • helm search shows you all of the available charts
  • helm inspect
  • To install a new package, use the helm install command. At its simplest, it takes only one argument: The name of the chart.
  • If you want to use your own release name, simply use the --name flag on helm install
  • additional configuration steps you can or should take.
  • Helm does not wait until all of the resources are running before it exits. Many charts require Docker images that are over 600M in size, and may take a long time to install into the cluster.
  • helm status
  • helm inspect values
  • helm inspect values stable/mariadb
  • override any of these settings in a YAML formatted file, and then pass that file during installation.
  • helm install -f config.yaml stable/mariadb
  • --values (or -f): Specify a YAML file with overrides.
  • --set (and its variants --set-string and --set-file): Specify overrides on the command line.
  • Values that have been --set can be cleared by running helm upgrade with --reset-values specified.
  • Chart designers are encouraged to consider the --set usage when designing the format of a values.yaml file.
  • --set-file key=filepath is another variant of --set. It reads the file and use its content as a value.
  • inject a multi-line text into values without dealing with indentation in YAML.
  • An unpacked chart directory
  • When a new version of a chart is released, or when you want to change the configuration of your release, you can use the helm upgrade command.
  • Kubernetes charts can be large and complex, Helm tries to perform the least invasive upgrade.
  • It will only update things that have changed since the last release
  • $ helm upgrade -f panda.yaml happy-panda stable/mariadb
  • deployment
  • If both are used, --set values are merged into --values with higher precedence.
  • The helm get command is a useful tool for looking at a release in the cluster.
  • helm rollback
  • A release version is an incremental revision. Every time an install, upgrade, or rollback happens, the revision number is incremented by 1.
  • helm history
  • a release name cannot be re-used.
  • you can rollback a deleted resource, and have it re-activate.
  • helm repo list
  • helm repo add
  • helm repo update
  • The Chart Development Guide explains how to develop your own charts.
  • helm create
  • helm lint
  • helm package
  • Charts that are archived can be loaded into chart repositories.
  • chart repository server
  • Tiller can be installed into any namespace.
  • Limiting Tiller to only be able to install into specific namespaces and/or resource types is controlled by Kubernetes RBAC roles and rolebindings
  • Release names are unique PER TILLER INSTANCE
  • Charts should only contain resources that exist in a single namespace.
  • not recommended to have multiple Tillers configured to manage resources in the same namespace.
  • a client-side Helm plugin. A plugin is a tool that can be accessed through the helm CLI, but which is not part of the built-in Helm codebase.
  • Helm plugins are add-on tools that integrate seamlessly with Helm. They provide a way to extend the core feature set of Helm, but without requiring every new feature to be written in Go and added to the core tool.
  • Helm plugins live in $(helm home)/plugins
  • The Helm plugin model is partially modeled on Git’s plugin model
  • helm is referred to as the porcelain layer, with plugins being the plumbing.
  • helm plugin install https://github.com/technosophos/helm-template
  • command is the command that this plugin will execute when it is called.
  • Environment variables are interpolated before the plugin is executed.
  • The command itself is not executed in a shell. So you can’t oneline a shell script.
  • Helm is able to fetch Charts using HTTP/S
  • Variables like KUBECONFIG are set for the plugin if they are set in the outer environment.
  • In Kubernetes, granting a role to an application-specific service account is a best practice to ensure that your application is operating in the scope that you have specified.
  • restrict Tiller’s capabilities to install resources to certain namespaces, or to grant a Helm client running access to a Tiller instance.
  • Service account with cluster-admin role
  • The cluster-admin role is created by default in a Kubernetes cluster
  • Deploy Tiller in a namespace, restricted to deploying resources only in that namespace
  • Deploy Tiller in a namespace, restricted to deploying resources in another namespace
  • When running a Helm client in a pod, in order for the Helm client to talk to a Tiller instance, it will need certain privileges to be granted.
  • SSL Between Helm and Tiller
  • The Tiller authentication model uses client-side SSL certificates.
  • creating an internal CA, and using both the cryptographic and identity functions of SSL.
  • Helm is a powerful and flexible package-management and operations tool for Kubernetes.
  • default installation applies no security configurations
  • with a cluster that is well-secured in a private network with no data-sharing or no other users or teams.
  • With great power comes great responsibility.
  • Choose the Best Practices you should apply to your helm installation
  • Role-based access control, or RBAC
  • Tiller’s gRPC endpoint and its usage by Helm
  • Kubernetes employs a role-based access control (or RBAC) system (as do modern operating systems) to help mitigate the damage that can be done if credentials are misused or bugs exist.
  • In the default installation the gRPC endpoint that Tiller offers is available inside the cluster (not external to the cluster) without authentication configuration applied.
  • Tiller stores its release information in ConfigMaps. We suggest changing the default to Secrets.
  • release information
  • charts
  • Charts are a kind of package that not only installs containers you may or may not have validated yourself, but may also install into more than one namespace.
  • As with all shared software, in a controlled or shared environment you must validate all software you install yourself before you install it.
  • Helm’s provenance tools to ensure the provenance and integrity of charts
  •  
    "Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses."
張 旭

Modules - Configuration Language - Terraform by HashiCorp - 0 views

  • Provider blocks can appear in any module, but it is recommended that they be placed only in the root module of a configuration.
  • In all cases it is recommended to keep explicit provider configurations only in the root module and pass them (whether implicitly or explicitly) down to descendent modules
  • Provider configurations are used for all operations on associated resources, including destroying remote objects and refreshing state.
  • all resources created for a particular provider configuration must be destroyed before that provider configuration is removed, unless the related resources are re-configured to use a different provider configuration first.
  • a child module automatically inherits default (un-aliased) provider configurations from its parent.
  • recommended in the common case where only a single configuration is needed for each provider across the entire configuration.
  • the providers argument within a module block can be used to define explicitly which provider configs are made available to the child module.
  • Once the providers argument is used in a module block, it overrides all of the default inheritance behavior, so it is necessary to enumerate mappings for all of the required providers.
張 旭

How to create reusable infrastructure with Terraform modules - 0 views

  • auto scaling schedule
  • The easiest way to create a versioned module is to put the code for the module in a separate Git repository and to set the source parameter to that repository’s URL.
張 旭

git - What is the difference between GitHub Flow and GitLab Flow? - Stack Overflow - 0 views

  • In order to keep master a true record of known working production code, the actual deployment to production should happen from the feature branch before merging it into master.
  • This approach works well if we seldom publish results of our work. (Maybe once every 2 weeks).
  • Aside from promoting a ready-to-deploy master branch and feature branches (same as GitHub Flow), it introduces three other kinds of branches.
張 旭

Understanding the GitHub flow · GitHub Guides - 0 views

  • anything in the master branch is always deployable.
  • Your branch name should be descriptive
  • Commits also create a transparent history of your work that others can follow to understand what you've done and why.
  • each commit is considered a separate unit of change.
  • By writing clear commit messages, you can make it easier for other people to follow along and provide feedback.
  • Pull Requests initiate discussion about your commits.
  • If you're using a Fork & Pull Model, Pull Requests provide a way to notify project maintainers about the changes you'd like them to consider.
  • Pull Requests are designed to encourage and capture this type of conversation.
  • You can also continue to push to your branch in light of discussion and feedback about your commits.
  • If your branch causes issues, you can roll it back by deploying the existing master into production.
  • With GitHub, you can deploy from a branch for final testing in production before merging to master.
  • Once your changes have been verified in production, it is time to merge your code into the master branch.
  •  
    "anything in the master branch is always deployable."
張 旭

Introduction to GitLab Flow | GitLab - 0 views

  • GitLab flow as a clearly defined set of best practices. It combines feature-driven development and feature branches with issue tracking.
  • In Git, you add files from the working copy to the staging area. After that, you commit them to your local repo. The third step is pushing to a shared remote repository.
  • branching model
  • The biggest problem is that many long-running branches emerge that all contain part of the changes.
  • It is a convention to call your default branch master and to mostly branch from and merge to this.
  • Nowadays, most organizations practice continuous delivery, which means that your default branch can be deployed.
  • Continuous delivery removes the need for hotfix and release branches, including all the ceremony they introduce.
  • Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
  • GitHub flow assumes you can deploy to production every time you merge a feature branch.
  • You can deploy a new version by merging master into the production branch. If you need to know what code is in production, you can just checkout the production branch to see.
  • Production branch
  • Environment branches
  • have an environment that is automatically updated to the master branch.
  • deploy the master branch to staging.
  • To deploy to pre-production, create a merge request from the master branch to the pre-production branch.
  • Go live by merging the pre-production branch into the production branch.
  • Release branches
  • work with release branches if you need to release software to the outside world.
  • each branch contains a minor version
  • After announcing a release branch, only add serious bug fixes to the branch.
  • merge these bug fixes into master, and then cherry-pick them into the release branch.
  • Merging into master and then cherry-picking into release is called an “upstream first” policy
  • Tools such as GitHub and Bitbucket choose the name “pull request” since the first manual action is to pull the feature branch.
  • Tools such as GitLab and others choose the name “merge request” since the final action is to merge the feature branch.
  • If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team.
  • the merge request automatically updates when new commits are pushed to the branch.
  • If the assigned person does not feel comfortable, they can request more changes or close the merge request without merging.
  • In GitLab, it is common to protect the long-lived branches, e.g., the master branch, so that most developers can’t modify them.
  • if you want to merge into a protected branch, assign your merge request to someone with maintainer permissions.
  • After you merge a feature branch, you should remove it from the source control software.
  • Having a reason for every code change helps to inform the rest of the team and to keep the scope of a feature branch small.
  • If there is no issue yet, create the issue
  • The issue title should describe the desired state of the system.
  • For example, the issue title “As an administrator, I want to remove users without receiving an error” is better than “Admin can’t remove users.”
  • create a branch for the issue from the master branch
  • If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
  • Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it’s ready.
  • When they press the merge button, GitLab merges the code and creates a merge commit that makes this event easily visible later on.
  • Merge requests always create a merge commit, even when the branch could be merged without one. This merge strategy is called “no fast-forward” in Git.
  • Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case, it is no problem to reuse the same branch name since the first branch was deleted when it was merged.
  • At any time, there is at most one branch for every issue.
  • It is possible that one feature branch solves more than one issue.
  • GitLab closes these issues when the code is merged into the default branch.
  • If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
  • use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
  • you should never rebase commits you have pushed to a remote server.
  • Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
  • if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
  • never rebase commits authored by other people.
  • it is a bad idea to rebase commits that you have already pushed.
  • If you revert a merge commit and then change your mind, revert the revert commit to redo the merge.
  • Often, people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
  • Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
  • every time you rebase, you have to resolve similar conflicts.
  • Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
  • A good way to prevent creating many merge commits is to not frequently merge master into the feature branch.
  • keep your feature branches short-lived.
  • Most feature branches should take less than one day of work.
  • If your feature branches often take more than a day of work, try to split your features into smaller units of work.
  • You could also use feature toggles to hide incomplete features so you can still merge back into master every day.
  • you should try to prevent merge commits, but not eliminate them.
  • Your codebase should be clean, but your history should represent what actually happened.
  • If you rebase code, the history is incorrect, and there is no way for tools to remedy this because they can’t deal with changing commit identifiers
  • Commit often and push frequently
  • You should push your feature branch frequently, even when it is not yet ready for review.
  • A commit message should reflect your intention, not just the contents of the commit.
  • each merge request must be tested before it is accepted.
  • test the master branch after each change.
  • If new commits in master cause merge conflicts with the feature branch, merge master back into the branch to make the CI server re-run the tests.
  • When creating a feature branch, always branch from an up-to-date master.
  • Do not merge from upstream again if your code can work and merge cleanly without doing so.
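Two of the less obvious points above, reverting a merge and then redoing it, and refreshing a conflicting feature branch from master, look like this in practice (a sketch; the hashes and branch name are placeholders):

    # Revert a bad merge commit on master (-m 1 keeps master's side as the mainline) ...
    $ git revert -m 1 <merge-commit-sha>
    # ... and if you change your mind later, revert the revert to redo the merge.
    $ git revert <revert-commit-sha>

    # If new commits in master conflict with the feature branch, merge master
    # back into the branch so the CI server re-runs the tests.
    $ git checkout feature/report-export
    $ git merge master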
張 旭

- 0 views

  • A fast-forward merge can happen when the current branch has no extra commits compared to the branch we’re merging.
  • With a no-fast-forward merge, Git creates a new merging commit on the active branch.
  • We can manually remove the changes we don't want to keep, save the changes, add the changed file again, and commit the changes
  • A git rebase copies the commits from the current branch, and puts these copied commits on top of the specified branch.
  • The branch that we're rebasing always has the latest changes that we want to keep!
  • A git rebase changes the history of the project as new hashes are created for the copied commits!
  • Rebasing is great whenever you're working on a feature branch, and the master branch has been updated.
  • An interactive rebase can also be useful on the branch you're currently working on, and want to modify some commits.
  • A git reset gets rid of all the current staged files and gives us control over where HEAD should point to.
  • A soft reset moves HEAD to the specified commit (or the index of the commit compared to HEAD)
  • Git should simply reset its state back to where it was on the specified commit: this even includes the changes in your working directory and staged files!
  • By reverting a certain commit, we create a new commit that contains the reverted changes!
  • Performing a git revert is very useful in order to undo a certain commit, without modifying the history of the branch.
  • By cherry-picking a commit, we create a new commit on our active branch that contains the changes that were introduced by the cherry-picked commit.
  • a fetch simply downloads new data.
  • A git pull is actually two commands in one: a git fetch, and a git merge
  • git reflog is a very useful command in order to show a log of all the actions that have been taken
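The operations summarized above can be tried safely in a scratch repository; a sketch using placeholder branch names and commit hashes:

    $ git merge feature-x                  # fast-forwards if the current branch has no extra commits
    $ git merge --no-ff feature-x          # always creates a merge commit
    $ git rebase master                    # copy this branch's commits on top of master (new hashes!)
    $ git rebase -i HEAD~3                 # interactively squash or reorder the last three commits
    $ git reset --soft HEAD~1              # move HEAD back one commit, keep the changes staged
    $ git reset --hard HEAD~1              # move HEAD back and discard working-directory and staged changes
    $ git revert <sha>                     # new commit that undoes <sha>, without rewriting history
    $ git cherry-pick <sha>                # new commit on the active branch with <sha>'s changes
    $ git fetch origin && git merge origin/master   # the two steps that make up 'git pull'
    $ git reflog                           # log of where HEAD has been, handy after a bad reset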
張 旭

How to Write a Git Commit Message - 1 views

  • a well-crafted Git commit message is the best way to communicate context about a change to fellow developers (and indeed to their future selves).
  • A diff will tell you what changed, but only the commit message can properly tell you why.
  • a commit message shows whether a developer is a good collaborator
  • a well-cared for log is a beautiful and useful thing
  • Reviewing others’ commits and pull requests becomes something worth doing, and suddenly can be done independently.
  • Understanding why something happened months or years ago becomes not only possible but efficient.
  • how to write an individual commit message.
  • Markup syntax, wrap margins, grammar, capitalization, punctuation.
  • What should it not contain?
  • issue tracking IDs
  • pull request numbers
  • The seven rules of a great Git commit message
  • Use the body to explain what and why vs. how
  • Use the imperative mood in the subject line
  • it’s a good idea to begin the commit message with a single short (less than 50 character) line summarizing the change, followed by a blank line and then a more thorough description.
  • forces the author to think for a moment about the most concise way to explain what’s going on.
  • If you’re having a hard time summarizing, you might be committing too many changes at once.
  • shoot for 50 characters, but consider 72 the hard limit
  • Imperative mood just means “spoken or written as if giving a command or instruction”.
  • Git itself uses the imperative whenever it creates a commit on your behalf.
  • when you write your commit messages in the imperative, you’re following Git’s own built-in conventions.
  • A properly formed Git commit subject line should always be able to complete the following sentence: "If applied, this commit will <your subject line here>."
  • explaining what changed and why
  • Code is generally self-explanatory in this regard (and if the code is so complex that it needs to be explained in prose, that’s what source comments are for).
  • there are tab completion scripts that take much of the pain out of remembering the subcommands and switches.
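Applied together, the rules above give a message shaped like the sketch below (the message text is only an example of the 50-character subject, blank line, and 72-character body):

    $ git commit          # opens $EDITOR; a message following the rules might read:

        Summarize changes in around 50 characters or less

        More detailed explanatory text, wrapped at about 72 characters.
        Explain what changed and why; the diff already shows how.

        Fixes: #123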
張 旭

An In-Depth Look at TDD (Test-Driven Development) - 简书 - 0 views

  • The principle of TDD is to write unit-test code before writing the functional code; the test code determines what production code needs to be written.
  • TDD does not directly improve your design skills; it only gives you more opportunities and more of a safety net for improving the design.
  • When writing a test, focus only on the requirement and the program's inputs and outputs, not on the intermediate steps.
  • When writing the implementation, ignore other requirements and satisfy the current small requirement in the simplest way possible.
  • Focus on one thing at a time, so the mental load is smaller.
  • Clarify requirement details up front.
  • Write a failing test: it describes one small requirement, and you only need to care about inputs and outputs; at this point you don't need to care about the implementation at all.
  • Focus on implementing the current small requirement in the fastest way possible; don't worry about other requirements, or about how dreadful the code quality looks.
  • With no requirement to think about and no pressure to implement, you only need to find a code smell and remove it with one refactoring move, turning the code into clean code.
  • Before doing TDD, break the task down: split one big requirement into many small requirements.
  • Follows the Given-When-Then format.
  • Contains assertions.
  • Can be run repeatedly.
  • Unit-test infrastructure.
  • Only write test code where you lack confidence.
  • Wherever there is a comment, you can extract a method and let the method name replace the comment.
  • Pick the simplest case and start writing, passing the test with the simplest possible code. Gradually add tests to make the code more complex, and use refactoring to drive out the design.
  • With refactoring as a tool, the pressure of doing design up front is much lower: protected by the test code, we can refactor the implementation at any time.
  • Iterating on paper is always faster than changing code.
  • Break down tasks and list Examples.
  • If your steps are too big, you will trip yourself up.
  • For exploratory technical research (a Spike) that won't be maintained long-term, where the cost of setting up test infrastructure is high, stick with manual testing.
  • TDD is the developers' own responsibility.
  • After assigning a task, have the newcomer draw a diagram first, so you can give feedback on their design before they start writing code.
  •  
    "The principle of TDD is to write unit-test code before writing the functional code; the test code determines what production code needs to be written."
張 旭

bbatsov/rails-style-guide: A community-driven Ruby on Rails 4 style guide - 0 views

  • custom initialization code in config/initializers. The code in initializers executes on application startup
  • Keep initialization code for each gem in a separate file with the same name as the gem
  • Mark additional assets for precompilation
  • config/environments/production.rb
  • Create an additional staging environment that closely resembles the production one
  • Keep any additional configuration in YAML files under the config/ directory
  • Rails::Application.config_for(:yaml_file)
  • Use nested routes to express better the relationship between ActiveRecord models
  • If you need to nest routes more than 1 level deep, use the shallow: true option.
  • namespaced routes to group related actions
  • Don't use match to define any routes unless there is need to map multiple request types among [:get, :post, :patch, :put, :delete] to a single action using :via option.
  • Keep the controllers skinny
  • all the business logic should naturally reside in the model
  • Share no more than two instance variables between a controller and a view.
  • using a template
  • Prefer render plain: over render text
  • Prefer corresponding symbols to numeric HTTP status codes
  • without abbreviations
  • Keep your models for business logic and data-persistence only
  • Avoid altering ActiveRecord defaults (table names, primary key, etc)
  • Group macro-style methods (has_many, validates, etc) in the beginning of the class definition
  • Prefer has_many :through to has_and_belongs_to_many
  • self[:attribute]
  • self[:attribute] = value
  • validates
  • Keep custom validators under app/validators
  • Consider extracting custom validators to a shared gem
  • It is preferable to make a class method instead, which serves the same purpose as the named scope.
  • returns an ActiveRecord::Relation object
  • .update_attributes
  • Override the to_param method of the model
  • Use the friendly_id gem. It allows creation of human-readable URLs by using some descriptive attribute of the model instead of its id
  • find_each to iterate over a collection of AR objects
  • .find_each
  • .find_each
  • Looping through a collection of records from the database (using the all method, for example) is very inefficient since it will try to instantiate all the objects at once
  • always call before_destroy callbacks that perform validation with prepend: true
  • Define the dependent option to the has_many and has_one associations
  • always use the exception raising bang! method or handle the method return value.
  • When persisting AR objects
  • Avoid string interpolation in queries
  • param will be properly escaped
  • Consider using named placeholders instead of positional placeholders
  • use of find over where when you need to retrieve a single record by id
  • use of find_by over where and find_by_attribute
  • use of where.not over SQL
  • use heredocs with squish
  • Keep the schema.rb (or structure.sql) under version control.
  • Use rake db:schema:load instead of rake db:migrate to initialize an empty database
  • Enforce default values in the migrations themselves instead of in the application layer
  • change_column_default
  • imposing data integrity from the Rails app is impossible
  • use the change method instead of up and down methods.
  • constructive migrations
  • use models in migrations, make sure you define them so that you don't end up with broken migrations in the future
  • Don't use non-reversible migration commands in the change method.
  • In this case, the block will be used by create_table when rolling back.
  • Never call the model layer directly from a view
  • Never make complex formatting in the views, export the formatting to a method in the view helper or the model.
  • When the labels of an ActiveRecord model need to be translated, use the activerecord scope
  • Separate the texts used in the views from translations of ActiveRecord attributes
  • Place the locale files for the models in a folder locales/models
  • the texts used in the views in folder locales/views
  • config/application.rb config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}')]
  • I18n.t
  • I18n.l
  • Use "lazy" lookup for the texts used in views.
  • Use the dot-separated keys in the controllers and models
  • Reserve app/assets for custom stylesheets, javascripts, or images
  • Third party code such as jQuery or bootstrap should be placed in vendor/assets
  • Provide both HTML and plain-text view templates
  • config.action_mailer.raise_delivery_errors = true
  • Use a local SMTP server like Mailcatcher in the development environment
  • Provide default settings for the host name
  • The _url methods include the host name and the _path methods don't
  • _url
  • Format the from and to addresses properly
  • default from:
  • When sending HTML emails, all styles should be inline.
  • Sending emails while generating the page response should be avoided. It causes delays in loading the page, and the request can time out if multiple emails are sent.
  • .start_with?
  • .end_with?
  • &.
  • Config your timezone accordingly in application.rb
  • config.active_record.default_timezone = :local
  • it can be only :utc or :local
  • Don't use Time.parse
  • Time.zone.parse
  • Don't use Time.now
  • Time.zone.now
  • Put gems used only for development or testing in the appropriate group in the Gemfile
  • Add all OS X specific gems to a darwin group in the Gemfile, and all Linux specific gems to a linux group
  • Do not remove the Gemfile.lock from version control.
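For the schema-loading advice above, the shell side is small; a sketch for initializing a fresh database in a Rails 4-era app (adjust RAILS_ENV as needed):

    # Initialize an empty database from the checked-in schema instead of
    # replaying every migration.
    $ bundle exec rake db:create
    $ bundle exec rake db:schema:load            # or db:structure:load when using structure.sql
    $ RAILS_ENV=test bundle exec rake db:schema:load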
張 旭

Ruby on Rails 實戰聖經 | Automated Testing - 0 views

  • The smallest testing granularity is the Unit Test, which checks that individual classes and methods produce the expected results. The next level up is the Integration Test, which checks that several components interact correctly. The largest granularity is the Acceptance Test, which tests the whole piece of software from the user's point of view.
  • Unit tests are usually the developers' own responsibility, because only you know how the internals of each class and method are designed.
  • "Where would we find the time for automated testing?" That attitude is rather short-sighted and amateurish.
  • It is really an investment: with a simple program you might get it right after one manual run, but with a complex program you often won't get it right the first time, and you will waste a lot of time checking whether what you wrote is correct; writing tests saves a great deal of that time. Not to mention that when you need to confirm tomorrow, next week, or next month that other code has no unintended side effects, having a test suite saves a lot of manual checking.
  • Almost every language has a testing tool in the family of xUnit test frameworks.
  • The standard flow is: 1. (Setup) prepare the test data; 2. (Exercise) run the method under test; 3. (Verify) check whether the result is correct; 4. (Teardown) clean up and restore the data.
  • RSpec is an improved xUnit-style test framework that is very popular in the Rails community.
  • Individual unit tests should be independent and not affect one another.
  • An it block is one unit test, and the expect method inside it performs the verification.
  • In RSpec, a small unit test is also called an example.
  • RSpec is a BDD (Behavior-driven development) test framework: where TDD thinks in terms of tests that check a program's results, BDD emphasizes a spec mindset that describes how the program should behave.
  • describe and context help you organize and group tests, and both can be nested arbitrarily.
  • Each it is one small test, inside which we use expect(…).to to set expectations.
  • let can simplify the before usage above; it supports lazy evaluation and memoization, i.e. it is initialized only when needed and only once per example, which makes the tests run more efficiently.
  • let!, by contrast, is initialized once at the start of the test rather than lazily.
  • List the tests you plan to write ahead of time, or tests that should temporarily not be run.
  • specify and example are both synonyms for the it method.
  • As a more advanced step, you can write your own Matchers.
  • RSpec is divided into several kinds of tests: Model tests, Controller tests, View tests, Helper tests, and Route and Request tests.
  • Rails has a built-in Fixture feature for creating fake data, using one YAML file per Model.
  • Remember to make sure the test data is cleaned up between test cases.
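From the command line, an RSpec suite in a Rails project is typically driven like this (a sketch; it assumes the rspec-rails gem is already in the Gemfile):

    # One-time setup: generates .rspec plus spec/spec_helper.rb and spec/rails_helper.rb.
    $ bin/rails generate rspec:install

    # Run the whole suite, a single file, or a single example by line number.
    $ bundle exec rspec
    $ bundle exec rspec spec/models/user_spec.rb
    $ bundle exec rspec spec/models/user_spec.rb:42 --format documentation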