"Riemann aggregates events from your servers and applications with a powerful stream processing language. Send an email for every exception in your app. Track the latency distribution of your web app. See the top processes on any host, by memory and CPU. Combine statistics from every Riak node in your cluster and forward to Graphite. Track user activity from second to second."
"Home Assistant is an open-source home automation platform running on Python 3. Track and control all devices at home and automate control. Installation in less than a minute.
"
A chart is a collection of files
that describe a related set of Kubernetes resources.
A single chart
might be used to deploy something simple, like a memcached pod, or
something complex, like a full web app stack with HTTP servers,
databases, caches, and so on.
Charts are created as files laid out in a particular directory tree,
then they can be packaged into versioned archives to be deployed.
A chart is organized as a collection of files inside of a directory.
values.yaml   # The default configuration values for this chart
charts/       # A directory containing any charts upon which this chart depends
templates/    # A directory of templates that, when combined with values,
              # will generate valid Kubernetes manifest files
version: A SemVer 2 version (required)
apiVersion: The chart API version, always "v1" (required)
Every chart must have a version number, and it must follow the
SemVer 2 standard; non-SemVer names are explicitly
disallowed by the system.
When generating a
package, the helm package command will use the version that it finds
in the Chart.yaml as a token in the package name.
The appVersion field is not related to the version field; it is
a way of specifying the version of the application.
appVersion: The version of the app that this chart contains (optional). This needn't be SemVer.
If the latest version of a chart in the
repository is marked as deprecated, then the chart as a whole is considered to
be deprecated.
deprecated: Whether this chart is deprecated (optional, boolean)
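Putting those fields together, a minimal Chart.yaml might look like this sketch (the chart name and versions are illustrative):

    apiVersion: v1                # the chart API version, always "v1"
    name: my-app                  # illustrative chart name
    version: 1.2.3                # chart version; must be SemVer 2
    appVersion: "9.4"             # version of the packaged app; needn't be SemVer
    description: An example chart
    deprecated: false             # optional boolean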
One chart may depend on any number of other charts.
Dependencies can be dynamically linked through the requirements.yaml
file or brought into the charts/ directory and managed manually.
The preferred method of declaring dependencies is a
requirements.yaml file inside of your chart.
A requirements.yaml file is a simple file for listing your
dependencies.
The repository field is the full URL to the chart repository.
You must also use helm repo add to add that repo locally.
Run helm dependency update,
and it will use your dependency file to download all the specified
charts into your charts/ directory for you.
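A requirements.yaml sketch, with hypothetical chart names and repository URLs:

    dependencies:
      - name: apache
        version: 1.2.3
        repository: http://example.com/charts
      - name: mysql
        version: 3.2.1
        repository: http://another.example.com/charts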
When helm dependency update retrieves charts, it will store them as
chart archives in the charts/ directory.
Managing charts with requirements.yaml is a good way to easily keep
charts updated, and also share requirements information throughout a
team.
All charts are loaded by default.
The condition field holds one or more YAML paths (delimited by commas).
If this path exists in the top parent’s values and resolves to a boolean value,
the chart will be enabled or disabled based on that boolean value.
The tags field is a YAML list of labels to associate with this chart.
All charts with tags can be enabled or disabled by
specifying the tag and a boolean value.
The --set parameter can be used as usual to alter tag and condition values.
Conditions (when set in values) always override tags.
The first condition path that exists wins and subsequent ones for that chart are ignored.
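A sketch of how condition and tags interact, using a hypothetical subchart1:

    # parent's requirements.yaml
    dependencies:
      - name: subchart1
        version: 0.1.0
        repository: http://example.com/charts
        condition: subchart1.enabled
        tags:
          - front-end

    # parent's values.yaml
    subchart1:
      enabled: true
    tags:
      front-end: false

Because the condition path subchart1.enabled exists in the parent's values, it wins over the front-end tag and subchart1 is enabled.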
The keys containing the values to be imported can be specified in the parent chart’s requirements.yaml file
using a YAML list. Each item in the list is a key which is imported from the child chart’s exports field.
By specifying the key data in our import list, Helm looks in the exports field of the child
chart for a data key and imports its contents.
The parent key data is not contained in the parent's final values. If you need to specify the
parent key, use the 'child-parent' format.
To access values that are not contained in the exports key of the child chart’s values, you will need to
specify the source key of the values to be imported (child) and the destination path in the parent chart’s
values (parent).
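A sketch of both forms, assuming a hypothetical subchart:

    # child's values.yaml
    exports:
      data:
        myint: 99

    # parent's requirements.yaml, exports form:
    # imports myint: 99 into the parent's values (without the data key)
    dependencies:
      - name: subchart
        version: 0.1.0
        repository: http://example.com/charts
        import-values:
          - data

    # child-parent form, for values outside the child's exports key:
    #   import-values:
    #     - child: default.data
    #       parent: myimports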
To drop a dependency into your charts/ directory, use the
helm fetch command
A dependency can be either a chart archive (foo-1.2.3.tgz) or an
unpacked chart directory.
The name cannot start with _ or '.'; such files are ignored by the chart loader.
When a chart is installed, a single release is created with all the objects for the chart and its dependencies.
Helm Chart templates are written in the
Go template language, with the
addition of 50 or so add-on template
functions from the Sprig library and a
few other specialized functions
When
Helm renders the charts, it will pass every file in that directory
through the template engine.
Chart developers may supply a file called values.yaml inside of a
chart. This file can contain default values.
Chart users may supply a YAML file that contains values. This can be
provided on the command line with helm install.
When a user supplies custom values, these values will override the
values in the chart’s values.yaml file.
Template files follow the standard conventions for writing Go templates
{{default "minio" .Values.storage}}
Values that are supplied via a values.yaml file (or via the --set
flag) are accessible from the .Values object in a template.
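For example, with the template above, a user-supplied file (hypothetical name myvalues.yaml) overrides the chart default:

    # chart's values.yaml
    storage: "minio"

    # user's myvalues.yaml, applied with: helm install -f myvalues.yaml ./mychart
    storage: "s3"

Here {{default "minio" .Values.storage}} renders "s3".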
Predefined values are available to every template and
cannot be overridden; the names are case-sensitive.
Release.Name: The name of the release (not the chart)
Release.IsUpgrade: This is set to true if the current operation is an upgrade or rollback.
Release.Revision: The revision number. It begins at 1, and increments with
each helm upgrade
Chart: The contents of the Chart.yaml
Files: A map-like object containing all non-special files in the chart.
Files can be
accessed using {{index .Files "file.name"}} or using the {{.Files.Get name}} or
{{.Files.GetString name}} functions.
Files excluded using .helmignore are not accessible through Files. You can also access the contents of a file
as []byte using {{.Files.GetBytes}}.
Any unknown Chart.yaml fields will be dropped
Chart.yaml cannot be
used to pass arbitrarily structured data into the template.
A values file is formatted in YAML.
A chart may include a default
values.yaml file
Values supplied by the user are merged into the default
values file.
The default values file included inside of a chart must be named
values.yaml
It is accessible inside of templates using the
.Values object.
Values files can declare values for the top-level chart, as well as for
any of the charts that are included in that chart’s charts/ directory.
Charts at a higher level have access to all of the variables defined
beneath; lower-level charts cannot access values in
parent charts.
Values are namespaced, but namespaces are pruned:
when a value is passed down to a subchart, its scope is reduced and the
namespace prefix removed.
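A sketch, assuming a top-level chart with a mysql subchart in charts/:

    # top-level values.yaml
    title: "My App"    # seen by the top-level chart as .Values.title
    mysql:
      max: 100         # seen by the mysql subchart simply as .Values.max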
Helm also supports a special “global” value:
a way of sharing one top-level variable with all
subcharts, which is useful for things like setting metadata properties
like labels.
If a subchart declares a global variable, that global will be passed
downward (to the subchart’s subcharts), but not upward to the parent
chart.
global variables of parent charts take precedence over the global variables from subcharts.
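A sketch of a global, set in the parent's values.yaml:

    global:
      app: MyApp       # every chart and subchart reads this as .Values.global.app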
helm lint examines a chart for possible issues.
A chart repository is an HTTP server that houses one or more packaged
charts
Any HTTP server that can serve YAML files and tar files and can answer
GET requests can be used as a repository server.
Helm does not provide tools for uploading charts to
remote repository servers.
The only way to add a chart to $HELM_HOME/starters is to manually
copy it there.
Helm provides a hook mechanism to allow chart developers to intervene
at certain points in a release’s life cycle.
Execute a Job to back up a database before installing a new chart,
and then execute a second job after the upgrade in order to restore
data.
Hooks are declared as an annotation in the metadata section of a manifest
Hooks work like regular templates, but they have special annotations
pre-install: Executes after templates are rendered, but before any resources are created in Kubernetes
post-install: Executes after all resources are loaded into Kubernetes
pre-delete: Executes on a deletion request before any resources are deleted from Kubernetes
post-delete: Executes on a deletion request after all of the release’s
resources have been deleted.
pre-upgrade: Executes on an upgrade request after templates are rendered, but before any resources are loaded into Kubernetes
post-upgrade: Executes on an upgrade after all resources have been upgraded
pre-rollback: Executes on a rollback request after templates are rendered, but before any resources have been rolled back
post-rollback: Executes on a rollback request after all resources
have been modified.
crd-install: Adds CRD resources before any other checks are run (used only on installation)
test-success: Executes when running helm test and expects the pod to
return successfully (return code == 0).
test-failure: Executes when running helm test and expects the pod to
fail (return code != 0).
Hooks allow you, the chart developer, an opportunity to perform
operations at strategic points in a release lifecycle
Tiller then loads the hook with the lowest weight first (negative to positive)
Tiller returns the release name (and other data) to the client
If the resource is a Job kind, Tiller
will wait until the job successfully runs to completion.
if the job
fails, the release will fail. This is a blocking operation, so the
Helm client will pause while the Job is run.
If they
have hook weights (see below), they are executed in weighted order. Otherwise,
ordering is not guaranteed.
It is good practice to add a hook weight, and to set it
to 0 if weight is not important.
The resources that a hook creates are not tracked or managed as part of the
release.
Once Tiller verifies that the hook has reached its ready state, it will leave the hook resource alone.
To destroy such
resources, you need to either write code to perform this operation in a pre-delete
or post-delete hook or add "helm.sh/hook-delete-policy" annotation to the hook template file.
Hooks are just Kubernetes manifest files with special annotations in the
metadata section
One resource can implement multiple hooks
no limit to the number of different resources that
may implement a given hook.
When subcharts declare hooks, those are also evaluated. There is no way
for a top-level chart to disable the hooks declared by subcharts.
Hook weights can be positive or negative numbers but must be represented as
strings.
Tiller sorts those hooks in ascending order.
Hook deletion policies
"before-hook-creation" specifies Tiller should delete the previous hook before the new hook is launched.
By default Tiller will wait for 60 seconds for a deleted hook to no longer exist in the API server before timing out.
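Putting the annotations together, a hook might look like this sketch, loosely based on the Helm docs' post-install Job example:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: "{{ .Release.Name }}-post-install-job"
      annotations:
        "helm.sh/hook": post-install
        "helm.sh/hook-weight": "-5"                     # a string, sorted ascending
        "helm.sh/hook-delete-policy": hook-succeeded    # delete the Job once it succeeds
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: post-install-job
              image: alpine:3.3
              command: ["/bin/sleep", "10"]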
Custom Resource Definitions (CRDs) are a special kind in Kubernetes.
The crd-install hook is executed very early during an installation, before
the rest of the manifests are verified.
A common reason why the hook resource might already exist is that it was not deleted following use on a previous install/upgrade.
Helm uses Go templates for templating
your resource files.
two special template functions: include and required
include
function allows you to bring in another template, and then pass the results to other
template functions.
The required function allows you to declare a particular
values entry as required for template rendering.
If the value is empty, the template
rendering will fail with a user submitted error message.
When you are working with string data, you are always safer quoting the
strings than leaving them as bare words.
Quote strings, don't quote integers:
when working with integers do not quote the values (this does not apply to
env variable values, which are expected to be strings even when they represent integers).
to include a template, and then perform an operation
on that template’s output, Helm has a special include function
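The example being described is presumably along these lines:

    {{ include "toYaml" $value | nindent 2 }}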
The above includes a template called toYaml, passes it $value, and
then passes the output of that template to the nindent function.
Go provides a way for setting template options to control behavior
when a map is indexed with a key that’s not present in the map
The required function gives developers the ability to declare a value entry
as required for template rendering.
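For example:

    value: {{ required "A valid .Values.who entry required!" .Values.who }}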
The tpl function allows developers to evaluate strings as templates inside a template.
Rendering an external configuration file:
(.Files.Get "conf/app.conf")
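In template form, that looks like:

    {{ tpl (.Files.Get "conf/app.conf") . }}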
Image pull secrets are essentially a combination of registry, username, and password.
Automatically Roll Deployments When ConfigMaps or Secrets change
Configmaps or secrets are often injected as configuration
files in containers, and
a restart may be required should those
be updated with a subsequent helm upgrade.
The sha256sum function can be used to ensure a deployment’s
annotation section is updated if another file changes
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
helm upgrade --recreate-pods
"helm.sh/resource-policy": keep
The annotation marks resources that should not be deleted when Helm runs
helm delete.
After deletion, the resource becomes
orphaned; Helm will no longer manage it in any way.
You can create some reusable parts in your chart:
in the templates/ directory, any file that begins with an
underscore (_) is not expected to output a Kubernetes manifest file.
by convention, helper templates and partials are placed in a
_helpers.tpl file.
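A sketch of a partial and its use (the label set is illustrative):

    {{/* templates/_helpers.tpl */}}
    {{- define "mychart.labels" }}
    generator: helm
    date: {{ now | htmlDate }}
    {{- end }}

    {{/* in another template */}}
    labels:
    {{ include "mychart.labels" . | indent 2 }}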
The current best practice for composing a complex application from discrete parts
is to create a top-level umbrella chart that
exposes the global configurations, and then use the charts/ subdirectory to
embed each of the components.
SAP’s Converged charts: these charts
install SAP Converged Cloud, a full OpenStack IaaS, on Kubernetes. All of the charts are collected
together in one GitHub repository, except for a few submodules.
Deis’s Workflow:
This chart exposes the entire Deis PaaS system with one chart. But it’s different
from the SAP chart in that this umbrella chart is built from each component, and
each component is tracked in a different Git repository.
YAML is a superset of JSON
any valid JSON structure ought to be valid in YAML.
As a best practice, templates should follow a YAML-like syntax unless
the JSON syntax substantially reduces the risk of a formatting issue.
There are functions in Helm that allow you to generate random data,
cryptographic keys, and so on.
a chart repository is a location where packaged charts can be
stored and shared.
A chart repository is an HTTP server that houses an index.yaml file and
optionally some packaged charts.
Because a chart repository can be any HTTP server that can serve YAML and tar
files and can answer GET requests, you have a plethora of options when it comes
to hosting your own chart repository.
It is not required that a chart package be located on the same server as the
index.yaml file.
A valid chart repository must have an index file. The
index file contains information about each chart in the chart repository.
The Helm project provides an open-source Helm repository server called ChartMuseum that you can host yourself.
$ helm repo index fantastic-charts --url https://fantastic-charts.storage.googleapis.com
A repository will not be added if it does not contain a valid
index.yaml
Users can add the repository to their helm client via the helm
repo add [NAME] [URL] command, with any name they would like to use to
reference the repository.
Helm has provenance tools which help chart users verify the integrity and origin
of a package.
Integrity is established by comparing a chart to a provenance record
The provenance file contains a chart’s YAML file plus several pieces of
verification information
Chart repositories serve as a centralized collection of Helm charts.
Chart repositories must make it possible to serve provenance files over HTTP via
a specific request, and must make them available at the same URI path as the chart.
We don’t want to be “the certificate authority” for all chart
signers. Instead, we strongly favor a decentralized model, which is part
of the reason we chose OpenPGP as our foundational technology.
The Keybase platform provides a public
centralized repository for trust information.
A chart contains a number of Kubernetes resources and components that work together.
A test in a helm chart lives under the templates/ directory and is a pod definition that specifies a container with a given command to run.
The pod definition must contain one of the helm test hook annotations: helm.sh/hook: test-success or helm.sh/hook: test-failure
helm test
nest your test suite under a tests/ directory like <chart-name>/templates/tests/
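A minimal sketch of such a test pod (image and command are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: "{{ .Release.Name }}-smoke-test"
      annotations:
        "helm.sh/hook": test-success
    spec:
      restartPolicy: Never
      containers:
        - name: smoke-test
          image: busybox
          command: ["sh", "-c", "exit 0"]   # trivially passing check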
"As soon as you're connected to the Internet, applications can potentially send whatever they want to wherever they want. Most often they do this to your benefit. But sometimes, like in case of tracking software, trojans or other malware, they don't.
But you don't notice anything, because all of this happens invisibly under the hood."
Version constraints within the configuration
itself determine which versions of dependencies are potentially compatible,
but after selecting a specific version of each dependency Terraform remembers
the decisions it made in a dependency lock file so that it can (by default)
make the same decisions again in future.
At present, the dependency lock file tracks only provider dependencies.
Terraform does not remember version selections for remote modules, and so
Terraform will always select the newest available module version that meets
the specified version constraints.
The lock file is always named .terraform.lock.hcl, and this name is intended
to signify that it is a lock file for various items that Terraform caches in
the .terraform subdirectory of your working directory.
Terraform automatically creates or updates the dependency lock file each time
you run the terraform init command.
You should
include this file in your version control repository
If a particular provider has no existing recorded selection, Terraform will
select the newest available version that matches the given version constraint,
and then update the lock file to include that selection.
the "trust on first use" model
you can pre-populate checksums for a variety of
different platforms in your lock file using
the terraform providers lock command,
which will then allow future calls to terraform init to verify that the
packages available in your chosen mirror match the official packages from
the provider's origin registry.
The h1: and
zh: prefixes on these values represent different hashing schemes, each
of which represents calculating a checksum using a different algorithm.
zh:: a mnemonic for "zip hash"
h1:: a mnemonic for "hash scheme 1", which is the current preferred hashing
scheme.
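A lock file entry looks roughly like this (version numbers illustrative, hashes elided):

    provider "registry.terraform.io/hashicorp/aws" {
      version     = "3.26.0"
      constraints = "~> 3.0"
      hashes = [
        "h1:...",   # hash scheme 1, the current preferred scheme
        "zh:...",   # zip hash, one per official release package
      ]
    }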
To determine whether there still exists a dependency on a given provider,
Terraform uses two sources of truth: the configuration itself, and the state.
rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it
The console command lets you interact with your Rails application from the command line. On the underside, rails console uses IRB
rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
rake db:version is useful when troubleshooting
The doc: namespace has the tools to generate documentation for your app, API documentation, and guides.
rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
rake secret will give you a pseudo-random key to use for your session secret.
Custom rake tasks have a .rake extension and are placed in
Rails.root/lib/tasks.
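A sketch of such a task (names are illustrative):

    # lib/tasks/app.rake
    namespace :app do
      desc "Print a greeting"
      task :greet do
        puts "Hello!"
      end
    end

    # invoke it with: rake app:greet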
rails new . --git --database=postgresql
All commands can run with -h or --help to list more information
The rails server command launches a small web server named WEBrick which comes bundled with Ruby
rails server -e production -p 4000
You can run a server as a daemon by passing a -d option
The rails generate command uses templates to create a whole lot of things.
Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
Unit tests are code that tests and makes assertions about code.
Unit tests are your friend.
rails console --sandbox
rails db is a shortcut for rails dbconsole.
Each task has a description, and should help you find the thing you need.
rake tmp:clear clears all three: cache, sessions and sockets.
view templates are written in a language called ERB (Embedded Ruby) which is converted by the request cycle in Rails before being sent to the user.
Each action's purpose is to collect information to provide it to a view.
A view's purpose is to display this information in a human readable format.
The routing file holds entries in a special DSL (domain-specific language) that tells Rails how to connect incoming requests to controllers and actions.
You can create, read, update and destroy items for a resource and these operations are referred to as CRUD operations
A controller is simply a class that is defined to inherit from ApplicationController.
If not found, then it will attempt to load a template called application/new. It looks for one here because the PostsController inherits from ApplicationController
:formats specifies the format of template to be served in response. The default format is :html, and so Rails is looking for an HTML template.
:handlers is telling us what template handlers could be used to render our template.
When you call form_for, you pass it an identifying object for this
form. In this case, it's the symbol :post. This tells the form_for
helper what this form is for.
Note that the action attribute for the form is pointing at /posts/new.
When a form is submitted, the fields of the form are sent to Rails as parameters.
parameters can then be referenced inside the controller actions, typically to perform a particular task
The params method is the object which represents the parameters (or fields) coming in from the form.
Active Record is smart enough to automatically map column names to
model attributes.
Rails uses rake commands to run migrations,
and it's possible to undo a migration after it's been applied to your database.
every Rails model can be initialized with its
respective attributes, which are automatically mapped to the respective
database columns.
The migration creates a method named change which will be called when you
run this migration.
The action defined in this method is also reversible, which
means Rails knows how to reverse the change made by this migration, in case you
want to reverse it later
Migration filenames include a timestamp to ensure that they're processed in the
order that they were created.
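A typical migration along these lines (matching the posts example used in these notes):

    # db/migrate/<timestamp>_create_posts.rb
    class CreatePosts < ActiveRecord::Migration
      def change
        create_table :posts do |t|
          t.string :title
          t.text :text

          t.timestamps
        end
      end
    end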
@post.save returns a boolean indicating
whether the model was saved or not.
Strong parameters prevent an attacker from
setting the model's attributes by manipulating the hash passed to the model.
If you want to link to an action in the same controller, you don't
need to specify the :controller option, as Rails will use the current
controller by default.
Models inherit from
ActiveRecord::Base.
Active Record supplies a great deal of functionality to
your Rails models for free, including basic database CRUD (Create, Read, Update,
Destroy) operations, data validation, as well as sophisticated search support
and the ability to relate multiple models to one another.
Rails includes methods to help you validate the data that you send to models
Rails can validate a variety of conditions in a model,
including the presence or uniqueness of columns, their format, and the
existence of associated objects.
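For instance, a minimal model with validations (the rules themselves are illustrative):

    class Post < ActiveRecord::Base
      validates :title, presence: true,
                        length: { minimum: 5 }
    end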
redirect_to will tell the browser to issue another request, whereas
rendering is done within the same request as the form submission.
Each request for a
comment has to keep track of the post to which the comment is attached, thus the
initial call to the find method of the Post model to get the post in question.
pluralize is a rails helper that takes a number and a string as its
arguments. If the number is greater than one, the string will be automatically pluralized.
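For example:

    <%= pluralize(@post.comments.count, "comment") %>
    <%# renders "1 comment" or "3 comments" %>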
The render method is used so that the @post object is passed back to the new template when it is rendered.
The method: :patch option tells Rails that we want this form to be submitted
via the PATCH HTTP method which is the HTTP method you're expected to use to
update resources according to the REST protocol.
The update method accepts a hash containing the attributes
that you want to update.
Fields with errors are wrapped in a div with the class field_with_errors. You can define a CSS rule to make them
stand out.
belongs_to :post, which sets up an Active Record association
creates comments as a nested resource within posts
call destroy on Active Record objects when you want to delete
them from the database.
Rails allows you to
use the dependent option of an association to achieve this.
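A sketch of the pair of models:

    class Post < ActiveRecord::Base
      has_many :comments, dependent: :destroy   # destroy comments along with their post
    end

    class Comment < ActiveRecord::Base
      belongs_to :post
    end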
Store all external data as UTF-8; you're better off
ensuring that all external data is UTF-8.
Also use UTF-8 as the internal storage of your database.
Rails defaults to converting data from your database into UTF-8 at
the boundary.
By default, forms built with the form_for helper are sent via POST; pass method: :patch to update resources via PATCH.
The :method and :'data-confirm'
options are used as HTML5 attributes so that when the link is clicked,
Rails will first show a confirm dialog to the user, and then submit the link with method delete.
This is done via the JavaScript file jquery_ujs which is automatically included
into your application's layout (app/views/layouts/application.html.erb) when you
generated the application.
Without this file, the confirmation dialog box wouldn't appear.
The render call just defines the partial template we want to render.
As the render
method iterates over the @post.comments collection, it assigns each
comment to
a local variable named the same as the partial
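For example, rendering the collection with a partial at app/views/comments/_comment.html.erb:

    <%= render @post.comments %>
    <%# inside _comment.html.erb, each item is available as the local `comment` %>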
use the authentication system
require and permit form the heart of strong parameters;
the wrapping method is often made private to make sure
it can't be called outside its intended context.
Standard CRUD actions appear in each
controller in the following order: index, show, new, edit, create, update
and destroy; they must be placed
before any private or protected method in the controller in order to work,
as in the sketch below.
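A sketch of the create action with its private strong-parameters helper:

    def create
      @post = Post.new(post_params)
      if @post.save
        redirect_to @post
      else
        render 'new'
      end
    end

    private

    def post_params
      params.require(:post).permit(:title, :text)
    end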
objects carry both persistent data and behavior which
operates on that data
Object-Relational Mapping, commonly referred to as its abbreviation ORM, is
a technique that connects the rich objects of an application to tables in
a relational database management system
Represent associations between these models
Validate models before they get persisted to the database
The idea is that if
you configure your applications in the very same way most of the time, then this
should be the default way.
Database Table - Plural with underscores separating words
use the ActiveRecord::Base.table_name= method to specify the table
name
Model Class - Singular with the first letter of each word capitalized
Foreign keys - These fields should be named following the pattern
singularized_table_name_id
Primary keys - By default, Active Record will use an integer column named
id as the table's primary key
created_at
updated_at
(table_name)_count - Used to cache the number of belonging objects on
associations.
Single Table Inheritance (STI)
Object Relational Mapping
class_name.yml
ActiveRecord::Base.primary_key=
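Both overrides in one sketch:

    class Product < ActiveRecord::Base
      self.table_name  = "PRODUCT"       # override the plural-with-underscores default
      self.primary_key = "product_id"    # override the default "id" primary key
    end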
CRUD is an acronym for the four verbs we use to operate on data: Create,
Read, Update and Delete.
The new method will return a new
object, while
create will return the object and save it to the database.
Using the new method, an object can be instantiated without being saved;
user.save will commit the record to the database.
The update_all class method updates several records at once.
An Active Record object can be destroyed, which removes
it from the database.
Validation is a very important issue to consider when persisting to the database, so
the methods create, save and update take it into account when
running: they return false when validation fails and don't actually
perform any operation on the database.
Each of these has a bang counterpart (save!, create!, update!) that raises an exception when validation fails.
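The methods in one sketch (model and attributes illustrative):

    user = User.new(name: "David")    # instantiated, not yet saved
    user.save                         # commits to the database; false if validation fails
    user = User.create(name: "David") # new + save in one step
    user.save!                        # bang counterpart: raises on validation failure
    user.destroy                      # removes the record from the database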
Active Record callbacks allow you to attach code to certain events in the
life-cycle of your models
Rails keeps track of which migration files have been run against the database and
provides rollback features.
I think the gist is: if the user is an admin they can see all posts; otherwise they can only see posts with published = true.
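The scope class in question, as sketched in the Pundit README:

    class PostPolicy < ApplicationPolicy
      class Scope < Scope              # inherits the generated ApplicationPolicy::Scope
        def resolve
          if user.admin?
            scope.all                  # admins see every post
          else
            scope.where(published: true)
          end
        end
      end
    end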
use this class from your controller via the policy_scope method:
PostPolicy::Scope.new(current_user, Post).resolve
policy_scope(@user.posts).each
The verify_authorized
method will raise an exception if authorize has not yet been called.
verify_policy_scoped to your controller. This
will raise an exception in the vein of verify_authorized. However, it tracks
if policy_scope is used instead of authorize
If you need to
conditionally bypass verification, you can use skip_authorization;
for scopes, use skip_policy_scope.
Having a mechanism that ensures authorization happens allows developers to
thoroughly test authorization scenarios as units on the policy objects
themselves.
Pundit doesn't do anything you couldn't have easily done
yourself. It's a very small library, it just provides a few neat helpers.
all of the policy and scope classes are just plain Ruby classes
rails g pundit:policy post
define a filter that redirects unauthenticated users to the
login page
fail more gracefully
raise Pundit::NotAuthorizedError, "must be logged in" unless user
Alternatively, you can have Rails handle these exceptions as a 403 error and serve a 403 error page.
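A sketch of handling the error gracefully, close to the Pundit README:

    class ApplicationController < ActionController::Base
      include Pundit
      rescue_from Pundit::NotAuthorizedError, with: :user_not_authorized

      private

      def user_not_authorized
        flash[:alert] = "You are not authorized to perform this action."
        redirect_to(request.referrer || root_path)
      end
    end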
To retrieve a policy for a record outside the controller or
view, use Pundit.policy!(user, record) or Pundit.policy(user, record).
To customize the user that policies receive, define a method in your controller called pundit_user.
Pundit strongly encourages you to model your application in such a way that the
only context you need for authorization is a user object and a domain model that
you want to check authorization for.
Pundit does not allow you to pass additional arguments to policies
authorization is dependent
on IP address in addition to the authenticated user
create a special class which wraps up both user and IP and passes it to the policy.
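A sketch of such a wrapper:

    class UserContext
      attr_reader :user, :ip

      def initialize(user, ip)
        @user = user
        @ip   = ip
      end
    end

    class ApplicationController < ActionController::Base
      include Pundit

      def pundit_user
        UserContext.new(current_user, request.ip)
      end
    end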
set up a permitted_attributes method in your policy
policy(@post).permitted_attributes
permitted_attributes(@post)
Pundit provides a convenient helper method
permit different attributes based on the current action,
If you have defined an action-specific method on your policy for the current action, the permitted_attributes helper will call it instead of calling permitted_attributes on your controller
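A sketch of both pieces (the attribute lists are illustrative):

    class PostPolicy < ApplicationPolicy
      def permitted_attributes
        if user.admin?
          [:title, :body, :tag_list]
        else
          [:tag_list]
        end
      end
    end

    # in the controller:
    def update
      @post = Post.find(params[:id])
      if @post.update_attributes(permitted_attributes(@post))
        redirect_to @post
      else
        render :edit
      end
    end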
If you don't have an instance for the first argument to authorize, then you can pass
the class
After generating your application policy, restart the Rails server so that Rails can pick up any classes in the new app/policies/ directory.
Given there is a policy without a corresponding model / ruby class,
you can retrieve it by passing a symbol
after_action :verify_authorized
verify_authorized only aids you during development; it is not some kind of
failsafe mechanism or authorization mechanism.
Pundit will work just fine without
using verify_authorized and verify_policy_scoped
most organizations practice continuous delivery, which means that your default branch can be deployed.
Merging everything into the master branch and frequently deploying means you minimize the amount of unreleased code, which is in line with lean and continuous delivery best practices.
you can deploy to production every time you merge a feature branch.
deploy a new version by merging master into the production branch.
you can have your deployment script create a tag on each deployment.
It is common to have an environment that is automatically updated to the master branch.
This workflow, where commits only flow downstream, ensures that everything is tested in all environments.
first merge these bug fixes into master, and then cherry-pick them into the release branch.
Merging into master and then cherry-picking into release is called an “upstream first” policy
“merge request” since the final action is to merge the feature branch.
“pull request” since the first manual action is to pull the feature branch
it is common to protect the long-lived branches
After you merge a feature branch, you should remove it from the source control software
When you are ready to code, create a branch for the issue from the master branch.
This branch is the place for any work related to this change.
A merge request is an online place to discuss the change and review the code.
If you open the merge request but do not assign it to anyone, it is a “Work In Progress” merge request.
Start the title of the merge request with “[WIP]” or “WIP:” to prevent it from being merged before it’s ready.
To automatically close linked issues, mention them with the words “fixes” or “closes,” for example, “fixes #14” or “closes #67.” GitLab closes these issues when the code is merged into the default branch.
If you have an issue that spans across multiple repositories, create an issue for each repository and link all issues to a parent issue.
With Git, you can use an interactive rebase (rebase -i) to squash multiple commits into one or reorder them.
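For example, assuming the upstream branch is origin/master:

    git rebase -i origin/master
    # in the editor that opens, keep "pick" on the first commit and
    # change "pick" to "squash" on the commits to fold into it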
you should never rebase commits you have pushed to a remote server.
Rebasing creates new commits for all your changes, which can cause confusion because the same change would have multiple identifiers.
if someone has already reviewed your code, rebasing makes it hard to tell what changed since the last review.
never rebase commits authored by other people.
it is a bad idea to rebase commits that you have already pushed.
always use the “no fast-forward” (--no-ff) strategy when you merge manually.
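For example, with a branch named feature-branch:

    git checkout master
    git merge --no-ff feature-branch   # always records a merge commit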
you should try to avoid merge commits in feature branches
people avoid merge commits by just using rebase to reorder their commits after the commits on the master branch.
Using rebase prevents a merge commit when merging master into your feature branch, and it creates a neat linear history.
you should never rebase commits you have pushed to a remote server
Sometimes you can reuse recorded resolutions (rerere), but merging is better since you only have to resolve conflicts once.
You should not frequently merge master into the feature branch.
The reasons to merge in master are:
utilizing new code,
resolving merge conflicts, and
updating long-running branches.
If you only need one change from master, you can often solve this by
just cherry-picking a commit.
If your feature branch has a merge conflict, creating a merge commit is a standard way of solving this.
keep your feature branches short-lived.
split your features into smaller units of work
you should try to prevent merge commits, but not eliminate them.
Your codebase should be clean, but your history should represent what actually happened.
Splitting up work into individual commits provides context for developers looking at your code later.
push your feature branch frequently, even when it is not yet ready for review.
Commit often and push frequently
A commit message should reflect your intention, not just the contents of the commit.
Testing before merging
When using GitLab flow, developers create their branches from this master branch, so it is essential that it never breaks.
Therefore, each merge request must be tested before it is accepted.
When creating a feature branch, always branch from an up-to-date master