The parameter name
or symbol to be expanded may be enclosed in braces, which
are optional but serve to protect the variable to be expanded from
characters immediately following it which could be
interpreted as part of the name.
When braces are used, the matching ending brace is the first ‘}’
not escaped by a backslash or within a quoted string, and not within an
embedded arithmetic expansion, command substitution, or parameter
expansion.
${parameter}
The value of parameter is substituted. The braces are required when
parameter is a positional parameter with more than one digit, or when
parameter is followed by a character that is not to be interpreted as
part of its name.
If the first character of parameter is an exclamation point (!),
and parameter is not a nameref,
it introduces a level of variable indirection.
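A quick illustration of both points, assuming an interactive bash shell:

```shell
#!/usr/bin/env bash
file="report"
# Without braces, bash would look up a variable named "file_v2";
# braces protect the name from the characters that follow it.
echo "${file}_v2.txt"   # report_v2.txt

# Indirection: ${!ref} expands to the value of the variable
# whose name is stored in ref.
greeting="hello"
ref="greeting"
echo "${!ref}"          # hello
```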
${parameter:-word}
If parameter is unset or null, the expansion of
word is substituted. Otherwise, the value of
parameter is substituted.
${parameter:=word}
If parameter is unset or null, the expansion of word is assigned to
parameter. The value of parameter is then substituted.
${parameter:?word}
If parameter is unset or null, the expansion of word (or a message to
that effect if word is not present) is written to the standard error
and the shell, if it is not interactive, exits. Otherwise, the value of
parameter is substituted.
${parameter:+word}
If parameter is null or unset, nothing is substituted; otherwise, the
expansion of word is substituted.
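The four operators above can be seen side by side in a small bash sketch:

```shell
#!/usr/bin/env bash
unset var
echo "${var:-default}"        # default (var itself stays unset)
echo "${var:=default}"        # default (and var is now assigned)
echo "$var"                   # default
set_var="value"
echo "${set_var:+alternate}"  # alternate (set_var is set and non-null)
# ${var:?message} would abort a non-interactive shell with "message"
# if var were unset or null, so it is not run here.
```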
${parameter:offset}
${parameter:offset:length}
Expands to up to length characters of the value of parameter, starting
at the character specified by offset. Substring expansion applied to an
associative array produces undefined results.
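A short bash example of substring expansion:

```shell
#!/usr/bin/env bash
str="Hello, World"
echo "${str:7}"      # World
echo "${str:0:5}"    # Hello
# A negative offset counts back from the end; the space before the
# minus sign keeps it from being parsed as the ${str:-word} operator.
echo "${str: -5}"    # World
```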
${parameter/pattern/string}
The pattern is expanded to produce a pattern just as in filename
expansion. Normally only the first match is replaced. If pattern begins
with ‘/’, all matches of pattern are replaced with string.
${parameter^pattern}
${parameter,pattern}
The ‘^’ operator converts lowercase letters matching pattern to
uppercase; the ‘,’ operator converts matching uppercase letters to
lowercase.
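Both substitution and case modification in a bash sketch:

```shell
#!/usr/bin/env bash
path="/usr/local/bin"
echo "${path/local/opt}"  # /usr/opt/bin   (first match only)
echo "${path//\//:}"      # :usr:local:bin (leading / makes it global)
word="hello"
echo "${word^}"           # Hello (first character to uppercase)
echo "${word^^}"          # HELLO (all characters)
echo "${word^^[lo]}"      # heLLO (only characters matching the pattern)
```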
Operators are clients of the Kubernetes API that act as controllers for
a Custom Resource.
Example uses include choosing a leader for a distributed application
without an internal member election process, or publishing a Service to
applications that don't support Kubernetes APIs to discover them.
The core of the Operator is code to tell the API server how to make
reality match the configured resources.
If you add a new SampleDB, the operator sets up PersistentVolumeClaims
to provide durable database storage, a StatefulSet to run SampleDB,
and a Job to handle initial configuration. If you delete it, the
Operator takes a snapshot, then makes sure that the StatefulSet and
Volumes are also removed.
The most common way to deploy an Operator is to add the
Custom Resource Definition and its associated Controller to your cluster.
Once you have an Operator deployed, you'd use it by adding, modifying or
deleting the kind of resource that the Operator uses.
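Day-to-day use then looks like ordinary kubectl commands against the
custom resource. A sketch, assuming the hypothetical SampleDB operator
above registered a sampledb resource kind (names and manifest file are
invented):

```shell
# List the custom resources the operator manages
kubectl get sampledbs

# Create a new database by applying a custom resource manifest;
# the operator notices it and provisions the PVCs, StatefulSet, and Job.
kubectl apply -f my-sampledb.yaml

# Deleting the resource triggers the operator's cleanup logic
# (snapshot, then removal of the StatefulSet and volumes)
kubectl delete sampledb my-sampledb
```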
"the reason to use the bar is because you're switching content via JavaScript rather than loading a new page. This makes sense since the browser's own loading indicator may not get triggered. "
"The benefits of getting to grips with Vim are immense in terms of editing speed and maintaining your "flow" when you're on a roll, whether writing code, poetry, or prose, but because the learning curve is so steep for a text editor, it's very easy to retain habits from your time learning the editor that stick with you well into mastery. Because Vim makes you so fast and fluent, it's especially hard to root these out because you might not even notice them, but it's worth it. Here I'll list some of the more common ones."
The single responsibility principle asserts that every class should have exactly one responsibility. In other words, each class should be concerned with a single, well-defined piece of functionality.
fat models are a little better than fat controllers
when every bit of functionality has been encapsulated into its own object, you find yourself repeating code a lot less.
Service objects have the benefit of concentrating the core logic of the application in a separate object, instead of scattering it around controllers and models.
Additional initialize arguments might include other context information if applicable.
And as programmers, we know that when something can go wrong, sooner or later it will!
I’ll typically create an actions folder for things like create_invoice, and folders for other service objects such as decorators, policies, and support. I also use a services folder, but I reserve it for service objects that talk to external entities, like Stripe, AWS, or geolocation services.
You can create your own actions, decorators, support objects, and services.
rails dbconsole figures out which database you're using and drops you into whichever command line interface you would use with it
The console command lets you interact with your Rails application from the command line. Under the hood, rails console uses IRB
rake about gives information about version numbers for Ruby, RubyGems, Rails, the Rails subcomponents, your application's folder, the current Rails environment name, your app's database adapter, and schema version
You can precompile the assets in app/assets using rake assets:precompile and remove those compiled assets using rake assets:clean.
rake db:version is useful when troubleshooting
The doc: namespace has the tools to generate documentation for your app, API documentation, and guides.
rake notes will search through your code for comments beginning with FIXME, OPTIMIZE or TODO.
You can also use custom annotations in your code and list them using rake notes:custom by specifying the annotation using an environment variable ANNOTATION.
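For example (SCALING is a made-up annotation name; this must be run
inside a Rails application directory):

```shell
# List FIXME, OPTIMIZE, and TODO annotations across the app
rake notes

# List only a custom annotation, e.g. comments like
#   # SCALING: this query needs an index
rake notes:custom ANNOTATION=SCALING
```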
rake routes will list all of your defined routes, which is useful for tracking down routing problems in your app, or giving you a good overview of the URLs in an app you're trying to get familiar with.
rake secret will give you a pseudo-random key to use for your session secret.
Custom rake tasks have a .rake extension and are placed in
Rails.root/lib/tasks.
rails new . --git --database=postgresql
All commands can run with -h or --help to list more information
The rails server command launches a small web server named WEBrick, which comes bundled with Ruby
rails server -e production -p 4000
You can run a server as a daemon by passing a -d option
The rails generate command uses templates to create a whole lot of things.
Using generators will save you a large amount of time by writing boilerplate code, code that is necessary for the app to work.
With a normal, plain-old Rails application, your URLs will generally follow the pattern of http://(host)/(controller)/(action), and a URL like http://(host)/(controller) will hit the index action of that controller.
A scaffold in Rails is a full set of model, database migration for that model, controller to manipulate it, views to view and manipulate the data, and a test suite for each of the above.
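For instance (Post and its fields are invented names; run inside a
Rails application directory):

```shell
# Generate the model, migration, controller, views, and tests
# for a hypothetical Post resource
rails generate scaffold Post title:string body:text

# Apply the migration the scaffold created
rake db:migrate
```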
Unit tests are code that tests and makes assertions about code.
Unit tests are your friend.
rails console --sandbox
rails db
Each task has a description, and should help you find the thing you need.
rake tmp:clear clears all three: cache, sessions, and sockets.
objects carry both persistent data and behavior which
operates on that data
Object-Relational Mapping, commonly referred to as its abbreviation ORM, is
a technique that connects the rich objects of an application to tables in
a relational database management system
Represent associations between these models
Validate models before they get persisted to the database
The idea is that if you configure your applications in the very same
way most of the time, then this should be the default way.
Rails will
pluralize your class names to find the respective database table.
use the ActiveRecord::Base.table_name= method to specify the table
name
Model Class - Singular with the first letter of each word capitalized
Foreign keys - These fields should be named following the pattern
singularized_table_name_id
Primary keys - By default, Active Record will use an integer column named
id as the table's primary key
created_at
updated_at
(table_name)_count - Used to cache the number of belonging objects on
associations.
Object Relational Mapping
Single Table Inheritance (STI)
rake db:rollback
ActiveRecord::Base.primary_key=
CRUD is an acronym for the four verbs we use to operate on data: Create,
Read, Update and Delete.
The new method will return a new object, while create will return the
object and save it to the database.
Using the new method, an object can be instantiated without being saved
user.save will commit the record to the database
update_all class method
an Active Record object can be destroyed which removes
it from the database
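As a sketch of the CRUD cycle from the shell via rails runner
(assuming an app with a User model that has a name attribute; both are
invented for the example):

```shell
# Create: instantiate and save in one step
rails runner 'User.create(name: "David")'

# Read and Update: find a record, then commit a change to it
rails runner 'u = User.find_by(name: "David"); u.update(name: "Dave")'

# Delete: remove the record from the database
rails runner 'User.find_by(name: "Dave").destroy'
```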
Validation is a very important issue to consider when persisting to a
database, so the create, save and update methods take it into account:
they return false when validation fails and do not actually perform any
operation on the database.
Each of these methods has a bang counterpart (create!, save! and
update!) that raises an exception when validation fails.
Active Record callbacks allow you to attach code to certain events in the
life-cycle of your models
Rails keeps track of which files have been committed to the database and
provides rollback features
Refer to the YAML Anchors/Aliases documentation for information about how to alias and reuse syntax to keep your .circleci/config.yml file small.
workflow orchestration with two parallel jobs
jobs run according to configured requirements, each job waiting to start until the required job finishes successfully
requires: key
fans out to run a set of acceptance test jobs in parallel, and finally fans in to run a common deploy job.
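A minimal sketch of that shape in .circleci/config.yml (the job names
are invented):

```yaml
workflows:
  version: 2
  build_accept_deploy:
    jobs:
      - build
      # Fan-out: both acceptance suites start once build succeeds
      - acceptance_a:
          requires:
            - build
      - acceptance_b:
          requires:
            - build
      # Fan-in: deploy waits for every acceptance job
      - deploy:
          requires:
            - acceptance_a
            - acceptance_b
```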
Holding a Workflow for a Manual Approval
Workflows can be configured to wait for manual approval of a job before
continuing to the next job
add a job to the jobs list with the
key type: approval
approval is a special job type that is only available to jobs under the workflows key
The name of the job to hold is arbitrary - it could be wait or pause, for example,
as long as the job has a type: approval key in it.
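A sketch of a workflow held for approval (job names other than the
type: approval key are invented):

```yaml
workflows:
  version: 2
  build-test-deploy:
    jobs:
      - build
      - test:
          requires:
            - build
      # "hold" is an arbitrary name; type: approval is what pauses
      # the workflow until someone approves the job in the app
      - hold:
          type: approval
          requires:
            - test
      - deploy:
          requires:
            - hold
```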
schedule a workflow
to run at a certain time for specific branches.
The triggers key is only added under your workflows key
using cron syntax to represent Coordinated Universal Time (UTC) for specified branches.
By default,
a workflow is triggered on every git push
the commit workflow has no triggers key
and will run on every git push
The nightly workflow has a triggers key
and will run on the specified schedule
Cron step syntax (for example, */1, */20) is not supported.
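A sketch of a scheduled workflow (branch and job names are invented):

```yaml
workflows:
  version: 2
  nightly:
    triggers:
      - schedule:
          # 00:00 UTC every day; step syntax like */20 is not supported
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - build
```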
use a context to share environment variables
use the same shared environment variables when initiated by a user who is part of the organization.
CircleCI does not run workflows for tags
unless you explicitly specify tag filters.
CircleCI branch and tag filters support
the Java variant of regex pattern matching.
Each workflow has an associated workspace which can be used to transfer files to downstream jobs as the workflow progresses.
The workspace is an additive-only store of data.
Jobs can persist data to the workspace
Downstream jobs can attach the workspace to their container filesystem.
Attaching the workspace downloads and unpacks each layer based on the ordering of the upstream jobs in the workflow graph.
Workflows that include jobs running on multiple branches may require data to be shared using workspaces
To persist data from a job and make it available to other jobs, configure the job to use the persist_to_workspace key.
Files and directories named in the paths: property of persist_to_workspace will be uploaded to the workflow’s temporary workspace relative to the directory specified with the root key.
Configure a job to get saved data by configuring the attach_workspace key.
persist_to_workspace
attach_workspace
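A sketch of the two keys working together (paths, directories, and the
build command are invented; executor configuration is omitted):

```yaml
jobs:
  build:
    steps:
      - run: make artifacts          # hypothetical build step
      - persist_to_workspace:
          root: workspace            # paths below are relative to this root
          paths:
            - artifacts
  deploy:
    steps:
      - attach_workspace:
          at: /tmp/workspace         # layers unpack here, in upstream order
```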
To rerun only a workflow’s failed jobs, click the Workflows icon in the app and select a workflow to see the status of each job, then click the Rerun button and select Rerun from failed.
if you do not see your workflows triggering, a configuration error is preventing the workflow from starting.
check your Workflows page of the CircleCI app (not the Job page)
bear in mind that the best way to configure ProxySQL is through its admin interface.
These allow you to control the list of the backend servers, how traffic is routed to them, and other important settings (such as caching, access control, etc.)
Once you've made modifications to the in-memory data structure, you must load the new configuration to the runtime, or persist the new settings to disk
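For example, after editing mysql_servers in memory you would promote
and persist the change from the admin interface (6032 is the default
admin port; the admin credentials vary by installation):

```shell
mysql -u admin -padmin -h 127.0.0.1 -P 6032 <<'SQL'
-- Make the in-memory change active
LOAD MYSQL SERVERS TO RUNTIME;
-- Persist it so it survives a restart
SAVE MYSQL SERVERS TO DISK;
SQL
```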
mysql_variables: contains global variables that control the functionality for handling the incoming MySQL traffic.
mysql_users: contains rows for the mysql_users table from the admin interface. Basically, these define the users which can connect to the proxy, and the users with which the proxy can connect to the backend servers.
mysql_servers: contains rows for the mysql_servers table from the admin interface. Basically, these define the backend servers towards which the incoming MySQL traffic is routed.
mysql_query_rules: contains rows for the mysql_query_rules table from the admin interface. Basically, these define the rules used to classify and route the incoming MySQL traffic, according to various criteria (patterns matched, user used to run the query, etc.).