A replica set in MongoDB is a group of mongod processes
that maintain the same data set.
Replica sets provide redundancy and
high availability, and are the basis for all production
deployments.
With
multiple copies of data on different database servers, replication
provides a level of fault tolerance against the loss of a single
database server.
Replication can also provide increased read capacity, as clients can send read operations to different servers.
A replica set contains several data-bearing nodes and optionally one arbiter node. Of the data-bearing nodes, one and only one member is deemed the primary node, while the other nodes are deemed secondary nodes.
A replica set can have only one primary capable of
confirming writes with { w: "majority" }
write concern; although in some circumstances, another mongod instance
may transiently believe itself to also be primary.
The secondaries replicate the primary’s oplog and apply the operations to their data sets such that the secondaries’ data sets reflect the primary’s data set.
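As a minimal sketch (the host names, the replica set name rs0, and the products collection are assumptions), a client can ask that a write be acknowledged by a majority of data-bearing members:

    # the insert is acknowledged only after a majority of members have applied it
    mongosh "mongodb://host1:27017,host2:27017/test?replicaSet=rs0" --eval '
      db.products.insertOne(
        { name: "example" },
        { writeConcern: { w: "majority", wtimeout: 5000 } }
      )'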
You can add a mongod instance to a replica set as an arbiter. An arbiter participates in elections but does not hold data. An arbiter will always be an arbiter, whereas a primary may step down and become a secondary, and a secondary may become the primary during an election.
Secondaries replicate the primary’s oplog and apply the operations to their data sets asynchronously. Oplog entries that take longer than the slow operation threshold to apply are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms.
Replication lag refers to the amount of time
that it takes to copy (i.e. replicate) a write operation on the
primary to a secondary.
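One way to observe that lag (a sketch, assuming a recent shell and the same connection details as above) is the shell's replication info helper:

    # prints, for each secondary, how far it is behind the primary's oplog
    mongosh "mongodb://host1:27017/?replicaSet=rs0" --eval 'rs.printSecondaryReplicationInfo()'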
When a primary does not communicate with the other members of the set
for more than the configured electionTimeoutMillis period
(10 seconds by default), an eligible secondary calls for an election
to nominate itself as the new primary.
The replica set cannot process write operations
until the election completes successfully.
The median time before a cluster elects a new primary should not
typically exceed 12 seconds, assuming default replica
configuration settings.
Factors such as network latency may extend the time required
for replica set elections to complete, which in turn affects the amount
of time your cluster may operate without a primary.
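As a hedged sketch (the host name and replica set name are assumptions), the timeout lives in the replica set configuration and can be inspected or changed from the shell:

    # connect to the primary, read the current election timeout, then lower it to 5 seconds
    mongosh "mongodb://host1:27017/?replicaSet=rs0" --eval '
      const cfg = rs.conf();
      print(cfg.settings.electionTimeoutMillis);   // 10000 by default
      cfg.settings.electionTimeoutMillis = 5000;
      rs.reconfig(cfg);'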
Your application connection logic should include tolerance for automatic
failovers and the subsequent elections.
MongoDB drivers can detect the loss of the primary and automatically retry certain write operations a single time, providing additional built-in handling of automatic failovers and elections.
By default, clients read from the primary; however, clients can specify a read preference to send read operations to secondaries.
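Both behaviors can be made explicit in the connection string; this is a sketch with assumed host names and database:

    # retryWrites lets the driver retry eligible writes once after a failover;
    # secondaryPreferred sends reads to a secondary when one is available
    mongosh "mongodb://host1:27017,host2:27017,host3:27017/test?replicaSet=rs0&retryWrites=true&readPreference=secondaryPreferred"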
Maven is Java’s build and dependency management tool; an equivalent in other languages would be JavaScript’s npm, Ruby’s gems, or PHP’s Composer. Maven expects a certain directory structure for your Java source code to live in, and when you later do a mvn clean install, the whole compilation and packaging work will be done for you.
Any directory that contains a pom.xml file is a valid Maven project, and a pom.xml file contains everything needed to describe your Java project. Java source code is meant to live in the src/main/java folder. Maven will put compiled Java classes into the target/classes folder, and it will also build a .jar or .war file, depending on your project, that lives in the target folder.
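A sketch of that layout, using assumed coordinates com.example:my-app:

    # minimal Maven project skeleton; pom.xml sits at the project root
    mkdir -p my-app/src/main/java/com/example \
             my-app/src/test/java/com/example
    # after a build, compiled classes land in my-app/target/classes and the
    # packaged artifact (e.g. my-app-1.0-SNAPSHOT.jar) in my-app/target/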
Maven has the concept of a build lifecycle, which is made up of different phases. Because clean is not part of Maven’s default lifecycle, you end up with commands like mvn clean install or mvn clean package: install or package will trigger all preceding phases, but you need to specify clean in addition. Maven will always download your project dependencies into your local Maven repository (in your user’s home directory: ~/.m2/) first and then reference them for your build.
The phases and commands you will use most often are:
clean: deletes the /target folder.
package: converts your .java source code into a .jar/.war file and puts it into the /target folder.
install: first does a package(!), then takes that .jar/.war file and puts it into your local Maven repository, which lives in ~/.m2/repository.
Hence the two everyday commands are mvn clean package and mvn clean install.
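For example (still assuming the coordinates com.example:my-app:1.0-SNAPSHOT), you can watch the artifact land in the local repository:

    mvn clean install
    # the installed artifact now lives in the local repository:
    ls ~/.m2/repository/com/example/my-app/1.0-SNAPSHOT/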
Calling mvn install would be enough if Maven were smart enough to do reliable, incremental builds, i.e. figuring out which Java source files/modules changed and compiling only those. Because it is not, developers got it ingrained to always call mvn clean install (even though this increases build time a lot in bigger projects).
Finally, you sometimes want to make sure that Maven always tries to download the latest snapshot dependency versions.
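One way to force that is Maven's update flag:

    # -U (--update-snapshots) makes Maven check remote repositories for updated snapshot versions
    mvn clean install -U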
By default, rootless Podman runs as root within the container. This means the processes in the container have the default list of namespaced capabilities, which allow the processes to act like root inside of the user namespace. The problem in this example is that the directory is owned by UID 26, but UID 26 is not mapped into the container and is not the same UID that PostgreSQL runs with while in the container. Podman launches the container inside of a user namespace that is mapped with the range of UIDs defined for the user in /etc/subuid and /etc/subgid. The easy solution to this problem is to chown the html directory to match the UID that PostgreSQL runs with inside of the container. To do that, use the podman unshare command, which drops you into the same user namespace that rootless Podman uses.
This setup also means that the processes inside of the container are running as the user’s UID. If the container process escaped the container, the process would have full access to files in your home directory based on UID separation.
SELinux would still block the access, but I have heard that some people disable SELinux.
If you run the processes within the container as a different non-root UID, however, then those processes will run as that UID. If they escape the container, they would only have world access to content in your home directory.
To give such a process access to your content, run a podman unshare command to change the ownership, or set up the directories’ group ownership as owned by your UID (root inside of the container).
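A sketch of both options, assuming PostgreSQL runs as UID 26 inside the container, the host directory is ./html, and <image> stands in for whatever image you run:

    # option 1: enter the rootless user namespace and chown the directory to the
    # container UID (26 here), which maps to one of your /etc/subuid range IDs on the host
    podman unshare chown 26:26 ./html

    # option 2: run the container process as a non-root UID of your choosing
    podman run --user 1001:1001 -v "$PWD/html":/var/www/html:Z <image>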
Running containers as non-root should always be your top priority for security reasons.
Researching issues felt like bouncing a rubber ball between teams: the root cause was hard to catch, and harder yet to stop from bouncing from one team to another.
In the past, Edge Engineering had ops-focused teams and SRE specialists who owned the deploy+operate+support parts of the software life cycle. Devs could push code themselves when needed, and were also responsible for off-hours production issues and support requests.
What were we trying to accomplish and why weren’t we being successful?
These specialized roles create efficiencies within each segment while potentially creating inefficiencies across the entire life cycle.
Grouping differing specialists together into one team can reduce silos, but having different people do each role adds communication overhead, introduces bottlenecks, and inhibits the effectiveness of feedback loops.
Guided by devops principles, the team that develops a system should also be responsible for operating and supporting that system. Each development team owns deployment issues, performance bugs, capacity planning, alerting gaps, partner support, and so on.
Centralized teams act as force multipliers by turning their specialized knowledge into reusable building blocks.
Communication and alignment are the keys to success.
Full cycle developers are expected to be knowledgeable and effective in all areas of the software life cycle, which means ramping up on areas they haven’t focused on before. We run dev bootcamps and other forms of ongoing training to impart this knowledge and build up these skills.
A full cycle developer thinks and acts like an SWE, SDET, and SRE, asking questions like “how can I automate what is needed to operate this system?” and “what self-service tool will enable my partners to answer their questions without needing me to be involved?” At times they create software that solves business problems, at other times they write test cases for that software, and still other times they automate operational aspects of that system.
This model creates the need for continuous delivery pipelines, monitoring/observability, and so on. Tooling and automation help to scale expertise, but no tool will solve every problem in the developer productivity and operations space.
A git rebase copies the commits from the current branch, and puts these copied commits on top of the specified branch.
The branch that we're rebasing always has the latest changes that we want to keep!
A git rebase changes the history of the project as new hashes are created for the copied commits!
Rebasing is great whenever you're working on a feature branch, and the master branch has been updated.
An interactive rebase can also be useful on the branch you're currently working on, and want to modify some commits.
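For example (branch names are the usual assumptions):

    # replay the feature branch's commits on top of the updated master
    git checkout feature
    git rebase master

    # interactively reorder, squash, or reword the last three commits on the current branch
    git rebase -i HEAD~3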
A git reset gets rid of all the current staged files and gives us control over where HEAD should point to.
A soft reset moves HEAD to the specified commit (or the index of the commit compared to HEAD) without getting rid of the changes introduced by the later commits, which remain staged. A hard reset instead resets Git’s state back to where it was on the specified commit: this even includes discarding the changes in your working directory and staged files!
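A short sketch of both flavors:

    # soft reset: move HEAD back one commit, keeping the undone changes staged
    git reset --soft HEAD~1

    # hard reset: move HEAD back one commit and discard staged and working-directory changes
    git reset --hard HEAD~1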
By reverting a certain commit, we create a new commit that contains the reverted changes!
Performing a git revert is very useful in order to undo a certain commit, without modifying the history of the branch.
By cherry-picking a commit, we create a new commit on our active branch that contains the changes that were introduced by the cherry-picked commit.
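For example (the commit hashes are placeholders):

    # create a new commit that undoes the changes introduced by abc1234
    git revert abc1234

    # copy the changes introduced by def5678 onto the current branch as a new commit
    git cherry-pick def5678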
A git fetch simply downloads new data from the remote; it doesn’t affect your local branch in any way.
A git pull is actually two commands in one: a git fetch, and a git merge
git reflog is a very useful command in order to show a log of all the actions that have been taken: merges, resets, reverts, and so on.
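For example:

    git fetch origin         # download new data from the remote without touching local branches
    git pull origin master   # roughly: git fetch origin && git merge origin/master
    git reflog               # show where HEAD has pointed: commits, resets, merges, and so on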
With respect to permissions, all users on the system except those with a user role of No Access have read access to objects in partition Common, and by default, partition Common is their current partition.
The current partition is the specific partition to which the system is currently set for a logged-in user.
A partition access assignment gives a user some level of access to the specified partition. However, assigning partition access to a user does not necessarily give the user full access to all objects in the partition.
User account objects also reside in partitions. When you first install the BIG-IP system, every existing user account (root and admin) resides in partition Common. Note, however, that the partition in which a user account object resides does not affect the partition or partitions to which that user is granted access to manage other BIG-IP objects.
When an object references another object, the referenced object must reside either in the same partition as the object that is referencing it, or in partition Common.
Because AS3 manages topology records globally in /Common, it is required that records only be managed through AS3, as it will treat the records declaratively.
If a record is added outside of AS3, it will be removed if it is not included in the next AS3 declaration for topology records (AS3 completely overwrites non-AS3 topologies when a declaration is submitted).
Also note that using AS3 to delete a tenant (for example, sending DELETE to the /declare/<TENANT> endpoint) that contains GSLB topologies will completely remove ALL GSLB topologies from the BIG-IP.
When posting a large declaration (hundreds of application services in a single declaration), you may experience a 500 error stating that the save sys config operation failed.
Even if you have asynchronous mode set to false, after 45 seconds AS3 sets asynchronous mode to true (API swap), and returns an async response.
When creating a new tenant using AS3, it must not use the same name as a
partition you separately create on the target BIG-IP system.
If you use the
same name and then post the declaration, AS3 overwrites (or removes) the
existing partition completely, including all configuration objects in that
partition.
If you use AS3 to create a tenant (which creates a BIG-IP partition), manually adding configuration objects to the partition created by AS3 can have unexpected results. For example, if an AS3-created partition contains one virtual server and you manually add a second, then when you delete the tenant using AS3, the system deletes both virtual servers.
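As a sketch (the BIG-IP address, credentials, and tenant name are placeholders), deleting a tenant through AS3 removes the partition and everything in it:

    curl -sku admin:admin -X DELETE \
      "https://<big-ip>/mgmt/shared/appsvcs/declare/ExampleTenant"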
if a Firewall_Address_List contains zero addresses, a dummy IPv6 address of ::1:5ee:bad:c0de is added in order to maintain a valid Firewall_Address_List. If an address is added to the list, the dummy address is removed.
Use /mgmt/shared/appsvcs/declare?async=true if you have a particularly large declaration which will take a long time to process.
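A hedged example of such an asynchronous POST (address, credentials, and the declaration file are placeholders):

    # returns quickly with a task record instead of holding the connection open
    curl -sku admin:admin -X POST \
      -H "Content-Type: application/json" \
      -d @declaration.json \
      "https://<big-ip>/mgmt/shared/appsvcs/declare?async=true"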
For especially large deployments, we also recommend reviewing the Sizing BIG-IP Virtual Editions section (page 7) of Deploying BIG-IP VEs in a Hyper-Converged Infrastructure.
To test whether your system has AS3 installed or not, use GET with the /mgmt/shared/appsvcs/info URI.
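For example (address and credentials are placeholders):

    # returns AS3 version information if AS3 is installed
    curl -sku admin:admin "https://<big-ip>/mgmt/shared/appsvcs/info"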
You may find it more convenient to put multi-line texts such as iRules into
AS3 declarations by first encoding them in Base64.
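A sketch using GNU coreutils (the iRule file name is an assumption; the flag differs on macOS):

    # -w 0 disables line wrapping so the output can be pasted into a JSON string value
    base64 -w 0 my_irule.tcl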
No matter your BIG-IP user account name, audit logs show all messages from admin and not the specific user name.
In default usage, terraform init
downloads and installs the plugins for any providers used in the configuration
automatically, placing them in a subdirectory of the .terraform directory.
This allows each configuration to potentially use different versions of plugins.
In automation environments, it can be desirable to disable this behavior
and instead provide a fixed set of plugins already installed on the system
where Terraform is running. This then avoids the overhead of re-downloading
the plugins on each execution.
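As a sketch, init can be pointed at a pre-populated plugin directory (the path is an assumption):

    # use only the plugins installed at the given path instead of downloading them
    terraform init -input=false -plugin-dir=/usr/lib/custom-terraform-plugins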
In many automation setups there is a desire for an interactive approval step between plan and apply. To support this, the automated workflow runs:
terraform init -input=false to initialize the working directory.
terraform plan -out=tfplan -input=false to create a plan and save it to the local file tfplan.
terraform apply -input=false tfplan to apply the plan stored in the file tfplan.
When the environment variable TF_IN_AUTOMATION is set to any non-empty value, Terraform makes some minor adjustments to its output to de-emphasize specific commands to run.
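Put together, a CI job might run something like this sketch:

    # signal to Terraform that no human is watching the output
    export TF_IN_AUTOMATION=true

    terraform init -input=false
    terraform plan -out=tfplan -input=false
    # ... pause here for out-of-band approval of the saved plan ...
    terraform apply -input=false tfplan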
In some automation environments it can be difficult or impossible to ensure that the plan and apply subcommands are run on the same machine, in the same directory, with all of the same files present. It is also best to allow only one plan to be outstanding at a time, forcing plans to be approved (or dismissed) in sequence.
The -auto-approve option tells Terraform not to require interactive approval of the plan before applying it.
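For fully unattended runs, plan and apply can be collapsed into a single step:

    # apply the current configuration without a saved plan and without interactive approval
    terraform apply -input=false -auto-approve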
To run apply on a machine other than the one that ran plan, archive the entire working directory after the plan step, then obtain that archive on the apply machine and extract it at the same absolute path. This re-creates everything that was present after plan, avoiding strange issues where local files were created during the plan step.
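A minimal sketch, assuming the working directory is /workspace/app and the archive is handed between CI stages:

    # on the plan machine, after terraform plan: capture the whole working directory
    tar -czf plan-stage.tar.gz -C /workspace/app .

    # on the apply machine: restore it at the same absolute path, then apply
    mkdir -p /workspace/app
    tar -xzf plan-stage.tar.gz -C /workspace/app
    cd /workspace/app && terraform apply -input=false tfplan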
"In default usage, terraform init downloads and installs the plugins for any providers used in the configuration automatically, placing them in a subdirectory of the .terraform directory. "